Updates from: 06/30/2021 03:07:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
Previously updated : 06/11/2021 Last updated : 06/28/2021
The following process is used when a user signs in with a FIDO2 security key:
While there are many keys that are FIDO2 certified by the FIDO Alliance, Microsoft requires vendors to implement certain optional features and extensions of the FIDO2 Client-to-Authenticator Protocol (CTAP) specification to ensure maximum security and the best experience.
-A security key **must** implement the following features and extensions from the FIDO2 CTAP protocol to be Microsoft-compatible. Authenticator vendor must implement both FIDO_2_0 and FIDO_2_1 version of the spec. For more information, see the [Client to Authenticator Protocol](https://fidoalliance.org/specs/fido-v2.1-rd-20210309/fido-client-to-authenticator-protocol-v2.1-rd-20210309.html).
+A security key MUST implement the following features and extensions from the FIDO2 CTAP protocol to be Microsoft-compatible. Authenticator vendors must implement both the FIDO_2_0 and FIDO_2_1 versions of the spec. For more information, see the [Client to Authenticator Protocol](https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-client-to-authenticator-protocol-v2.1-ps-20210615.html).
| # | Feature / Extension | Why is this feature or extension required? |
| --- | --- | --- |
| 1 | Resident/Discoverable key | This feature enables the security key to be portable, where your credential is stored on the security key and is discoverable, which makes usernameless flows possible. |
-| 2 | Client pin | This feature enables you to protect your credentials with a second factor and applies to security keys that do not have a user interface.<br>Both [PIN protocol 1](https://fidoalliance.org/specs/fido-v2.1-rd-20210309/fido-client-to-authenticator-protocol-v2.1-rd-20210309.html#pinProto1) and [PIN protocol 2](https://fidoalliance.org/specs/fido-v2.1-rd-20210309/fido-client-to-authenticator-protocol-v2.1-rd-20210309.html#pinProto2) **must** be implemented. |
+| 2 | Client pin | This feature enables you to protect your credentials with a second factor and applies to security keys that do not have a user interface.<br>Both [PIN protocol 1](https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-client-to-authenticator-protocol-v2.1-ps-20210615.html#pinProto1) and [PIN protocol 2](https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-client-to-authenticator-protocol-v2.1-ps-20210615.html#pinProto2) MUST be implemented. |
| 3 | hmac-secret | This extension ensures you can sign in to your device when it's offline or in airplane mode. |
| 4 | Multiple accounts per RP | This feature ensures you can use the same security key across multiple services, such as Microsoft Account and Azure Active Directory. |
-| 5 | Credential Management | This feature allows users to manage their credentials on security keys on platforms and applies to security keys that do not have this capability built-in. |
-| 6 | Bio Enrollment | This feature allows users to enroll their biometrics on their authenticators and applies to security keys that do not have this capability built in.<br> Authenticator **must** implement [authenicatorBioEnrollment](https://fidoalliance.org/specs/fido-v2.1-rd-20210309/fido-client-to-authenticator-protocol-v2.1-rd-20210309.html#authenticatorBioEnrollment) command for this feature. Authenticator vendors are highly encouraged to implement [userVerificationMgmtPreview](https://fidoalliance.org/specs/fido-v2.1-rd-20210309/fido-client-to-authenticator-protocol-v2.1-rd-20210309.html#prototypeAuthenticatorBioEnrollment) command also so that users can enroll bio templates it on all previous OS versions. |
+| 5 | Credential Management | This feature allows users to manage their credentials on security keys through the platform, and applies to security keys that do not have this capability built in.<br>Authenticator MUST implement [authenticatorCredentialManagement](https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-client-to-authenticator-protocol-v2.1-ps-20210615.html#authenticatorCredentialManagement) and [credentialMgmtPreview](https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-client-to-authenticator-protocol-v2.1-ps-20210615.html#prototypeAuthenticatorCredentialManagement) commands for this feature. |
+| 6 | Bio Enrollment | This feature allows users to enroll their biometrics on their authenticators, and applies to security keys that do not have this capability built in.<br>Authenticator MUST implement [authenticatorBioEnrollment](https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-client-to-authenticator-protocol-v2.1-ps-20210615.html#authenticatorBioEnrollment) and [userVerificationMgmtPreview](https://fidoalliance.org/specs/fido-v2.1-ps-20210615/fido-client-to-authenticator-protocol-v2.1-ps-20210615.html#prototypeAuthenticatorBioEnrollment) commands for this feature. |
| 7 | pinUvAuthToken | This feature allows the platform to obtain auth tokens by using a PIN or biometric match, which improves the user experience when multiple credentials are present on the authenticator. |
| 8 | forcePinChange | This feature allows enterprises to ask users to change their PIN in remote deployments. |
-| 9 | setMinPINLength | This feature allows enterprises to have custom minimum PIN length for their users. Authenticator MUST implement minPinLength extension also. |
-| 10 | alwaysUV | This feature allows enterprises or users to always require user verification to use this security key. Authenticator MUST implement toggleAlwaysUv subcommand. |
-| 11 | credBlob | This extension allows websites to store small information along with the security key. |
+| 9 | setMinPINLength | This feature allows enterprises to set a custom minimum PIN length for their users. Authenticator MUST also implement the minPinLength extension and have a maxRPIDsForSetMinPINLength value of at least 1. |
+| 10 | alwaysUV | This feature allows enterprises or users to always require user verification to use this security key. Authenticator MUST implement the toggleAlwaysUv subcommand. It is up to the vendor to decide the default value of alwaysUV. Given the current state of RP adoption and OS versions, the recommended value is true for biometric authenticators and false for non-biometric authenticators. |
+| 11 | credBlob | This extension allows websites to store a small amount of information on the security key. maxCredBlobLength MUST be at least 32 bytes. |
+| 12 | largeBlob | This extension allows websites to store larger data, such as certificates, on the security key. maxSerializedLargeBlobArray MUST be at least 1024 bytes. |
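On the tenant side, these keys can only be registered once an admin enables the FIDO2 security key authentication method. A minimal sketch, assuming the Microsoft Graph beta fido2 authentication method configuration endpoint and the Microsoft.Graph PowerShell module; the property values shown are illustrative, not prescriptive:

```powershell
# Requires the Microsoft.Graph PowerShell module and admin consent for the scope below.
Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

# Enable the FIDO2 security key method tenant-wide (beta endpoint; values illustrative).
$body = @{
    "@odata.type"                    = "#microsoft.graph.fido2AuthenticationMethodConfiguration"
    state                            = "enabled"
    isSelfServiceRegistrationAllowed = $true
    isAttestationEnforced            = $true
} | ConvertTo-Json

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/fido2" `
    -Body $body -ContentType "application/json"
```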
+
### FIDO2 security key providers
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-policy.md
You can also use PowerShell cmdlets to remove the never-expires configuration or
This guidance applies to other providers, such as Intune and Microsoft 365, which also rely on Azure AD for identity and directory services. Password expiration is the only part of the policy that can be changed.

> [!NOTE]
-> By default only passwords for user accounts that aren't synchronized through Azure AD Connect can be configured to not expire. For more information about directory synchronization, see [Connect AD with Azure AD](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-password-hash-synchronization#password-expiration-policy).
+> By default only passwords for user accounts that aren't synchronized through Azure AD Connect can be configured to not expire. For more information about directory synchronization, see [Connect AD with Azure AD](../hybrid/how-to-connect-password-hash-synchronization.md#password-expiration-policy).
### Set or check the password policies by using PowerShell
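As a quick sketch of what this looks like with the AzureAD PowerShell module (the UPN is a placeholder):

```powershell
# Requires the AzureAD module.
Connect-AzureAD

# Check the current password policy for a user.
Get-AzureADUser -ObjectId "user@contoso.com" |
    Select-Object UserPrincipalName, PasswordPolicies

# Set the password to never expire.
Set-AzureADUser -ObjectId "user@contoso.com" -PasswordPolicies "DisablePasswordExpiration"

# Remove the never-expires configuration.
Set-AzureADUser -ObjectId "user@contoso.com" -PasswordPolicies "None"
```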
active-directory Howto Registration Mfa Sspr Combined https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-registration-mfa-sspr-combined.md
Previously updated : 01/27/2021 Last updated : 06/28/2021
To enable combined registration, complete these steps:
If you have configured the *Site to Zone Assignment List* in Internet Explorer, the following sites have to be in the same zone:

* *[https://login.microsoftonline.com](https://login.microsoftonline.com)*
+* *[https://login.windows.net](https://login.windows.net)*
* *[https://mysignins.microsoft.com](https://mysignins.microsoft.com)*
* *[https://account.activedirectory.windowsazure.com](https://account.activedirectory.windowsazure.com)*
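If you manage this list locally rather than through Group Policy Management, the sketch below adds the sites to the Local intranet zone. The ZoneMapKey registry location is assumed to be where the *Site to Zone Assignment List* policy stores its entries; verify it against your environment before deploying:

```powershell
# Zone 1 = Local intranet. Assumed registry location used by the
# "Site to Zone Assignment List" policy; verify before deploying.
$zoneMap = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMapKey"
New-Item -Path $zoneMap -Force | Out-Null

$sites = @(
    "https://login.microsoftonline.com",
    "https://login.windows.net",
    "https://mysignins.microsoft.com",
    "https://account.activedirectory.windowsazure.com"
)

foreach ($site in $sites) {
    # The value name is the URL; the string data is the zone number.
    New-ItemProperty -Path $zoneMap -Name $site -Value "1" -PropertyType String -Force | Out-Null
}
```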
active-directory Tutorial Enable Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/tutorial-enable-azure-mfa.md
Previously updated : 07/13/2020 Last updated : 06/29/2021
To complete this tutorial, you need the following resources and privileges:
* A working Azure AD tenant with at least an Azure AD Premium P1 or trial license enabled.
  * If needed, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An account with *global administrator* privileges.
+* An account with *global administrator* privileges. Some MFA settings can also be managed by an Authentication Policy Administrator. For more information, see [Authentication Policy Administrator](../roles/permissions-reference.md#authentication-policy-administrator).
* A non-administrator user with a password you know, such as *testuser*. You test the end-user Azure AD Multi-Factor Authentication experience using this account in this tutorial.
  * If you need to create a user, see [Quickstart: Add new users to Azure Active Directory](../fundamentals/add-users-azure-active-directory.md).
* A group that the non-administrator user is a member of, such as *MFA-Test-Group*. You enable Azure AD Multi-Factor Authentication for this group in this tutorial.
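If you prefer to script the test user and group, here's a minimal sketch with the AzureAD module; the UPN, domain, and password are placeholders:

```powershell
# Requires the AzureAD module; replace the placeholder values before running.
Connect-AzureAD

$pwProfile = New-Object Microsoft.Open.AzureAD.Model.PasswordProfile
$pwProfile.Password = "<strong-placeholder-password>"

$user = New-AzureADUser -DisplayName "testuser" `
    -UserPrincipalName "testuser@contoso.onmicrosoft.com" `
    -MailNickName "testuser" -AccountEnabled $true -PasswordProfile $pwProfile

$group = New-AzureADGroup -DisplayName "MFA-Test-Group" `
    -MailEnabled $false -SecurityEnabled $true -MailNickName "MFA-Test-Group"

# Make the test user a member of the group targeted by the MFA policy.
Add-AzureADGroupMember -ObjectId $group.ObjectId -RefObjectId $user.ObjectId
```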
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Administrators can assign a Conditional Access policy to the following cloud app
- Virtual Private Network (VPN)
- Windows Defender ATP
-Applications that are available to Conditional Access have gone through an onboarding and validation process. This list doesn't include all Microsoft apps, as many are backend services and not meant to have policy directly applied to them. If you're looking for an application that is missing, you can contact the specific application team or make a request on [UserVoice](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=167259).
+> [!IMPORTANT]
+> Applications that are available to Conditional Access have gone through an onboarding and validation process. This list doesn't include all Microsoft apps, as many are backend services and not meant to have policy directly applied to them. If you're looking for an application that is missing, you can contact the specific application team or make a request on [UserVoice](https://feedback.azure.com/forums/169401-azure-active-directory?category_id=167259).
### Office 365
For more information about authentication context use in applications, see the f
- [Conditional Access: Conditions](concept-conditional-access-conditions.md)
- [Conditional Access common policies](concept-conditional-access-policy-common.md)
-- [Client application dependencies](service-dependencies.md)
+- [Client application dependencies](service-dependencies.md)
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/assign-local-admin.md
This article explains how the local administrators membership update works and h
When you connect a Windows device with Azure AD using an Azure AD join, Azure AD adds the following security principals to the local administrators group on the device:

- The Azure AD global administrator role
-- The Azure AD device administrator role
+- The Azure AD joined device local administrator role
- The user performing the Azure AD join
-By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD device administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been *only* assigned the device administrator role to manage a device.
+By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been assigned *only* the device administrator role to manage a device.
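A sketch of adding a user to that role with the AzureAD module; the role display name is an assumption based on the portal, and the role must already be activated in the tenant:

```powershell
# Requires the AzureAD module; the UPN is a placeholder.
Connect-AzureAD

# Display name assumed; in older tenants this role may appear as "Device Administrators".
$role = Get-AzureADDirectoryRole |
    Where-Object { $_.DisplayName -eq "Azure AD Joined Device Local Administrator" }

$user = Get-AzureADUser -ObjectId "deviceadmin@contoso.com"

# Members of this role become local administrators on all Azure AD joined devices.
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId
```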
## Manage the global administrators role
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new.md
For more information about how to better secure your organization by using autom
In December 2020, we added the following 18 new applications with Federation support to our App gallery:
-[AwareGo](../saas-apps/awarego-tutorial.md), [HowNow SSO](https://gethownow.com/), [ZyLAB ONE Legal Hold](https://www.zylab.com/en/product/legal-hold), [Guider](http://www.guider-ai.com/), [Softcrisis](https://www.softcrisis.se/sv/), [Pims 365](http://www.omega365.com/pims), [InformaCast](../saas-apps/informacast-tutorial.md), [RetrieverMediaDatabase](../saas-apps/retrievermediadatabase-tutorial.md), [vonage](../saas-apps/vonage-tutorial.md), [Count Me In - Operations Dashboard](../saas-apps/count-me-in-operations-dashboard-tutorial.md), [ProProfs Knowledge Base](../saas-apps/proprofs-knowledge-base-tutorial.md), [RightCrowd Workforce Management](../saas-apps/rightcrowd-workforce-management-tutorial.md), [JLL TRIRIGA](../saas-apps/jll-tririga-tutorial.md), [Shutterstock](../saas-apps/shutterstock-tutorial.md), [FortiWeb Web Application Firewall](../saas-apps/linkedin-talent-solutions-tutorial.md), [LinkedIn Talent Solutions](../saas-apps/linkedin-talent-solutions-tutorial.md), [Equinix Federation App](../saas-apps/equinix-federation-app-tutorial.md), [KFAdvance](../saas-apps/kfadvance-tutorial.md)
+[AwareGo](../saas-apps/awarego-tutorial.md), [HowNow SSO](https://gethownow.com/), [ZyLAB ONE Legal Hold](https://www.zylab.com/en/product/legal-hold), [Guider](http://www.guider-ai.com/), [Softcrisis](https://www.softcrisis.se/sv/), [Pims 365](https://omega.pims365.no/), [InformaCast](../saas-apps/informacast-tutorial.md), [RetrieverMediaDatabase](../saas-apps/retrievermediadatabase-tutorial.md), [vonage](../saas-apps/vonage-tutorial.md), [Count Me In - Operations Dashboard](../saas-apps/count-me-in-operations-dashboard-tutorial.md), [ProProfs Knowledge Base](../saas-apps/proprofs-knowledge-base-tutorial.md), [RightCrowd Workforce Management](../saas-apps/rightcrowd-workforce-management-tutorial.md), [JLL TRIRIGA](../saas-apps/jll-tririga-tutorial.md), [Shutterstock](../saas-apps/shutterstock-tutorial.md), [FortiWeb Web Application Firewall](../saas-apps/linkedin-talent-solutions-tutorial.md), [LinkedIn Talent Solutions](../saas-apps/linkedin-talent-solutions-tutorial.md), [Equinix Federation App](../saas-apps/equinix-federation-app-tutorial.md), [KFAdvance](../saas-apps/kfadvance-tutorial.md)
You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Managed identity type | All Generally Available<br>Global Azure Regions | Azure
For more information, see [Use managed identities with Azure Machine Learning](../../machine-learning/how-to-use-managed-identities.md).
+### Azure Media Services
+
+| Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
+| | :-: | :-: | :-: | :-: |
+| System assigned | ![Available][check] | ![Available][check] | Not Available | ![Available][check] |
+| User assigned | Not Available | Not Available | Not Available | Not Available |
+
+Refer to the following list to configure managed identity for Azure Media Services (in regions where available):
+
+- [Azure CLI](../../media-services/latest/security-access-storage-managed-identity-cli-tutorial.md)
+
### Azure Policy

|Managed identity type | All Generally Available<br>Global Azure Regions | Azure Government | Azure Germany | Azure China 21Vianet |
active-directory Tutorial Linux Vm Access Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-access-key.md
Azure Storage does not natively support Azure AD authentication. However, you c
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).

>[!NOTE]
-> For more information on the various roles that you can use to grant permissions to storage review [Authorize access to blobs and queues using Azure Active Directory](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights)
+> For more information on the various roles that you can use to grant permissions to storage, review [Authorize access to blobs and queues using Azure Active Directory](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights).
## Get an access token using the VM's identity and use it to call Azure Resource Manager
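The tutorial itself performs this call with curl on the Linux VM; the equivalent sketch in PowerShell, against the well-known Azure Instance Metadata Service (IMDS) endpoint, looks like this:

```powershell
# Run from inside the VM; IMDS is only reachable from the VM itself.
$token = (Invoke-RestMethod -Method GET -Headers @{ Metadata = "true" } -Uri (
    "http://169.254.169.254/metadata/identity/oauth2/token" +
    "?api-version=2018-02-01&resource=https://management.azure.com/")).access_token

# Use the bearer token to call Azure Resource Manager.
Invoke-RestMethod -Method GET `
    -Uri "https://management.azure.com/subscriptions?api-version=2020-01-01" `
    -Headers @{ Authorization = "Bearer $token" }
```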
active-directory Tutorial Linux Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-sas.md
Later we will upload and download a file to the new storage account. Because fil
## Grant your VM's system-assigned managed identity access to use a storage SAS
-Azure Storage does not natively support Azure AD authentication. However, you can use your VM's system-assigned managed identity to retrieve a storage SAS from Resource Manager, then use the SAS to access storage. In this step, you grant your VM's system-assigned managed identity access to your storage account SAS. Grant access by assigning the [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role to the managed-identity at the scope of the resource group that contains your storage account.
+Azure Storage natively supports Azure AD authentication. You can also use your VM's system-assigned managed identity to retrieve a storage SAS from Resource Manager, and then use the SAS to access storage. In this step, you grant your VM's system-assigned managed identity access to your storage account SAS. Grant access by assigning the [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role to the managed identity at the scope of the resource group that contains your storage account.
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).”
+For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
>[!NOTE]
-> For more information on the various roles that you can use to grant permissions to storage review [Authorize access to blobs and queues using Azure Active Directory](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights)
+> For more information on the various roles that you can use to grant permissions to storage, review [Authorize access to blobs and queues using Azure Active Directory](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights).
## Get an access token using the VM's identity and use it to call Azure Resource Manager
active-directory Tutorial Linux Vm Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage.md
Files require blob storage so you need to create a blob container in which to st
You can use the VM's managed identity to retrieve the data in the Azure storage blob. Managed identities for Azure resources can be used to authenticate to resources that support Azure AD authentication. Grant access by assigning the [storage-blob-data-reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role to the managed identity at the scope of the resource group that contains your storage account.
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).”
+For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
>[!NOTE]
> For more information on the various roles that you can use to grant permissions to storage, review [Authorize access to blobs and queues using Azure Active Directory](../../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights).
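A sketch of the same role assignment with the Az PowerShell module; the resource names are placeholders:

```powershell
# Requires the Az module; replace the placeholder names.
$principalId = (Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM").Identity.PrincipalId

# Grant the VM's system-assigned identity read access to blob data
# at the scope of the resource group containing the storage account.
New-AzRoleAssignment -ObjectId $principalId `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -ResourceGroupName "myResourceGroup"
```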
active-directory Pim How To Start Security Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-start-security-review.md
This article describes how to create one or more access reviews for privileged A
10. Use the **End** setting to specify how to end the recurring access review series. The series can end in three ways: it can run continuously and start reviews indefinitely, end by a specific date, or end after a defined number of occurrences. You, another User administrator, or another Global administrator can stop the series after creation by changing the date in **Settings** so that it ends on that date.
-11. In the **Users Scope** section, select the scope of the review. To review users and groups with access to the Azure AD role, select **Users and Groups**, or select **(Preview) Service Principals** to review the machine accounts with access to the Azure AD role.
+11. In the **Users Scope** section, select the scope of the review. To review users and groups with access to the Azure AD role, select **Users and Groups**, or select **(Preview) Service Principals** to review the machine accounts with access to the Azure AD role.
+
+ When **Users and Groups** is selected, membership of groups assigned to the role will be reviewed as part of the access review. When **Service Principals** is selected, only those with direct membership (not via nested groups) will be reviewed.
![Users scope to review role membership of](./media/pim-how-to-start-security-review/users.png)

12. Under **Review role membership**, select the privileged Azure AD roles to review.
- > [!NOTE]
- > - Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
- > - For roles with groups assigned to them, the access of each group linked with the role under review will be reviewed as a part of the access review.
- If you are creating an access review of **Azure AD roles**, the following shows an example of the Review membership list.
- > [!NOTE]
- > Selecting more than one role will create multiple access reviews. For example, selecting five roles will create five separate access reviews.
-1. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **(Preview) eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **(Preview) active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
+13. In **assignment type**, scope the review by how the principal was assigned to the role. Choose **(Preview) eligible assignments only** to review eligible assignments (regardless of activation status when the review is created) or **(Preview) active assignments only** to review active assignments. Choose **all active and eligible assignments** to review all assignments regardless of type.
![Reviewers list of assignment types](./media/pim-how-to-start-security-review/assignment-type-select.png)
-13. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
+14. In the **Reviewers** section, select one or more people to review all the users. Or you can select to have the members review their own access.
![Reviewers list of selected users or members (self)](./media/pim-how-to-start-security-review/reviewers.png)
active-directory Pim Resource Roles Start Access Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-resource-roles-start-access-review.md
The need for access to privileged Azure resource roles by employees changes over
1. Use the **End** setting to specify how to end the recurring access review series. The series can end in three ways: it can run continuously and start reviews indefinitely, end by a specific date, or end after a defined number of occurrences. You, another User administrator, or another Global administrator can stop the series after creation by changing the date in **Settings** so that it ends on that date.
-1. In the **Users** section, select the scope of the review. To review users, select **Users or select (Preview) Service Principals** to review the machine accounts with access to the Azure role.
+1. In the **Users** section, select the scope of the review. To review users, select **Users**, or select **(Preview) Service Principals** to review the machine accounts with access to the Azure role.
+
+ When **Users** is selected, membership of groups assigned to the role will be expanded to the individual members of the group. When **Service Principals** is selected, only those with direct membership (not via nested groups) will be reviewed.
![Users scope to review role membership of](./media/pim-resource-roles-start-access-review/users.png)
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-assign-roles.md
Body
## Next steps

- [Use Azure AD groups to manage role assignments](groups-concept.md)
-- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.yml)
active-directory Admin Units Faq Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/admin-units-faq-troubleshoot.md
- Title: Administrative units troubleshooting and FAQ - Azure Active Directory | Microsoft Docs
-description: Investigate administrative units to grant permissions with restricted scope in Azure Active Directory.
------- Previously updated : 11/04/2020-------
-# Azure AD administrative units: Troubleshooting and FAQ
-
-For more granular administrative control in Azure Active Directory (Azure AD), you can assign users to an Azure AD role with a scope that's limited to one or more administrative units. For sample PowerShell scripts for common tasks, see [Work with administrative units](/powershell/azure/active-directory/working-with-administrative-units).
-
-## Frequently asked questions
-
-**Q: Why am I unable to create an administrative unit?**
-
-**A:** Only a *Global Administrator* or *Privileged Role Administrator* can create an administrative unit in Azure AD. Check to ensure that the user who's trying to create the administrative unit is assigned either the *Global Administrator* or *Privileged Role Administrator* role.
-
-**Q: I added a group to an administrative unit. Why are the group members still not showing up there?**
-
-**A:** When you add a group to an administrative unit, that does not result in all the group's members being added to it. Users must be directly assigned to the administrative unit.
-
-**Q: I just added (or removed) a member of the administrative unit. Why is the member not showing up (or still showing up) on the user interface?**
-
-**A:** Sometimes, the addition or removal of one or more members of an administrative unit might take a few minutes to be reflected on the **Administrative units** pane. Alternatively, you can go directly to the associated resource's properties and see whether the action has been completed. For more information about users and groups in administrative units, see [View a list of administrative units for a user](admin-units-add-manage-users.md) and [View a list of administrative units for a group](admin-units-add-manage-groups.md).
-
-**Q: I am a delegated Password Administrator on an administrative unit. Why am I unable to reset a specific user's password?**
-
-**A:** As an administrator of an administrative unit, you can reset passwords only for users who are assigned to your administrative unit. Make sure that the user whose password reset is failing belongs to the administrative unit to which you've been assigned. If the user belongs to the same administrative unit but you still can't reset the user's password, check the roles that are assigned to the user.
-
-To prevent an elevation of privilege, an administrative unit-scoped administrator can't reset the password of a user who's assigned to a role with an organization-wide scope.
-
-**Q: Why are administrative units necessary? Couldn't we have used security groups as the way to define a scope?**
-
-**A:** Security groups have an existing purpose and authorization model. A *User Administrator*, for example, can manage membership of all security groups in the Azure AD organization. The role might use groups to manage access to applications such as Salesforce. A *User Administrator* should not be able to manage the delegation model itself, which would be the result if security groups were extended to support "resource grouping" scenarios.
-
-Administrative units, such as organizational units in Windows Server Active Directory, are intended to provide a way to scope administration of a wide range of directory objects. Security groups themselves can be members of resource scopes. Using security groups to define the set of security groups that an administrator can manage could become confusing.
-
-**Q: What does it mean to add a group to an administrative unit?**
-
-**A:** Adding a group to an administrative unit brings the group itself into the management scope of any *User Administrator* who is also scoped to that administrative unit. User administrators for the administrative unit can manage the name and membership of the group itself. It does not grant the *User Administrator* permissions to manage the users of the group (for example, to reset their passwords). To grant the *User Administrator* the ability to manage users, the users have to be direct members of the administrative unit.
-
-**Q: Can a resource (user or group) be a member of more than one administrative unit?**
-
-**A:** Yes, a resource can be a member of more than one administrative unit. The resource can be managed by all organization-wide and administrative unit-scoped administrators who have permissions over the resource.
-
-**Q: Are administrative units available in B2C organizations?**
-
-**A:** No, administrative units are not available for B2C organizations.
-
-**Q: Are nested administrative units supported?**
-
-**A:** No, nested administrative units are not supported.
-
-**Q: Are administrative units supported in PowerShell and the Graph API?**
-
-**A:** Yes. You'll find support for administrative units in [PowerShell cmdlet documentation](/powershell/module/Azuread/) and [sample scripts](/powershell/azure/active-directory/working-with-administrative-units).
-
-Find support for the [administrativeUnit resource type](/graph/api/resources/administrativeunit) in Microsoft Graph.
-
-## Next steps
-
-- [Restrict scope for roles by using administrative units](administrative-units.md)
-- [Manage administrative units](admin-units-manage.md)
active-directory Groups Assign Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-assign-role.md
POST https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
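For context, a sketch of calling this endpoint from PowerShell with the Microsoft Graph SDK; the GUIDs are placeholders:

```powershell
# Requires the Microsoft.Graph PowerShell module.
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"

# Assign a directory role to a role-assignable group (beta endpoint; placeholder GUIDs).
$body = @{
    principalId      = "<role-assignable-group-object-id>"
    roleDefinitionId = "<role-definition-id>"
    directoryScopeId = "/"   # tenant-wide scope
} | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments" `
    -Body $body -ContentType "application/json"
```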
## Next steps

- [Use Azure AD groups to manage role assignments](groups-concept.md)
-- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.yml)
active-directory Groups Create Eligible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-create-eligible.md
For this type of group, `isPublic` will always be false and `isSecurityEnabled`
- [Assign Azure AD roles to groups](groups-assign-role.md)
- [Use Azure AD groups to manage role assignments](groups-concept.md)
-- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.yml)
active-directory Groups Faq Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-faq-troubleshooting.md
- Title: Troubleshoot Azure AD roles assigned to groups - Azure Active Directory
-description: Learn some common questions and troubleshooting tips for assigning roles to groups in Azure Active Directory.
------- Previously updated : 11/05/2020-------
-# Troubleshoot Azure AD roles assigned to groups
-
-Here are some common questions and troubleshooting tips for assigning Azure Active Directory (Azure AD) roles to Azure AD groups.
-
-**Q:** I'm a Groups Administrator but I can't see the **Azure AD roles can be assigned to the group** switch.
-
-**A:** Only Privileged Role Administrators or Global Administrators can create a group that's eligible for role assignment. Only users in those roles see this control.
-
-**Q:** Who can modify the membership of groups that are assigned to Azure AD roles?
-
-**A:** By default, only Privileged Role Administrator and Global Administrator manage the membership of a role-assignable group, but you can delegate the management of role-assignable groups by adding group owners.
-
-**Q**: I am a Helpdesk Administrator in my organization but I can't update password of a user who is a Directory Readers. Why does that happen?
-
-**A**: The user might have gotten Directory Readers by way of a role-assignable group. All members and owners of a role-assignable groups are protected. Only users in the Privileged Authentication Administrator or Global Administrator roles can reset credentials for a protected user.
-
-**Q:** I can't update password of a user. They don't have any higher privileged role assigned. Why is it happening?
-
-**A:** The user could be an owner of a role-assignable group. We protect owners of role-assignable groups to avoid elevation of privilege. An example might be if a group Contoso_Security_Admins is assigned to Security Administrator role, where Bob is the group owner and Alice is Password Administrator in the organization. If this protection weren't present, Alice could reset Bob's credentials and take over his identity. After that, Alice could add herself or anyone to the group Contoso_Security_Admins group to become a Security Administrator in the organization. To find out if a user is a group owner, get the list of owned objects of that user and see if any of the groups have isAssignableToRole set to true. If yes, then that user is protected and the behavior is by design. Refer to these documentations for getting owned objects:
-- [Get-AzureADUserOwnedObject](/powershell/module/azuread/get-azureaduserownedobject)
-- [List ownedObjects](/graph/api/user-list-ownedobjects?tabs=http)
-
-**Q:** Can I create an access review on groups that can be assigned to Azure AD roles (specifically, groups with isAssignableToRole property set to true)?
-
-**A:** Yes, you can. If you are on newest version of Access Review, then your reviewers are directed to My Access by default, and only Global Administrators can create access reviews on role-assignable groups. However, if you are on the older version of Access Review, then your reviewers are directed to the Access Panel by default, and both Global Administrators and User Administrator can create access reviews on role-assignable groups. The new experience will be rolled out to all customers on July 28, 2020 but if you'd like to upgrade sooner, make a request to [Azure AD Access Reviews - Updated reviewer experience in My Access Signup](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR5dv-S62099HtxdeKIcgO-NUOFJaRDFDWUpHRk8zQ1BWVU1MMTcyQ1FFUi4u).
-
-**Q:** Can I create an access package and put groups that can be assigned to Azure AD roles in it?
-
-**A:** Yes, you can. Global Administrator and User Administrator have the power to put any group in an access package. Nothing changes for Global Administrator, but there's a slight change in User Administrator role permissions. To put a role-assignable group into an access package, you must be a User Administrator and also owner of the role-assignable group. Here's the full table showing who can create access package in Enterprise License Management:
-
-Azure AD directory role | Entitlement management role | Can add security group\* | Can add Microsoft 365 group\* | Can add app | Can add SharePoint Online site
| --- | --- | --- | --- | --- | --- |
-Global Administrator | n/a | ✔️ | ✔️ | ✔️ | ✔️
-User Administrator | n/a | ✔️ | ✔️ | ✔️
-Intune Administrator | Catalog owner | ✔️ | ✔️ | &nbsp; | &nbsp;
-Exchange Administrator | Catalog owner | &nbsp; | ✔️ | &nbsp; | &nbsp;
-Teams Administrator | Catalog owner | &nbsp; | ✔️ | &nbsp; | &nbsp;
-SharePoint Administrator | Catalog owner | &nbsp; | ✔️ | &nbsp; | ✔️
-Application Administrator | Catalog owner | &nbsp; | &nbsp; | ✔️ | &nbsp;
-Cloud Application Administrator | Catalog owner | &nbsp; | &nbsp; | ✔️ | &nbsp;
-User | Catalog owner | Only if group owner | Only if group owner | Only if app owner | &nbsp;
-
-\*Group isn't role-assignable; that is, isAssignableToRole = false. If a group is role-assignable, then the person creating the access package must also be owner of the role-assignable group.
-
-**Q:** I can't find "Remove assignment" option in "Assigned Roles". How do I delete role assignment to a user?
-
-**A:** This answer is applicable only to Azure AD Premium P1 organizations.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
-1. Select users and open a user profile.
-1. Select **Assigned roles**.
-1. Select the gear icon. A pane opens that can give this information. There's a "Remove" button beside direct assignments. To remove indirect role assignment, remove the user from the group that has been assigned the role.
-
-**Q:** How do I see all groups that are role-assignable?
-
-**A:** Follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
-1. Select **Groups** > **All groups**.
-1. Select **Add filters**.
-1. Filter to **Role assignable**.
-
-**Q:** How do I know which role are assigned to a principal directly and indirectly?
-
-**A:** Follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and open **Azure Active Directory**.
-1. Select users and open a user profile.
-1. Select **Assigned roles**, and then:
-
- - In Azure AD Premium P1 licensed organizations: Select the gear icon. A pane opens that can give this information.
- - In Azure AD Premium P2 licensed organizations: You'll find direct and inherited license information in the **Membership** column.
-
-**Q:** Why do we enforce creating a new group for assigning it to role?
-
-**A:** If you assign an existing group to a role, the existing group owner could add other members to this group without the new members realizing that they'll have the role. Because role-assignable groups are powerful, we're putting lots of restrictions to protect them. You don't want changes to the group that would be surprising to the person managing the group.
-
-## Next steps
-
-- [Use Azure AD groups to manage role assignments](groups-concept.md)
-- [Create a role-assignable group](groups-create-eligible.md)
active-directory Groups Pim Eligible https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-pim-eligible.md
https://graph.microsoft.com/beta/privilegedAccess/aadroles/roleAssignmentRequest
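For context, a hedged sketch of posting to this endpoint from PowerShell. The request body follows the beta privilegedAccess/aadroles schema as commonly documented; treat the property names and the pluralized endpoint as assumptions and confirm them against the Graph beta reference:

```powershell
# Requires the Microsoft.Graph PowerShell module; all GUIDs are placeholders.
Connect-MgGraph -Scopes "PrivilegedAccess.ReadWrite.AzureAD"

$body = @{
    resourceId       = "<tenant-id>"
    roleDefinitionId = "<role-definition-id>"
    subjectId        = "<group-object-id>"
    assignmentState  = "Eligible"          # request an eligible (not active) assignment
    type             = "AdminAdd"
    reason           = "Assign eligible role to group"
    schedule         = @{
        type          = "Once"
        startDateTime = (Get-Date).ToUniversalTime().ToString("o")
        endDateTime   = $null
    }
} | ConvertTo-Json

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/privilegedAccess/aadroles/roleAssignmentRequests" `
    -Body $body -ContentType "application/json"
```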
## Next steps

- [Use Azure AD groups to manage role assignments](groups-concept.md)
-- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.yml)
- [Configure Azure AD admin role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md)
- [Assign Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-assign-roles.md)
active-directory Groups Remove Assignment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-remove-assignment.md
DELETE https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments
## Next steps

- [Use Azure AD groups to manage role assignments](groups-concept.md)
-- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.yml)
active-directory Groups View Assignments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/groups-view-assignments.md
GET https://graph.microsoft.com/beta/roleManagement/directory/roleAssignments?$f
## Next steps

- [Use Azure AD groups to manage role assignments](groups-concept.md)
-- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.md)
+- [Troubleshoot Azure AD roles assigned to groups](groups-faq-troubleshooting.yml)
active-directory Enable Your Tenant Verifiable Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/enable-your-tenant-verifiable-credentials.md
Title: Tutorial - Configure your Azure Active Directory to issue verifiable credentials (Preview)
-description: In this tutorial, you build the environment needed to deploy verifiable credentials in your tenant
+ Title: Tutorial - Configure Azure Active Directory to issue verifiable credentials (preview)
+description: In this tutorial, you build the environment needed to deploy verifiable credentials in your tenant.
documentationCenter: ''
Last updated 06/24/2021
-# Customer intent: As an administrator, I want the high-level steps that I should follow so that I can quickly start using verifiable credentials in my own Azure AD
+# Customer intent: As an administrator, I want the high-level steps that I should follow so that I can quickly start using verifiable credentials in my own Azure Active Directory.
-# Tutorial - Configure your Azure Active Directory to issue verifiable credentials (Preview)
+# Tutorial - Configure Azure Active Directory to issue verifiable credentials (preview)
-In this tutorial, we build on the work done as part of the [get started](get-started-verifiable-credentials.md) article and set up your Azure Active Directory (Azure AD) with its own [decentralized identifier](https://www.microsoft.com/security/business/identity-access-management/decentralized-identity-blockchain?rtc=1#:~:text=Decentralized%20identity%20is%20a%20trust,protect%20privacy%20and%20secure%20transactions.) (DID). We use the decentralized identifier to issue a verifiable credential using the sample app and your issuer; however, in this tutorial, we still use the sample Azure B2C tenant for authentication. In our next tutorial, we will take additional steps to get the app configured to work with your Azure AD.
+In this tutorial, you'll build on the work done as part of the [Get started](get-started-verifiable-credentials.md) article and set up Azure Active Directory (Azure AD) with its own [decentralized identifier](https://www.microsoft.com/security/business/identity-access-management/decentralized-identity-blockchain?rtc=1#:~:text=Decentralized%20identity%20is%20a%20trust,protect%20privacy%20and%20secure%20transactions.) (DID). You'll use the DID to issue a verifiable credential by using the sample app and your issuer. In this tutorial, you'll still use the sample Azure B2C tenant for authentication. In the next tutorial, you'll take steps to get the app configured to work with Azure AD.
> [!IMPORTANT]
> Azure Active Directory Verifiable Credentials is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> This preview version is provided without a service level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-In this article:
+In this article, you:
> [!div class="checklist"]
-> * You create the necessary services to onboard your Azure AD for verifiable credentials
-> * We are creating your DID
-> * We are customizing the Rules and Display files
+> * Create the necessary services to onboard Azure AD for verifiable credentials.
+> * Create your DID.
+> * Customize the rules and display files.
> * Configure verifiable credentials in Azure AD.
-
## Prerequisites

Before you can successfully complete this tutorial, you must first:

-- Complete the [Get started](get-started-verifiable-credentials.md).
+- Complete the steps in the [Get started](get-started-verifiable-credentials.md) tutorial.
- Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- Azure AD with a P2 [license](https://azure.microsoft.com/pricing/details/active-directory/). Follow the [How to create a free developer account](how-to-create-a-free-developer-account.md) if you do not have one.
-- An instance of [Azure Key Vault](../../key-vault/general/overview.md) where you have rights to create keys and secrets.
+- Have Azure AD with a P2 [license](https://azure.microsoft.com/pricing/details/active-directory/). If you don't have one, follow the steps in [Create a free developer account](how-to-create-a-free-developer-account.md).
+- Have an instance of [Azure Key Vault](../../key-vault/general/overview.md) where you have rights to create keys and secrets.
## Azure Active Directory
-Before we can start, we need an Azure AD tenant. When your tenant is enabled for verifiable credentials, it is assigned a decentralized identifier (DID) and it is enabled with an issuer service for issuing verifiable credentials. Any verifiable credential you issue is issued by your tenant and its DID. The DID is also used when verifying verifiable credentials.
-If you just created a test Azure subscription, your tenant does not need to be populated with user accounts but you will need to have at least one user test account to complete later tutorials.
+Before you start, you need an Azure AD tenant. When your tenant is enabled for verifiable credentials, it's assigned a DID. It's also enabled with an issuer service for issuing verifiable credentials. Any verifiable credential you issue is issued by your tenant and its DID. The DID is also used when you verify verifiable credentials.
+
+If you just created a test Azure subscription, your tenant doesn't need to be populated with user accounts. You'll need at least one user test account to complete later tutorials.
-## Create a Key Vault
+## Create a key vault
-When working with verifiable credentials, you have complete control and management of the cryptographic keys your tenant uses to digitally sign verifiable credentials. To issue and verify credentials, you must provide Azure AD with access to your own instance of Azure Key Vault.
+When you work with verifiable credentials, you have complete control and management of the cryptographic keys your tenant uses to digitally sign verifiable credentials. To issue and verify credentials, you must provide Azure AD with access to your own instance of Key Vault.
-1. From the Azure portal menu, or from the **Home** page, select **Create a resource**.
-2. In the Search box, enter **key vault**.
-3. From the results list, choose **Key Vault**.
-4. On the Key Vault section, choose **Create**.
-5. On the **Create key vault** section provide the following information:
- - **Subscription**: Choose a subscription.
- - Under **Resource Group**, choose **Create new** and enter a resource group name such as **vc-resource-group**. We are using the same resource group name across multiple articles.
- - **Name**: A unique name is required. We use **woodgroup-vc-kv** so replace this value with your own unique name.
- - In the **Location** pull-down menu, choose a location.
- - Leave the other options to their defaults.
-6. After providing the information above, select **Access Policy**
+1. In the Azure portal, or on the home page, select **Create a resource**.
+1. In the search box, enter **key vault**.
+1. From the results list, select **Key Vault**.
+1. In the **Key Vault** section, select **Create**.
+1. In the **Create key vault** section, provide the following information:
+ - **Subscription**: Select a subscription.
+ - **Resource group**: Select **Create new**, and enter a resource group name such as **vc-resource-group**. We use the same resource group name across multiple articles.
+ - **Name**: A unique name is required. We use **woodgrove-vc-kv**, so replace this value with your own unique name.
+ - **Location**: Select a location.
+ - Leave the other options set to their defaults.
+1. After you provide the information, select **Access Policy**.
- ![create a key vault page](media/enable-your-tenant-verifiable-credentials/create-key-vault.png)
+ ![Screenshot that shows the Create key vault screen.](media/enable-your-tenant-verifiable-credentials/create-key-vault.png)
-7. In the Access Policy screen, choose **Add Access Policy**
+1. On the **Access policy** screen, select **Add Access Policy**.
>[!NOTE]
- > By default the account that creates the key vault is the only one with access. The verifiable credential service needs access to key vault. The key vault must have an access policy allowing the Admin to **create keys**, have the ability to **delete keys** if you opt out, and **sign** to create the domain binding for verifiable credential. If you are using the same account while testing make sure to modify the default policy to grant the account **sign** in addition to the default permissions granted to vault creators.
+ > By default, the account that creates the key vault is the only one with access. The verifiable credential service needs access to the key vault. The key vault must have an access policy that allows the admin to create keys, have the ability to delete keys if you opt out, and permission to sign to create the domain binding for the verifiable credential. If you use the same account while testing, modify the default policy to grant the account **Sign** permission, in addition to the default permissions granted to vault creators.
-8. For the User Admin, make sure the key permissions section has **Create**, **Delete**, and **Sign** enabled. By default Create and Delete are already enabled and Sign should be the only Key Permission that needs to be updated.
+1. For the user admin, make sure the key permissions section has **Create**, **Delete**, and **Sign** enabled. By default, **Create** and **Delete** are already enabled. **Sign** should be the only key permission you need to update.
- ![Key Vault permissions](media/enable-your-tenant-verifiable-credentials/keyvault-access.png)
+ ![Screenshot that shows the key vault permissions.](media/enable-your-tenant-verifiable-credentials/keyvault-access.png)
-9. Select **Review + create**.
-10. Select **Create**.
-11. Go to the vault and take note of the vault name and URI
+1. Select **Review + create**.
+1. Select **Create**.
+1. Go to the vault and make a note of the vault name and URI.
-Take note of the two properties listed below:
+Take note of the following two properties:
-- **Vault Name**: In the example, the value name is **woodgrove-vc-kv**. You use this name for other steps.
+- **Vault Name**: In the example, the key vault name is **woodgrove-vc-kv**. You use this name for other steps.
- **Vault URI**: In the example, this value is https://woodgrove-vc-kv.vault.azure.net/. Applications that use your vault through its REST API must use this URI.

>[!NOTE]
-> Each key vault transaction results in additional Azure subscription costs. Review the [Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/) for more details.
+> Each key vault transaction results in more Azure subscription costs. For more information, see the [Key Vault pricing page](https://azure.microsoft.com/pricing/details/key-vault/).
>[!IMPORTANT]
-> During the Azure Active Directory Verifiable Credentials preview, keys and secrets created in your vault should not be modified once created. Deleting, disabling, or updating your keys and secrets invalidates any issued credentials. Do not modify your keys or secrets during the preview.
+> During the Azure Active Directory Verifiable Credentials preview, don't modify keys and secrets created in your vault after they're created. Deleting, disabling, or updating your keys and secrets invalidates any issued credentials.
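If you'd rather script the vault creation, here's a minimal sketch with the Az module, mirroring the portal steps above; the admin UPN is a placeholder:

```powershell
# Requires the Az module; names match the placeholders used in this tutorial.
New-AzKeyVault -Name "woodgrove-vc-kv" `
    -ResourceGroupName "vc-resource-group" -Location "eastus"

# Grant the admin account the Create, Delete, and Sign key permissions
# required by the verifiable credential service.
Set-AzKeyVaultAccessPolicy -VaultName "woodgrove-vc-kv" `
    -UserPrincipalName "admin@contoso.com" `
    -PermissionsToKeys create,delete,sign
```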
-## Create a modified rules and display file
+## Create modified rules and display files
-In this section, we use the rules and display files from the [Sample issuer app](https://github.com/Azure-Samples/active-directory-verifiable-credentials/)
+In this section, you'll use the rules and display files from the [sample issuer app](https://github.com/Azure-Samples/active-directory-verifiable-credentials/)
and modify them slightly to create your tenant's first verifiable credential.
-1. Copy both the rules and display json files to a temporary folder and rename them **MyFirstVC-display.json** and **MyFirstVC-rules.json** respectively. You can find both files under **issuer\issuer_config**
+1. Copy both the rules and display JSON files to a temporary folder. Rename them **MyFirstVC-display.json** and **MyFirstVC-rules.json**, respectively. You can find both files under **issuer\issuer_config**.
- ![display and rules files as part of the sample app directory](media/enable-your-tenant-verifiable-credentials/sample-app-rules-display.png)
+ ![Screenshot that shows the rules and display files as part of the sample app directory.](media/enable-your-tenant-verifiable-credentials/sample-app-rules-display.png)
- ![display and rules files in a temp folder](media/enable-your-tenant-verifiable-credentials/display-rules-files-temp.png)
+ ![Screenshot that shows the rules and display files in a temp folder.](media/enable-your-tenant-verifiable-credentials/display-rules-files-temp.png)
-2. Open the MyFirstVC-rules.json file in your code editor.
+1. Open the **MyFirstVC-rules.json** file in your code editor.
```json
{
In this section, we use the rules and display files from the [Sample issuer app]
```
- Now let's change the type field to "MyFirstVC".
+ Now, change the "type" field to **"MyFirstVC"**.
```json "type": ["MyFirstVC"]
In this section, we use the rules and display files from the [Sample issuer app]
Save this change.

>[!NOTE]
- > We are not changing the **"configuration"** or the **"client_id"** at this point in the tutorial. We still use the Microsoft B2C tenant we used in the [Get started](get-started-verifiable-credentials.md). We will use your Azure AD in the next tutorial.
+ > You don't change the "configuration" or "client_id" values at this point in the tutorial. You'll still use the sample Azure tenant you used in the [Get started](get-started-verifiable-credentials.md) tutorial. You'll use Azure AD in the next tutorial.
-3. Open the MyFirstVC-display.json file in your code editor.
+1. Open the **MyFirstVC-display.json** file in your code editor.
   ```json
   {
     ...
   }
   ```
- Let's make a few modifications so this verifiable credential looks visibly different from sample code's version.
+ Make modifications so this verifiable credential looks visibly different from the sample code's version.
```json "card": {
In this section, we use the rules and display files from the [Sample issuer app]
``` >[!NOTE]
- > To ensure that your credential is readable and accessible, we strongly recommend that you select text and background colors with a [contrast ratio](https://www.w3.org/WAI/WCAG21/Techniques/general/G18) of at least 4.5:1.
+ > To ensure that your credential is readable and accessible, select text and background colors with a [contrast ratio](https://www.w3.org/WAI/WCAG21/Techniques/general/G18) of at least 4.5:1.
   Save these changes.

## Create a storage account
-Before creating our first verifiable credential, we need to create a Blob Storage container that can hold our configuration and rules files.
+Before you create your first verifiable credential, create an Azure Blob Storage container that can hold your configuration and rules files.
-1. Create a storage account using the options shown below. For detailed steps review the [Create a storage account](../../storage/common/storage-account-create.md?tabs=azure-portal) article.
+1. Create a storage account by using the options shown, or use the CLI sketch that follows the list. For detailed steps, see [Create a storage account](../../storage/common/storage-account-create.md?tabs=azure-portal).
- - **Subscription:** Choose the subscription that we are using for these tutorials.
- - **Resource group:** Choose the same resource group we used in earlier tutorials (**vc-resource-group**).
- - **Name:** A unique name.
- - **Location:** (US) EAST US.
- - **Performance:** Standard.
- - **Account kind:** Storage V2.
- - **Replication:** Locally redundant.
+ - **Subscription**: The subscription that you're using for these tutorials
+ - **Resource group**: The same resource group you used in earlier tutorials (**vc-resource-group**)
+ - **Name**: A unique name
+ - **Location**: (US) EAST US
+ - **Performance**: Standard
+ - **Account kind**: Storage V2
+ - **Replication**: Locally redundant
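
   The equivalent Azure CLI call might look like the following sketch; the account name is a placeholder, and the flags map to the options listed above:

   ```azurecli
   # Sketch: create the storage account with the options above
   # (<storage-account> is a placeholder for your unique name).
   az storage account create \
       --name <storage-account> \
       --resource-group vc-resource-group \
       --location eastus \
       --kind StorageV2 \
       --sku Standard_LRS
   ```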
- ![Create a new storage account](media/enable-your-tenant-verifiable-credentials/create-storage-account.png)
+ ![Screenshot that shows the Create storage account screen.](media/enable-your-tenant-verifiable-credentials/create-storage-account.png)
-2. After creating the storage account, we need to create a container. Select **Containers** under **Blob Storage** and create a container using the values provided below:
+1. After you create the storage account, create a container. Under **Blob Storage**, select **Containers**. Create a container by using these values (a CLI sketch follows this step):
- - **Name:** vc-container
- - **Public access level:** Private (no anonymous access)
+ - **Name**: vc-container
+ - **Public access level**: Private (no anonymous access)
+
+ Select **Create**.
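
   As a CLI alternative, here's a sketch that creates the same private container; the storage account name is a placeholder:

   ```azurecli
   # Sketch: create the vc-container container with no anonymous access.
   az storage container create \
       --account-name <storage-account> \
       --name vc-container \
       --public-access off
   ```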
- ![Create a container](media/enable-your-tenant-verifiable-credentials/new-container.png)
+ ![Screenshot that shows creating a container.](media/enable-your-tenant-verifiable-credentials/new-container.png)
-3. Now select your new container and upload both the new rules and display files **MyFirstVC-display.json** and **MyFirstVC-rules.json** we created earlier.
+1. Now select your new container and upload the new rules and display files, **MyFirstVC-display.json** and **MyFirstVC-rules.json**, that you created earlier. (A CLI sketch follows the screenshot.)
- ![upload rules file](media/enable-your-tenant-verifiable-credentials/blob-storage-upload-rules-display-files.png)
+ ![Screenshot that shows uploading rules files.](media/enable-your-tenant-verifiable-credentials/blob-storage-upload-rules-display-files.png)
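
   If you'd rather upload from the command line, here's a sketch; it assumes the two JSON files are in the current directory and uses a placeholder account name:

   ```azurecli
   # Sketch: upload the display and rules files to the vc-container container.
   az storage blob upload --account-name <storage-account> --container-name vc-container --name MyFirstVC-display.json --file ./MyFirstVC-display.json
   az storage blob upload --account-name <storage-account> --container-name vc-container --name MyFirstVC-rules.json --file ./MyFirstVC-rules.json
   ```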
-## Assign blob role
+## Assign a blob role
-Before creating the credential, we need to first give the signed in user the correct role assignment so they can access the files in Storage Blob.
+Before you create the credential, you need to first give the signed-in user the correct role assignment so they can access the files in Storage Blob.
-1. Navigate to **Storage** > **Container**.
+1. Go to **Storage** > **Container**.
1. Select **Access control (IAM)**.
-1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). You can also assign the role from the command line, as shown in the sketch after the note.

   | Setting | Value |
   | --- | --- |
   | Role | Storage Blob Data Reader |
- | Assign access to | User, group, or service principle |
- | Select | &lt;account that you are using to perform these steps&gt; |
+ | Assign access to | User, group, or service principal |
+ | Select | &lt;account that you're using to perform these steps&gt; |
- ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows the Add role assignment page in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
>[!IMPORTANT]
- >By default, container creators get the **Owner** role assigned. The **Owner** role is not enough on its own. Your account needs the **Storage Blob Data Reader** role. For more information review [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/common/storage-auth-aad-rbac-portal.md)
+ >By default, container creators get the Owner role assigned. The Owner role isn't enough on its own. Your account needs the Storage Blob Data Reader role. For more information, see [Use the Azure portal to assign an Azure role for access to blob and queue data](../../storage/common/storage-auth-aad-rbac-portal.md).
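
   The same role assignment can be made from the command line. A sketch, with placeholder subscription, user, and account values:

   ```azurecli
   # Sketch: grant the signed-in user Storage Blob Data Reader on the storage account.
   az role assignment create \
       --role "Storage Blob Data Reader" \
       --assignee "<user@contoso.com>" \
       --scope "/subscriptions/<subscription-id>/resourceGroups/vc-resource-group/providers/Microsoft.Storage/storageAccounts/<storage-account>"
   ```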
-## Set up verifiable credentials (Preview)
+## Set up Verifiable Credentials Preview
-Now we need to take the last step to set up your tenant for verifiable credentials.
+Now you take the last step to set up your tenant for verifiable credentials.
-1. From the Azure portal, search for **verifiable credentials**.
-2. Choose **Verifiable Credentials (Preview)**.
-3. Choose **Get started**
-4. We need to set up your organization and provide the organization name, domain, and key vault. Let's look at each one of the values.
+1. In the Azure portal, search for **verifiable credentials**.
+1. Select **Verifiable Credentials (Preview)**.
+1. Select **Get started**.
+1. Set up your organization by providing the following information:
+
+ - **Organization name**: Enter a name to reference your business within Verifiable Credentials. This value isn't customer facing.
+ - **Domain:** Enter a domain that's added to a service endpoint in your DID document. [Microsoft Authenticator](../user-help/user-help-auth-app-download-install.md) and other wallets use this information to validate that your DID is [linked to your domain](how-to-dnsbind.md). If the wallet can verify the DID, it displays a verified symbol. If the wallet is unable to verify the DID, it informs the user that the credential was issued by an organization it couldn't validate. The domain is what binds your DID to something tangible that the user might know about your business.
+ - **Key vault:** Enter the name of the key vault that you created earlier.
- - **organization name**: This name is how you reference your business within the Verifiable Credential service. This value is not customer facing.
- - **Domain:** The domain entered is added to a service endpoint in your DID document. [Microsoft Authenticator](../user-help/user-help-auth-app-download-install.md) and other wallets use this information to validate that your DID is [linked to your domain](how-to-dnsbind.md). If the wallet can verify the DID, it displays a verified symbol. If the wallet is unable to verify the DID, it informs the user that the credential was issued by an organization it could not validate. The domain is what binds your DID to something tangible that the user may know about your business.
- - **Key vault:** Provide the name of the Key Vault that we created earlier.
-
>[!IMPORTANT]
- > The domain can not be a redirect, otherwise the DID and domain cannot be linked. Make sure to use https://www.domain.com format.
-
-5. Choose **Save and create credential**
+ > The domain can't be a redirect. Otherwise, the DID and domain can't be linked. Make sure to use the https://www.domain.com format.
+
+1. Select **Save and create credential**.
- ![set up your organizational identity](media/enable-your-tenant-verifiable-credentials/save-create.png)
+ ![Screenshot that shows setting up your organizational identity.](media/enable-your-tenant-verifiable-credentials/save-create.png)
-Congratulations, your tenant is now enabled for the Verifiable Credential preview!
+Congratulations, your tenant is now enabled for the Verifiable Credentials preview!
-## Create your VC in the portal
+## Create your verifiable credential in the portal
-The previous step leaves you in the **Create a new credential** page.
+After the previous step, the **Create a new credential** screen appears.
- ![verifiable credentials get started](media/enable-your-tenant-verifiable-credentials/create-credential-after-enable-did.png)
+ ![Screenshot that shows the Getting started screen.](media/enable-your-tenant-verifiable-credentials/create-credential-after-enable-did.png)
-1. Under Credential Name, add the name **MyFirstVC**. This name is used in the portal to identify your verifiable credentials and it is included as part of the verifiable credentials contract.
-2. In the Display file section, choose **Configure display file**
-3. In the **Storage accounts** section, select **woodgrovevcstorage**.
-4. From the list of available containers choose **vc-container**.
-5. Choose the **MyFirstVC-display.json** file we created earlier.
-6. From the **Create a new credential** in the **Rules file** section choose **Configure rules file**
-7. In the **Storage accounts** section, select **woodgrovecstorage**
-8. Choose **vc-container**.
-9. Select **MyFirstVC-rules.json**
-10. From the **Create a new credential** screen choose **Create**.
+1. For the credential name, enter **MyFirstVC**. This name is used in the portal to identify your verifiable credentials. It's included as part of the verifiable credentials contract.
+1. In the **Display file** section, select **Configure display file**.
+1. In the **Storage accounts** section, select **woodgrovevcstorage**.
+1. From the list of available containers, select **vc-container**.
+1. Select the **MyFirstVC-display.json** file you created earlier.
+1. On the **Create a new credential** screen, in the **Rules file** section, select **Configure rules file**.
+1. In the **Storage accounts** section, select **woodgrovecstorage**.
+1. Select **vc-container**.
+1. Select **MyFirstVC-rules.json**.
+1. On the **Create a new credential** screen, select **Create**.
- ![create a new credential](media/enable-your-tenant-verifiable-credentials/create-my-first-vc.png)
+ ![Screenshot that shows creating a new credential.](media/enable-your-tenant-verifiable-credentials/create-my-first-vc.png)
### Credential URL

Now that you have a new credential, copy the credential URL.
- ![The issue credential URL](media/enable-your-tenant-verifiable-credentials/issue-credential-url.png)
+ ![Screenshot that shows the credential URL.](media/enable-your-tenant-verifiable-credentials/issue-credential-url.png)
>[!NOTE]
->The credential URL is the combination of the rules and display files. It is the URL that Authenticator evaluates before displaying to the user verifiable credential issuance requirements.
+>The credential URL is the combination of the rules and display files. It's the URL that Authenticator evaluates before it displays the verifiable credential issuance requirements to the user.
## Update the sample app
-Now we make modifications to the sample app's issuer code to update it with your verifiable credential URL. This allows you to issue verifiable credentials using your own tenant.
+Now you'll update the sample app's issuer code with your verifiable credential URL. This step allows you to issue verifiable credentials by using your own tenant.
1. Go to the folder where you placed the sample app's files.
-2. Open the issuer folder and then open app.js in Visual Studio Code.
-3. Update the constant 'credential' with your new credential URL and set the credentialType constant to 'MyFirstVC' and save the file.
+1. Open the issuer folder, and then open app.js in Visual Studio Code.
+1. Update the constant **credential** with your new credential URL. Set the **credentialType** constant to **'MyFirstVC'**, and save the file.
- ![image of visual studio code showing the relevant areas highlighted](media/enable-your-tenant-verifiable-credentials/sample-app-vscode.png)
+ ![Screenshot that shows Visual Studio Code with the relevant areas highlighted.](media/enable-your-tenant-verifiable-credentials/sample-app-vscode.png)
-4. Open a command prompt and open the issuer folder.
-5. Run the updated node app.
+1. Open a command prompt, and open the issuer folder.
+1. Run the updated node app.
```terminal
node app.js
```
-6. Using a different command prompt run ngrok to set up a URL on 8081. You can install ngrok globally using the [ngrok npm package](https://www.npmjs.com/package/ngrok/).
+1. Using a different command prompt, run ngrok to set up a URL on 8081. You can install ngrok globally by using the [ngrok npm package](https://www.npmjs.com/package/ngrok/).
```terminal
ngrok http 8081
```
-
- >[!IMPORTANT]
- > You may also notice a warning that this app or website may be risky. The message is expected at this time because we have not yet linked your DID to your domain. Follow the [DNS binding](how-to-dnsbind.md) instructions to configure this.
-
-7. Open the HTTPS URL generated by ngrok.
+ >[!IMPORTANT]
+ > You might see a warning that reads "This app or website may be risky." The message is expected at this time because you haven't linked your DID to your domain. For configuration instructions, see [DNS binding](how-to-dnsbind.md).
- ![NGROK forwarding endpoints](media/enable-your-tenant-verifiable-credentials/ngrok-url-screen.png)
+1. Open the HTTPS URL generated by ngrok.
-8. Choose **GET CREDENTIAL**
-9. In Authenticator scan the QR code.
-10. At **This app or website may be risky** warning message choose **Advanced**.
+ ![Screenshot that shows NGROK forwarding endpoints.](media/enable-your-tenant-verifiable-credentials/ngrok-url-screen.png)
- ![Initial warning](media/enable-your-tenant-verifiable-credentials/site-warning.png)
+1. Select **GET CREDENTIAL**.
+1. In Authenticator, scan the QR code.
+1. On the **This app or website may be risky** screen, select **Advanced**.
-11. At the risky website warning choose **Proceed anyways (unsafe)**
+ ![Screenshot that shows the initial warning.](media/enable-your-tenant-verifiable-credentials/site-warning.png)
- ![Second warning about the issuer](media/enable-your-tenant-verifiable-credentials/site-warning-proceed.png)
+1. On the next **This app or website may be risky** screen, select **Proceed anyways (unsafe)**.
+ ![Screenshot that shows the second warning about the issuer.](media/enable-your-tenant-verifiable-credentials/site-warning-proceed.png)
-12. At the **Add a credential** screen notice a few things:
- 1. At the top of the screen you can see a red **Not verified** message
- 1. The credential is customized based on the changes we made to the display file.
- 1. The **Sign in to your account** option is pointing to **didplayground.b2clogin.com**.
+1. On the **Add a credential** screen, notice that:
+ 1. At the top of the screen, you can see a red **Not verified** message.
+ 1. The credential is customized based on the changes you made to the display file.
+ 1. The **Sign in to your account** option points to didplayground.b2clogin.com.
- ![add credential screen with warning](media/enable-your-tenant-verifiable-credentials/add-credential-not-verified.png)
-
-13. Choose **Sign in to your account** and authenticate using the credential information you provided in the [get started tutorial](get-started-verifiable-credentials.md).
-14. After successfully authenticating the **Add** button is no longer greyed out. Choose **Add**.
+ ![Screenshot that shows the Add a credential screen with warning.](media/enable-your-tenant-verifiable-credentials/add-credential-not-verified.png)
- ![add credential screen after authenticating](media/enable-your-tenant-verifiable-credentials/add-credential-not-verified-authenticated.png)
+1. Select **Sign in to your account**, and authenticate by using the credential information you provided in the [Get started](get-started-verifiable-credentials.md) tutorial.
+1. After successful authentication, the **Add** button is now activated. Select **Add**.
-We have now issued a verifiable credential using our tenant to generate our vc while still using our B2C tenant for authentication.
+ ![Screenshot that shows the Add a credential screen after authenticating.](media/enable-your-tenant-verifiable-credentials/add-credential-not-verified-authenticated.png)
- ![vc issued by your azure AD and authenticated by our Azure B2C instance](media/enable-your-tenant-verifiable-credentials/my-vc-b2c.png)
+   You've now issued a verifiable credential by using your own tenant while still using the sample Azure B2C tenant for authentication.
+ ![Screenshot that shows a verifiable credential issued by Azure AD and authenticated by the sample Azure B2C tenant.](media/enable-your-tenant-verifiable-credentials/my-vc-b2c.png)
-## Test verifying the VC using the sample app
+## Verify the verifiable credential by using the sample app
-Now that we've issued the verifiable credential from our own tenant, let's verify it using our sample app.
+Now that you've issued the verifiable credential from your own tenant, verify it by using the sample app.
>[!IMPORTANT]
-> When testing, use the same email and password that you used during the [get started](get-started-verifiable-credentials.md) article. While your tenant is issuing the vc, the input is coming from an id token issued by the Microsoft B2C tenant. In tutorial two, we are switching token issuance to your tenant
+> When you test, use the same email and password that you used during the [Get started](get-started-verifiable-credentials.md) tutorial. While your tenant is issuing the verifiable credential, the input is coming from an ID token issued by the sample Azure B2C tenant. In the second tutorial, you'll switch token issuance to your tenant.
-1. Open up **Settings** in the verifiable credentials blade in the Azure portal. Copy the decentralized identifier (DID).
+1. In the Azure portal, on the **Verifiable credentials** pane, select **Settings**. Copy the DID.
- ![copy the DID](media/enable-your-tenant-verifiable-credentials/issuer-identifier.png)
+ ![Screenshot that shows copying the DID.](media/enable-your-tenant-verifiable-credentials/issuer-identifier.png)
-2. Now open verifier folder part of the sample app files. We need to update the app.js file in the verifier sample code and make the following changes:
+1. Now open the verifier folder in the sample app files. Update the app.js file in the verifier sample code by making the following changes:
- - **credential**: change to your credential URL
- - **credentialType**: 'MyFirstVC'
- - **issuerDid**: Copy this value from Azure portal>Verifiable credentials (Preview)>Settings>Decentralized identifier (DID)
+ - **credential**: Change to your credential URL.
+ - **credentialType**: Enter **'MyFirstVC'**.
+ - **issuerDid**: Copy this value from the Azure portal > **Verifiable Credentials (Preview)** > **Settings** > **Decentralized identifier (DID)**.
- ![update the constant issuerDid to match your issuer identifier](media/enable-your-tenant-verifiable-credentials/editing-verifier.png)
+ ![Screenshot that shows updating the constant issuer DID to match your issuer identifier.](media/enable-your-tenant-verifiable-credentials/editing-verifier.png)
-3. Stop running your issuer ngrok service.
+1. Stop running your issuer ngrok service.
```cmd
control-c
```
-4. Now run ngrok with the verifier port 8082.
+1. Now run ngrok with the verifier port 8082.
```cmd
ngrok http 8082
```
-5. In another terminal window, navigate to the verifier app and run it similarly to how we ran the issuer app.
+1. In another terminal window, go to the verifier app and run it similarly to how you ran the issuer app.
```cmd
cd ..
cd verifier
node app.js
```
-6. Open the ngrok url in your browser and scan the QR code using Authenticator in your mobile device.
-7. On your mobile device, choose **Allow** at the **New permission request** screen.
+1. Open the ngrok URL in your browser, and scan the QR code by using Authenticator in your mobile device.
+1. On your mobile device, on the **New permission request** screen, select **Allow**.
>[!NOTE]
- > The DID signing this VC is still the Microsoft B2C. The Verifier DID is still from the Microsoft Sample App tenant. Since Microsoft's DID has been linked to a domain we own, you do not see the warning like we experienced during the issuance flow. This will be updated in the next section.
+ > The DID that signs this verifiable credential is still from the sample Azure B2C tenant, and the verifier DID is still from the Microsoft sample app tenant. Because the Microsoft DID is linked to a domain Microsoft owns, you don't see the warning that you experienced during the issuance flow. You'll update this configuration in the next section.
- ![new permission request](media/enable-your-tenant-verifiable-credentials/new-permission-request.png)
+ ![Screenshot that shows the New permission request screen.](media/enable-your-tenant-verifiable-credentials/new-permission-request.png)
-8. You have no successfully verified your credential.
+1. You've now successfully verified your credential.
## Next steps
-Now that you have the sample code issuing a VC from your issuer, lets continue to the next section where you use your own identity provider to authenticate users trying to get verifiable credential and use your DID to sign presentation requests.
+Now that you have the sample code that issues a verifiable credential from your issuer, continue to the next section. You'll use your own identity provider to authenticate users who are trying to get verifiable credentials. You'll also use your DID to sign presentation requests.
> [!div class="nextstepaction"]
-> [Tutorial - Issue and verify verifiable credentials using your tenant](issue-verify-verifiable-credentials-your-tenant.md)
+> [Tutorial - Issue and verify verifiable credentials by using your tenant](issue-verify-verifiable-credentials-your-tenant.md)
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/faq.md
AKS builds upon a number of Azure infrastructure resources, including virtual ma
To enable this architecture, each AKS deployment spans two resource groups:

1. You create the first resource group. This group contains only the Kubernetes service resource. The AKS resource provider automatically creates the second resource group during deployment. An example of the second resource group is *MC_myResourceGroup_myAKSCluster_eastus*. For information on how to specify the name of this second resource group, see the next section.
-1. The second resource group, known as the *node resource group*, contains all of the infrastructure resources associated with the cluster. These resources include the Kubernetes node VMs, virtual networking, and storage. By default, the node resource group has a name like *MC_myResourceGroup_myAKSCluster_eastus*. AKS automatically deletes the node resource whenever the cluster is deleted, so it should only be used for resources that share the cluster's lifecycle.
+1. The second resource group, known as the *node resource group*, contains all of the infrastructure resources associated with the cluster. These resources include the Kubernetes node VMs, virtual networking, and storage. By default, the node resource group has a name like *MC_myResourceGroup_myAKSCluster_eastus*. AKS automatically deletes the node resource group whenever the cluster is deleted, so it should only be used for resources that share the cluster's lifecycle.
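
If you want to confirm which node resource group an existing cluster uses, a quick query works. This is a sketch that assumes the example names *myResourceGroup* and *myAKSCluster*:

```azurecli
# Sketch: print the auto-generated node resource group for an existing cluster.
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup --output tsv
```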
## Can I provide my own name for the AKS node resource group?
app-service App Service Authentication How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-authentication-how-to.md
- Title: Advanced usage of AuthN/AuthZ
-description: Learn to customize the authentication and authorization feature in App Service for different scenarios, and get user claims and different tokens.
- Previously updated : 03/29/2021---
-# Advanced usage of authentication and authorization in Azure App Service
-
-This article shows you how to customize the built-in [authentication and authorization in App Service](overview-authentication-authorization.md), and to manage identity from your application.
-
-To get started quickly, see one of the following tutorials:
-
-* [Tutorial: Authenticate and authorize users end-to-end in Azure App Service](tutorial-auth-aad.md)
-* [How to configure your app to use Microsoft Identity Platform login](configure-authentication-provider-aad.md)
-* [How to configure your app to use Facebook login](configure-authentication-provider-facebook.md)
-* [How to configure your app to use Google login](configure-authentication-provider-google.md)
-* [How to configure your app to use Twitter login](configure-authentication-provider-twitter.md)
-* [How to configure your app to log in using an OpenID Connect provider (Preview)](configure-authentication-provider-openid-connect.md)
-* [How to configure your app to log in using Sign in with Apple (Preview)](configure-authentication-provider-apple.md)
-
-## Use multiple sign-in providers
-
-The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and Twitter). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows:
-
-First, in the **Authentication / Authorization** page in the Azure portal, configure each of the identity providers you want to enable.
-
-In **Action to take when request is not authenticated**, select **Allow Anonymous requests (no action)**.
-
-On the sign-in page, the navigation bar, or any other location of your app, add a sign-in link to each of the providers you enabled (`/.auth/login/<provider>`). For example:
-
-```html
-<a href="/.auth/login/aad">Log in with the Microsoft Identity Platform</a>
-<a href="/.auth/login/facebook">Log in with Facebook</a>
-<a href="/.auth/login/google">Log in with Google</a>
-<a href="/.auth/login/twitter">Log in with Twitter</a>
-<a href="/.auth/login/apple">Log in with Apple</a>
-```
-
-When the user clicks on one of the links, the respective sign-in page opens to sign in the user.
-
-To redirect the user post-sign-in to a custom URL, use the `post_login_redirect_url` query string parameter (not to be confused with the Redirect URI in your identity provider configuration). For example, to navigate the user to `/Home/Index` after sign-in, use the following HTML code:
-
-```html
-<a href="/.auth/login/<provider>?post_login_redirect_url=/Home/Index">Log in</a>
-```
-
-## Validate tokens from providers
-
-In a client-directed sign-in, the application signs in the user to the provider manually and then submits the authentication token to App Service for validation (see [Authentication flow](overview-authentication-authorization.md#authentication-flow)). This validation itself doesn't actually grant you access to the desired app resources, but a successful validation will give you a session token that you can use to access app resources.
-
-To validate the provider token, the App Service app must first be configured with the desired provider. At runtime, after you retrieve the authentication token from your provider, post the token to `/.auth/login/<provider>` for validation. For example:
-
-```
-POST https://<appname>.azurewebsites.net/.auth/login/aad HTTP/1.1
-Content-Type: application/json
-
-{"id_token":"<token>","access_token":"<token>"}
-```
-
-The token format varies slightly according to the provider. See the following table for details:
-
-| Provider value | Required in request body | Comments |
-|-|-|-|
-| `aad` | `{"access_token":"<access_token>"}` | |
-| `microsoftaccount` | `{"access_token":"<token>"}` | The `expires_in` property is optional. <br/>When requesting the token from Live services, always request the `wl.basic` scope. |
-| `google` | `{"id_token":"<id_token>"}` | The `authorization_code` property is optional. When specified, it can also optionally be accompanied by the `redirect_uri` property. |
-| `facebook`| `{"access_token":"<user_access_token>"}` | Use a valid [user access token](https://developers.facebook.com/docs/facebook-login/access-tokens) from Facebook. |
-| `twitter` | `{"access_token":"<access_token>", "access_token_secret":"<access_token_secret>"}` | |
-| | | |
-
-If the provider token is validated successfully, the API returns with an `authenticationToken` in the response body, which is your session token.
-
-```json
-{
- "authenticationToken": "...",
- "user": {
- "userId": "sid:..."
- }
-}
-```
-
-Once you have this session token, you can access protected app resources by adding the `X-ZUMO-AUTH` header to your HTTP requests. For example:
-
-```
-GET https://<appname>.azurewebsites.net/api/products/1
-X-ZUMO-AUTH: <authenticationToken_value>
-```
-
-## Sign out of a session
-
-Users can initiate a sign-out by sending a `GET` request to the app's `/.auth/logout` endpoint. The `GET` request does the following:
-- Clears authentication cookies from the current session.
-- Deletes the current user's tokens from the token store.
-- For Azure Active Directory and Google, performs a server-side sign-out on the identity provider.
-
-Here's a simple sign-out link in a webpage:
-
-```html
-<a href="/.auth/logout">Sign out</a>
-```
-
-By default, a successful sign-out redirects the client to the URL `/.auth/logout/done`. You can change the post-sign-out redirect page by adding the `post_logout_redirect_uri` query parameter. For example:
-
-```
-GET /.auth/logout?post_logout_redirect_uri=/https://docsupdatetracker.net/index.html
-```
-
-It's recommended that you [encode](https://wikipedia.org/wiki/Percent-encoding) the value of `post_logout_redirect_uri`.
-
-When using fully qualified URLs, the URL must be either hosted in the same domain or configured as an allowed external redirect URL for your app. For example, to redirect to `https://myexternalurl.com`, which isn't hosted in the same domain:
-
-```
-GET /.auth/logout?post_logout_redirect_uri=https%3A%2F%2Fmyexternalurl.com
-```
-
-Run the following command in the [Azure Cloud Shell](../cloud-shell/quickstart.md) to add the URL as an allowed external redirect URL:
-
-```azurecli-interactive
-az webapp auth update --name <app_name> --resource-group <group_name> --allowed-external-redirect-urls "https://myexternalurl.com"
-```
-
-## Preserve URL fragments
-
-After users sign in to your app, they usually want to be redirected to the same section of the same page, such as `/wiki/Main_Page#SectionZ`. However, because [URL fragments](https://wikipedia.org/wiki/Fragment_identifier) (for example, `#SectionZ`) are never sent to the server, they are not preserved by default after the OAuth sign-in completes and redirects back to your app. Users then get a suboptimal experience when they need to navigate to the desired anchor again. This limitation applies to all server-side authentication solutions.
-
-In App Service authentication, you can preserve URL fragments across the OAuth sign-in. To do this, set an app setting called `WEBSITE_AUTH_PRESERVE_URL_FRAGMENT` to `true`. You can do it in the [Azure portal](https://portal.azure.com), or simply run the following command in the [Azure Cloud Shell](../cloud-shell/quickstart.md):
-
-```azurecli-interactive
-az webapp config appsettings set --name <app_name> --resource-group <group_name> --settings WEBSITE_AUTH_PRESERVE_URL_FRAGMENT="true"
-```
-
-## Access user claims
-
-App Service passes user claims to your application by using special headers. External requests aren't allowed to set these headers, so they are present only if set by App Service. Some example headers include:
-
-* X-MS-CLIENT-PRINCIPAL-NAME
-* X-MS-CLIENT-PRINCIPAL-ID
-
-Code that is written in any language or framework can get the information that it needs from these headers. For ASP.NET 4.6 apps, the **ClaimsPrincipal** is automatically set with the appropriate values. ASP.NET Core, however, doesn't provide an authentication middleware that integrates with App Service user claims. For a workaround, see [MaximeRouiller.Azure.AppService.EasyAuth](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth).
-
-If the [token store](overview-authentication-authorization.md#token-store) is enabled for your app, you can also obtain additional details on the authenticated user by calling `/.auth/me`. The Mobile Apps server SDKs provide helper methods to work with this data. For more information, see [How to use the Azure Mobile Apps Node.js SDK](/previous-versions/azure/app-service-mobile/app-service-mobile-node-backend-how-to-use-server-sdk#howto-tables-getidentity), and [Work with the .NET backend server SDK for Azure Mobile Apps](/previous-versions/azure/app-service-mobile/app-service-mobile-dotnet-backend-how-to-use-server-sdk#user-info).
-
-## Retrieve tokens in app code
-
-From your server code, the provider-specific tokens are injected into the request header, so you can easily access them. The following table shows possible token header names:
-
-| Provider | Header names |
-|-|-|
-| Azure Active Directory | `X-MS-TOKEN-AAD-ID-TOKEN` <br/> `X-MS-TOKEN-AAD-ACCESS-TOKEN` <br/> `X-MS-TOKEN-AAD-EXPIRES-ON` <br/> `X-MS-TOKEN-AAD-REFRESH-TOKEN` |
-| Facebook Token | `X-MS-TOKEN-FACEBOOK-ACCESS-TOKEN` <br/> `X-MS-TOKEN-FACEBOOK-EXPIRES-ON` |
-| Google | `X-MS-TOKEN-GOOGLE-ID-TOKEN` <br/> `X-MS-TOKEN-GOOGLE-ACCESS-TOKEN` <br/> `X-MS-TOKEN-GOOGLE-EXPIRES-ON` <br/> `X-MS-TOKEN-GOOGLE-REFRESH-TOKEN` |
-| Twitter | `X-MS-TOKEN-TWITTER-ACCESS-TOKEN` <br/> `X-MS-TOKEN-TWITTER-ACCESS-TOKEN-SECRET` |
-|||
-
-From your client code (such as a mobile app or in-browser JavaScript), send an HTTP `GET` request to `/.auth/me` ([token store](overview-authentication-authorization.md#token-store) must be enabled). The returned JSON has the provider-specific tokens.
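
For example, a raw request might look like the following sketch. As with the earlier product request, the `X-ZUMO-AUTH` header carries the session token (browser clients signed in through App Service can rely on the auth cookie instead):

```
GET https://<appname>.azurewebsites.net/.auth/me
X-ZUMO-AUTH: <authenticationToken_value>
```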
-
-> [!NOTE]
-> Access tokens are for accessing provider resources, so they are present only if you configure your provider with a client secret. To see how to get refresh tokens, see [Refresh identity provider tokens](#refresh-identity-provider-tokens).
-
-## Refresh identity provider tokens
-
-When your provider's access token (not the [session token](#extend-session-token-expiration-grace-period)) expires, you need to reauthenticate the user before you use that token again. You can avoid token expiration by making a `GET` call to the `/.auth/refresh` endpoint of your application. When called, App Service automatically refreshes the access tokens in the [token store](overview-authentication-authorization.md#token-store) for the authenticated user. Subsequent requests for tokens by your app code get the refreshed tokens. However, for token refresh to work, the token store must contain [refresh tokens](https://auth0.com/learn/refresh-tokens/) for your provider. The way to get refresh tokens is documented by each provider, but the following list is a brief summary:
-- **Google**: Append an `access_type=offline` query string parameter to your `/.auth/login/google` API call. If using the Mobile Apps SDK, you can add the parameter to one of the `LoginAsync` overloads (see [Google Refresh Tokens](https://developers.google.com/identity/protocols/OpenIDConnect#refresh-tokens)).
-- **Facebook**: Doesn't provide refresh tokens. Long-lived tokens expire in 60 days (see [Facebook Expiration and Extension of Access Tokens](https://developers.facebook.com/docs/facebook-login/access-tokens/expiration-and-extension)).
-- **Twitter**: Access tokens don't expire (see [Twitter OAuth FAQ](https://developer.twitter.com/en/docs/authentication/faq)).
-- **Azure Active Directory**: In [https://resources.azure.com](https://resources.azure.com), do the following steps:
- 1. At the top of the page, select **Read/Write**.
- 2. In the left browser, navigate to **subscriptions** > **_\<subscription\_name_** > **resourceGroups** > **_\<resource\_group\_name>_** > **providers** > **Microsoft.Web** > **sites** > **_\<app\_name>_** > **config** > **authsettings**.
- 3. Click **Edit**.
- 4. Modify the following property. Replace _\<app\_id>_ with the Azure Active Directory application ID of the service you want to access.
-
- ```json
- "additionalLoginParams": ["response_type=code id_token", "resource=<app_id>"]
- ```
-
- 5. Click **Put**.
-
-Once your provider is configured, you can [find the refresh token and the expiration time for the access token](#retrieve-tokens-in-app-code) in the token store.
-
-To refresh your access token at any time, just call `/.auth/refresh` in any language. The following snippet uses jQuery to refresh your access tokens from a JavaScript client.
-
-```javascript
-function refreshTokens() {
- let refreshUrl = "/.auth/refresh";
-    $.ajax(refreshUrl)
-        .done(function() {
-            console.log("Token refresh completed successfully.");
-        })
-        .fail(function() {
-            console.log("Token refresh failed. See application logs for details.");
-        });
-}
-```
-
-If a user revokes the permissions granted to your app, your call to `/.auth/me` may fail with a `403 Forbidden` response. To diagnose errors, check your application logs for details.
-
-## Extend session token expiration grace period
-
-The authenticated session expires after 8 hours. After an authenticated session expires, there is a 72-hour grace period by default. Within this grace period, you're allowed to refresh the session token with App Service without reauthenticating the user. You can just call `/.auth/refresh` when your session token becomes invalid, and you don't need to track token expiration yourself. Once the 72-hour grace period lapses, the user must sign in again to get a valid session token.
-
-If 72 hours isn't enough time for you, you can extend this expiration window. Extending the expiration over a long period could have significant security implications (such as when an authentication token is leaked or stolen). So you should leave it at the default 72 hours or set the extension period to the smallest value.
-
-To extend the default expiration window, run the following command in the [Cloud Shell](../cloud-shell/overview.md).
-
-```azurecli-interactive
-az webapp auth update --resource-group <group_name> --name <app_name> --token-refresh-extension-hours <hours>
-```
-
-> [!NOTE]
-> The grace period only applies to the App Service authenticated session, not the tokens from the identity providers. There is no grace period for the expired provider tokens.
->
-
-## Limit the domain of sign-in accounts
-
-Both Microsoft Account and Azure Active Directory let you sign in from multiple domains. For example, Microsoft Account allows _outlook.com_, _live.com_, and _hotmail.com_ accounts. Azure AD allows any number of custom domains for the sign-in accounts. However, you may want to accelerate your users straight to your own branded Azure AD sign-in page (such as `contoso.com`). To suggest the domain name of the sign-in accounts, follow these steps.
-
-In [https://resources.azure.com](https://resources.azure.com), navigate to **subscriptions** > **_\<subscription\_name_** > **resourceGroups** > **_\<resource\_group\_name>_** > **providers** > **Microsoft.Web** > **sites** > **_\<app\_name>_** > **config** > **authsettings**.
-
-Click **Edit**, modify the following property, and then click **Put**. Be sure to replace _\<domain\_name>_ with the domain you want.
-
-```json
-"additionalLoginParams": ["domain_hint=<domain_name>"]
-```
-
-This setting appends the `domain_hint` query string parameter to the login redirect URL.
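
For illustration, with `domain_hint=contoso.com` configured, the redirect that App Service issues to the sign-in page would carry the parameter, roughly like this sketch (other query parameters omitted, endpoint shown for global Azure AD):

```
https://login.microsoftonline.com/<tenant>/oauth2/authorize?...&domain_hint=contoso.com
```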
-
-> [!IMPORTANT]
-> It's possible for the client to remove the `domain_hint` parameter after receiving the redirect URL, and then login with a different domain. So while this function is convenient, it's not a security feature.
->
-
-## Authorize or deny users
-
-While App Service takes care of the simplest authorization case (that is, rejecting unauthenticated requests), your app may require more fine-grained authorization behavior, such as limiting access to only a specific group of users. In certain cases, you need to write custom application code to allow or deny access to the signed-in user. In other cases, App Service or your identity provider may be able to help without requiring code changes.
-- [Server level](#server-level-windows-apps-only)
-- [Identity provider level](#identity-provider-level)
-- [Application level](#application-level)
-
-### Server level (Windows apps only)
-
-For any Windows app, you can define authorization behavior of the IIS web server by editing the *Web.config* file. Linux apps don't use IIS and can't be configured through *Web.config*.
-
-1. Navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole`
-
-1. In the browser explorer of your App Service files, navigate to *site/wwwroot*. If a *Web.config* doesn't exist, create it by selecting **+** > **New File**.
-
-1. Select the pencil for *Web.config* to edit it. Add the following configuration code and click **Save**. If *Web.config* already exists, just add the `<authorization>` element with everything in it. Add the accounts you want to allow in the `<allow>` element.
-
- ```xml
- <?xml version="1.0" encoding="utf-8"?>
- <configuration>
- <system.web>
- <authorization>
- <allow users="user1@contoso.com,user2@contoso.com"/>
- <deny users="*"/>
- </authorization>
- </system.web>
- </configuration>
- ```
-
-### Identity provider level
-
-The identity provider may provide certain turn-key authorization. For example:
-- For [Azure App Service](configure-authentication-provider-aad.md), you can [manage enterprise-level access](../active-directory/manage-apps/what-is-access-management.md) directly in Azure AD. For instructions, see [How to remove a user's access to an application](../active-directory/manage-apps/methods-for-removing-user-access.md).
-- For [Google](configure-authentication-provider-google.md), Google API projects that belong to an [organization](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#organizations) can be configured to allow access only to users in your organization (see [Google's **Setting up OAuth 2.0** support page](https://support.google.com/cloud/answer/6158849?hl=en)).
-
-### Application level
-
-If neither of the other levels provides the authorization you need, or if your platform or identity provider isn't supported, you must write custom code to authorize users based on the [user claims](#access-user-claims).
-
-## Updating the configuration version
-
-There are two versions of the management API for the Authentication / Authorization feature. The V2 version is required for the "Authentication" experience in the Azure portal. An app already using the V1 API can upgrade to the V2 version once a few changes have been made. Specifically, secret configuration must be moved to slot-sticky application settings. This can be done automatically from the "Authentication" section of the portal for your app.
-
-> [!WARNING]
-> Migration to V2 will disable management of the App Service Authentication / Authorization feature for your application through some clients, such as its existing experience in the Azure portal, Azure CLI, and Azure PowerShell. This cannot be reversed.
-
-The V2 API does not support creation or editing of Microsoft Account as a distinct provider as was done in V1. Rather, it leverages the converged [Microsoft Identity Platform](../active-directory/develop/v2-overview.md) to sign in users with both Azure AD and personal Microsoft accounts. When switching to the V2 API, the V1 Azure Active Directory configuration is used to configure the Microsoft Identity Platform provider. The V1 Microsoft Account provider will be carried forward in the migration process and continue to operate as normal, but it is recommended that you move to the newer Microsoft Identity Platform model. See [Support for Microsoft Account provider registrations](#support-for-microsoft-account-provider-registrations) to learn more.
-
-The automated migration process will move provider secrets into application settings and then convert the rest of the configuration into the new format. To use the automatic migration:
-
-1. Navigate to your app in the portal and select the **Authentication** menu option.
-1. If the app is configured using the V1 model, you will see an **Upgrade** button.
-1. Review the description in the confirmation prompt. If you are ready to perform the migration, click **Upgrade** in the prompt.
-
-### Manually managing the migration
-
-The following steps will allow you to manually migrate the application to the V2 API if you do not wish to use the automatic version mentioned above.
-
-#### Moving secrets to application settings
-
-1. Get your existing configuration by using the V1 API:
-
- ```azurecli
- # For Web Apps
- az webapp auth show -g <group_name> -n <site_name>
- ```
-
- In the resulting JSON payload, make note of the secret value used for each provider you have configured:
-
- * AAD: `clientSecret`
- * Google: `googleClientSecret`
- * Facebook: `facebookAppSecret`
- * Twitter: `twitterConsumerSecret`
- * Microsoft Account: `microsoftAccountClientSecret`
-
- > [!IMPORTANT]
- > The secret values are important security credentials and should be handled carefully. Do not share these values or persist them on a local machine.
-
-1. Create slot-sticky application settings for each secret value. You may choose the name of each application setting. Its value should match what you obtained in the previous step or [reference a Key Vault secret](./app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json) that you have created with that value.
-
- To create the setting, you can use the Azure portal or run a variation of the following for each provider:
-
- ```azurecli
- # For Web Apps, Google example
- az webapp config appsettings set -g <group_name> -n <site_name> --slot-settings GOOGLE_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step>
-
- # For Azure Functions, Twitter example
- az functionapp config appsettings set -g <group_name> -n <site_name> --slot-settings TWITTER_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step>
- ```
-
- > [!NOTE]
- > The application settings for this configuration should be marked as slot-sticky, meaning that they will not move between environments during a [slot swap operation](./deploy-staging-slots.md). This is because your authentication configuration itself is tied to the environment.
-
-1. Create a new JSON file named `authsettings.json`. Take the output that you received previously and remove each secret value from it. Write the remaining output to the file, making sure that no secret is included. In some cases, the configuration may have arrays containing empty strings. Make sure that `microsoftAccountOAuthScopes` does not, and if it does, switch that value to `null`.
-
-1. Add a property to `authsettings.json` which points to the application setting name you created earlier for each provider:
-
- * AAD: `clientSecretSettingName`
- * Google: `googleClientSecretSettingName`
- * Facebook: `facebookAppSecretSettingName`
- * Twitter: `twitterConsumerSecretSettingName`
- * Microsoft Account: `microsoftAccountClientSecretSettingName`
-
- An example file after this operation might look similar to the following, in this case only configured for AAD:
-
- ```json
- {
- "id": "/subscriptions/00d563f8-5b89-4c6a-bcec-c1b9f6d607e0/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/mywebapp/config/authsettings",
- "name": "authsettings",
- "type": "Microsoft.Web/sites/config",
- "location": "Central US",
- "properties": {
- "enabled": true,
- "runtimeVersion": "~1",
- "unauthenticatedClientAction": "AllowAnonymous",
- "tokenStoreEnabled": true,
- "allowedExternalRedirectUrls": null,
- "defaultProvider": "AzureActiveDirectory",
- "clientId": "3197c8ed-2470-480a-8fae-58c25558ac9b",
- "clientSecret": "",
- "clientSecretSettingName": "MICROSOFT_IDENTITY_AUTHENTICATION_SECRET",
- "clientSecretCertificateThumbprint": null,
- "issuer": "https://sts.windows.net/0b2ef922-672a-4707-9643-9a5726eec524/",
- "allowedAudiences": [
- "https://mywebapp.azurewebsites.net"
- ],
- "additionalLoginParams": null,
- "isAadAutoProvisioned": true,
- "aadClaimsAuthorization": null,
- "googleClientId": null,
- "googleClientSecret": null,
- "googleClientSecretSettingName": null,
- "googleOAuthScopes": null,
- "facebookAppId": null,
- "facebookAppSecret": null,
- "facebookAppSecretSettingName": null,
- "facebookOAuthScopes": null,
- "gitHubClientId": null,
- "gitHubClientSecret": null,
- "gitHubClientSecretSettingName": null,
- "gitHubOAuthScopes": null,
- "twitterConsumerKey": null,
- "twitterConsumerSecret": null,
- "twitterConsumerSecretSettingName": null,
- "microsoftAccountClientId": null,
- "microsoftAccountClientSecret": null,
- "microsoftAccountClientSecretSettingName": null,
- "microsoftAccountOAuthScopes": null,
- "isAuthFromFile": "false"
- }
- }
- ```
-
-1. Submit this file as the new Authentication/Authorization configuration for your app:
-
- ```azurecli
- az rest --method PUT --url "/subscriptions/<subscription_id>/resourceGroups/<group_name>/providers/Microsoft.Web/sites/<site_name>/config/authsettings?api-version=2020-06-01" --body @./authsettings.json
- ```
-
-1. Validate that your app is still operating as expected after this change.
-
-1. Delete the file used in the previous steps.
-
-You have now migrated the app to store identity provider secrets as application settings.
-
-#### Support for Microsoft Account provider registrations
-
-If your existing configuration contains a Microsoft Account provider and does not contain an Azure Active Directory provider, you can switch the configuration over to the Azure Active Directory provider and then perform the migration. To do this:
-
-1. Go to [**App registrations**](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) in the Azure portal and find the registration associated with your Microsoft Account provider. It may be under the "Applications from personal account" heading.
-1. Navigate to the "Authentication" page for the registration. Under "Redirect URIs" you should see an entry ending in `/.auth/login/microsoftaccount/callback`. Copy this URI.
-1. Add a new URI that matches the one you just copied, except instead have it end in `/.auth/login/aad/callback`. This will allow the registration to be used by the App Service Authentication / Authorization configuration.
-1. Navigate to the App Service Authentication / Authorization configuration for your app.
-1. Collect the configuration for the Microsoft Account provider.
-1. Configure the Azure Active Directory provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
-1. Once you have saved the configuration, test the login flow by navigating in your browser to the `/.auth/login/aad` endpoint on your site and complete the sign-in flow.
-1. At this point, you have successfully copied the configuration over, but the existing Microsoft Account provider configuration remains. Before you remove it, make sure that all parts of your app reference the Azure Active Directory provider through login links, etc. Verify that all parts of your app work as expected.
-1. Once you have validated that things work against the Azure Active Directory provider, you may remove the Microsoft Account provider configuration.
-
-> [!WARNING]
-> It is possible to converge the two registrations by modifying the [supported account types](../active-directory/develop/supported-accounts-validation.md) for the AAD app registration. However, this would force a new consent prompt for Microsoft Account users, and those users' identity claims may be different in structure, `sub` notably changing values since a new App ID is being used. This approach is not recommended unless thoroughly understood. You should instead wait for support for the two registrations in the V2 API surface.
-
-#### Switching to V2
-
-Once the above steps have been performed, navigate to the app in the Azure portal. Select the "Authentication (preview)" section.
-
-Alternatively, you may make a PUT request against the `config/authsettingsv2` resource under the site resource. The schema for the payload is the same as captured in the [Configure using a file](#config-file) section.
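
As a sketch, such a request mirrors the `az rest` call shown for `authsettings` earlier; the payload file name is a placeholder, and the api-version is carried over from that example, so adjust it if your environment requires a newer one:

```azurecli
# Sketch: submit a V2 payload directly (schema per the file-based configuration reference below).
az rest --method PUT \
    --url "/subscriptions/<subscription_id>/resourceGroups/<group_name>/providers/Microsoft.Web/sites/<site_name>/config/authsettingsV2?api-version=2020-06-01" \
    --body @./authsettingsv2.json
```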
-
-## <a name="config-file"> </a>Configure using a file (preview)
-
-Your auth settings can optionally be configured via a file that is provided by your deployment. This may be required by certain preview capabilities of App Service Authentication / Authorization.
-
-> [!IMPORTANT]
-> Remember that your app payload, and therefore this file, may move between environments, as with [slots](./deploy-staging-slots.md). It is likely you would want a different app registration pinned to each slot, and in these cases, you should continue to use the standard configuration method instead of using the configuration file.
-
-### Enabling file-based configuration
-
-> [!CAUTION]
-> During preview, enabling file-based configuration will disable management of the App Service Authentication / Authorization feature for your application through some clients, such as the Azure portal, Azure CLI, and Azure PowerShell.
-
-1. Create a new JSON file for your configuration at the root of your project (deployed to D:\home\site\wwwroot in your web / function app). Fill in your desired configuration according to the [file-based configuration reference](#configuration-file-reference). If modifying an existing Azure Resource Manager configuration, make sure to translate the properties captured in the `authsettings` collection into your configuration file.
-
-2. Modify the existing configuration, which is captured in the [Azure Resource Manager](../azure-resource-manager/management/overview.md) APIs under `Microsoft.Web/sites/<siteName>/config/authsettings`. To modify this, you can use an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) or a tool like [Azure Resource Explorer](https://resources.azure.com/). Within the authsettings collection, you will need to set three properties (and may remove others):
-
- 1. Set `enabled` to "true"
- 2. Set `isAuthFromFile` to "true"
- 3. Set `authFilePath` to the name of the file (for example, "auth.json")
-
-> [!NOTE]
-> The format for `authFilePath` varies between platforms. On Windows, both relative and absolute paths are supported. Relative is recommended. For Linux, only absolute paths are supported currently, so the value of the setting should be "/home/site/wwwroot/auth.json" or similar.
-
-Once you have made this configuration update, the contents of the file will be used to define the behavior of App Service Authentication / Authorization for that site. If you ever wish to return to Azure Resource Manager configuration, you can do so by setting `isAuthFromFile` back to "false".
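
One hedged way to flip these properties is another `az rest` call. This sketch sends only the three values discussed above; because a bare PUT replaces the whole `authsettings` object, merge them into your full existing configuration first:

```azurecli
# Sketch: point App Service Authentication / Authorization at the file-based configuration.
# Merge these properties into your complete authsettings payload before submitting.
az rest --method PUT \
    --url "/subscriptions/<subscription_id>/resourceGroups/<group_name>/providers/Microsoft.Web/sites/<site_name>/config/authsettings?api-version=2020-06-01" \
    --body '{"properties": {"enabled": true, "isAuthFromFile": "true", "authFilePath": "auth.json"}}'
```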
-
-### Configuration file reference
-
-Any secrets that will be referenced from your configuration file must be stored as [application settings](./configure-common.md#configure-app-settings). You may name the settings anything you wish. Just make sure that the references from the configuration file use the same keys.
-
-The following shows the full set of possible configuration options within the file:
-
-```json
-{
- "platform": {
- "enabled": <true|false>
- },
- "globalValidation": {
- "unauthenticatedClientAction": "RedirectToLoginPage|AllowAnonymous|Return401|Return403",
- "redirectToProvider": "<default provider alias>",
- "excludedPaths": [
- "/path1",
- "/path2"
- ]
- },
- "httpSettings": {
- "requireHttps": <true|false>,
- "routes": {
- "apiPrefix": "<api prefix>"
- },
- "forwardProxy": {
- "convention": "NoProxy|Standard|Custom",
- "customHostHeaderName": "<host header value>",
- "customProtoHeaderName": "<proto header value>"
- }
- },
- "login": {
- "routes": {
- "logoutEndpoint": "<logout endpoint>"
- },
- "tokenStore": {
- "enabled": <true|false>,
- "tokenRefreshExtensionHours": "<double>",
- "fileSystem": {
- "directory": "<directory to store the tokens in if using a file system token store (default)>"
- },
- "azureBlobStorage": {
- "sasUrlSettingName": "<app setting name containing the sas url for the Azure Blob Storage if opting to use that for a token store>"
- }
- },
- "preserveUrlFragmentsForLogins": <true|false>,
- "allowedExternalRedirectUrls": [
- "https://uri1.azurewebsites.net/",
- "https://uri2.azurewebsites.net/",
- "url_scheme_of_your_app://easyauth.callback"
- ],
- "cookieExpiration": {
- "convention": "FixedTime|IdentityDerived",
- "timeToExpiration": "<timespan>"
- },
- "nonce": {
- "validateNonce": <true|false>,
- "nonceExpirationInterval": "<timespan>"
- }
- },
- "identityProviders": {
- "azureActiveDirectory": {
- "enabled": <true|false>,
- "registration": {
- "openIdIssuer": "<issuer url>",
- "clientId": "<app id>",
-        "clientSecretSettingName": "APP_SETTING_CONTAINING_AAD_SECRET"
- },
- "login": {
- "loginParameters": [
- "paramName1=value1",
- "paramName2=value2"
- ]
- },
- "validation": {
- "allowedAudiences": [
- "audience1",
- "audience2"
- ]
- }
- },
- "facebook": {
- "enabled": <true|false>,
- "registration": {
- "appId": "<app id>",
- "appSecretSettingName": "APP_SETTING_CONTAINING_FACEBOOK_SECRET"
- },
- "graphApiVersion": "v3.3",
- "login": {
- "scopes": [
- "public_profile",
- "email"
- ]
-      }
- },
- "gitHub": {
- "enabled": <true|false>,
- "registration": {
- "clientId": "<client id>",
- "clientSecretSettingName": "APP_SETTING_CONTAINING_GITHUB_SECRET"
- },
- "login": {
- "scopes": [
- "profile",
- "email"
- ]
- }
- },
- "google": {
-      "enabled": <true|false>,
- "registration": {
- "clientId": "<client id>",
- "clientSecretSettingName": "APP_SETTING_CONTAINING_GOOGLE_SECRET"
- },
- "login": {
- "scopes": [
- "profile",
- "email"
- ]
- },
- "validation": {
- "allowedAudiences": [
- "audience1",
- "audience2"
- ]
- }
- },
- "twitter": {
- "enabled": <true|false>,
- "registration": {
- "consumerKey": "<consumer key>",
-        "consumerSecretSettingName": "APP_SETTING_CONTAINING_TWITTER_CONSUMER_SECRET"
- }
- },
- "apple": {
- "enabled": <true|false>,
- "registration": {
- "clientId": "<client id>",
- "clientSecretSettingName": "APP_SETTING_CONTAINING_APPLE_SECRET"
- },
- "login": {
- "scopes": [
- "profile",
- "email"
- ]
- }
- },
- "openIdConnectProviders": {
- "<providerName>": {
- "enabled": <true|false>,
- "registration": {
- "clientId": "<client id>",
- "clientCredential": {
- "clientSecretSettingName": "<name of app setting containing client secret>"
- },
- "openIdConnectConfiguration": {
- "authorizationEndpoint": "<url specifying authorization endpoint>",
- "tokenEndpoint": "<url specifying token endpoint>",
- "issuer": "<url specifying issuer>",
- "certificationUri": "<url specifying jwks endpoint>",
- "wellKnownOpenIdConfiguration": "<url specifying .well-known/open-id-configuration endpoint - if this property is set, the other properties of this object are ignored, and authorizationEndpoint, tokenEndpoint, issuer, and certificationUri are set to the corresponding values listed at this endpoint>"
- }
- },
- "login": {
- "nameClaimType": "<name of claim containing name>",
- "scopes": [
- "openid",
- "profile",
- "email"
- ],
- "loginParameterNames": [
- "paramName1=value1",
- "paramName2=value2"
-          ]
- }
- },
- //...
- }
- }
-}
-```
-
-## Pin your app to a specific authentication runtime version
-
-When you enable Authentication / Authorization, platform middleware is injected into your HTTP request pipeline as described in the [feature overview](overview-authentication-authorization.md#how-it-works). This platform middleware is periodically updated with new features and improvements as part of routine platform updates. By default, your web or function app will run on the latest version of this platform middleware. These automatic updates are always backwards compatible. However, in the rare event that this automatic update introduces a runtime issue for your web or function app, you can temporarily roll back to the previous middleware version. This article explains how to temporarily pin an app to a specific version of the authentication middleware.
-
-### Automatic and manual version updates
-
-You can pin your app to a specific version of the platform middleware by setting a `runtimeVersion` setting for the app. Your app always runs on the latest version unless you choose to explicitly pin it back to a specific version. There will be a few versions supported at a time. If you pin to an invalid version that is no longer supported, your app will use the latest version instead. To always run the latest version, set `runtimeVersion` to ~1.
-
-### View and update the current runtime version
-
-You can change the runtime version used by your app. The new runtime version should take effect after restarting the app.
-
-#### View the current runtime version
-
-You can view the current version of the platform authentication middleware either using the Azure CLI or via one of the built-in version HTTP endpoints in your app.
-
-##### From the Azure CLI
-
-Using the Azure CLI, view the current middleware version with the [az webapp auth show](/cli/azure/webapp/auth#az_webapp_auth_show) command.
-
-```azurecli-interactive
-az webapp auth show --name <my_app_name> \
---resource-group <my_resource_group>
-```
-
-In this code, replace `<my_app_name>` with the name of your app. Also replace `<my_resource_group>` with the name of the resource group for your app.
-
-You will see the `runtimeVersion` field in the CLI output. It will resemble the following example output, which has been truncated for clarity:
-```output
-{
- "additionalLoginParams": null,
- "allowedAudiences": null,
- ...
- "runtimeVersion": "1.3.2",
- ...
-}
-```
-
-##### From the version endpoint
-
-You can also hit the `/.auth/version` endpoint on an app to view the current middleware version that the app is running on. It will resemble the following example output:
-```output
-{
-  "version": "1.3.2"
-}
-```
-
-#### Update the current runtime version
-
-Using the Azure CLI, you can update the `runtimeVersion` setting in the app with the [az webapp auth update](/cli/azure/webapp/auth#az_webapp_auth_update) command.
-
-```azurecli-interactive
-az webapp auth update --name <my_app_name> \
---resource-group <my_resource_group> \
---runtime-version <version>
-```
-
-Replace `<my_app_name>` with the name of your app. Also replace `<my_resource_group>` with the name of the resource group for your app. Also, replace `<version>` with a valid version of the 1.x runtime or `~1` for the latest version. See the [release notes on the different runtime versions](https://github.com/Azure/app-service-announcements) to help determine the version to pin to.
-
-You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az_login) to sign in.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
app-service Configure Authentication Api Version https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-api-version.md
+
+ Title: Manage AuthN/AuthZ API versions
+description: Upgrade your App Service authentication API to V2 or pin it to a specific version, if needed.
+ Last updated : 03/29/2021+++
+# Manage the API and runtime versions of App Service authentication
+
+This article shows you how to customize the API and runtime versions of the built-in [authentication and authorization in App Service](overview-authentication-authorization.md).
+
+There are two versions of the management API for App Service authentication. The V2 version is required for the "Authentication" experience in the Azure portal. An app already using the V1 API can upgrade to the V2 version once a few changes have been made. Specifically, secret configuration must be moved to slot-sticky application settings. This can be done automatically from the "Authentication" section of the portal for your app.
+
+## Update the configuration version
+
+> [!WARNING]
+> Migration to V2 will disable management of the App Service Authentication / Authorization feature for your application through some clients, such as its existing experience in the Azure portal, Azure CLI, and Azure PowerShell. This cannot be reversed.
+
+The V2 API does not support creation or editing of Microsoft Account as a distinct provider as was done in V1. Rather, it leverages the converged [Microsoft Identity Platform](../active-directory/develop/v2-overview.md) to sign in users with both Azure AD and personal Microsoft accounts. When switching to the V2 API, the V1 Azure Active Directory configuration is used to configure the Microsoft Identity Platform provider. The V1 Microsoft Account provider will be carried forward in the migration process and continue to operate as normal, but it is recommended that you move to the newer Microsoft Identity Platform model. See [Support for Microsoft Account provider registrations](#support-for-microsoft-account-provider-registrations) to learn more.
+
+The automated migration process will move provider secrets into application settings and then convert the rest of the configuration into the new format. To use the automatic migration:
+
+1. Navigate to your app in the portal and select the **Authentication** menu option.
+1. If the app is configured using the V1 model, you will see an **Upgrade** button.
+1. Review the description in the confirmation prompt. If you are ready to perform the migration, click **Upgrade** in the prompt.
+
+### Manually managing the migration
+
+The following steps will allow you to manually migrate the application to the V2 API if you do not wish to use the automatic version mentioned above.
+
+#### Moving secrets to application settings
+
+1. Get your existing configuration by using the V1 API:
+
+ ```azurecli
+ az webapp auth show -g <group_name> -n <site_name>
+ ```
+
+ In the resulting JSON payload, make note of the secret value used for each provider you have configured:
+
+ * AAD: `clientSecret`
+ * Google: `googleClientSecret`
+ * Facebook: `facebookAppSecret`
+ * Twitter: `twitterConsumerSecret`
+ * Microsoft Account: `microsoftAccountClientSecret`
+
+ > [!IMPORTANT]
+ > The secret values are important security credentials and should be handled carefully. Do not share these values or persist them on a local machine.
+
+1. Create slot-sticky application settings for each secret value. You may choose the name of each application setting. Its value should match what you obtained in the previous step or [reference a Key Vault secret](./app-service-key-vault-references.md?toc=/azure/azure-functions/toc.json) that you have created with that value.
+
+ To create the setting, you can use the Azure portal or run a variation of the following for each provider:
+
+ ```azurecli
+ # For Web Apps, Google example
+ az webapp config appsettings set -g <group_name> -n <site_name> --slot-settings GOOGLE_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step>
+
+ # For Azure Functions, Twitter example
+ az functionapp config appsettings set -g <group_name> -n <site_name> --slot-settings TWITTER_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step>
+ ```
+
+ > [!NOTE]
+ > The application settings for this configuration should be marked as slot-sticky, meaning that they will not move between environments during a [slot swap operation](./deploy-staging-slots.md). This is because your authentication configuration itself is tied to the environment.
+
+1. Create a new JSON file named `authsettings.json`. Take the output that you received previously and remove each secret value from it. Write the remaining output to the file, making sure that no secret is included. In some cases, the configuration may have arrays containing empty strings. Make sure that `microsoftAccountOAuthScopes` does not, and if it does, switch that value to `null`.
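+
+   One way to capture the payload before editing is to save it straight to the file (a sketch; the file name matches this step):
+
+   ```azurecli
+   # Save the V1 configuration, then remove each secret value from the file by hand
+   az webapp auth show -g <group_name> -n <site_name> > authsettings.json
+   ```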
+
+1. Add a property to `authsettings.json` which points to the application setting name you created earlier for each provider:
+
+ * AAD: `clientSecretSettingName`
+ * Google: `googleClientSecretSettingName`
+ * Facebook: `facebookAppSecretSettingName`
+ * Twitter: `twitterConsumerSecretSettingName`
+ * Microsoft Account: `microsoftAccountClientSecretSettingName`
+
+ An example file after this operation might look similar to the following, in this case only configured for AAD:
+
+ ```json
+ {
+ "id": "/subscriptions/00d563f8-5b89-4c6a-bcec-c1b9f6d607e0/resourceGroups/myresourcegroup/providers/Microsoft.Web/sites/mywebapp/config/authsettings",
+ "name": "authsettings",
+ "type": "Microsoft.Web/sites/config",
+ "location": "Central US",
+ "properties": {
+ "enabled": true,
+ "runtimeVersion": "~1",
+ "unauthenticatedClientAction": "AllowAnonymous",
+ "tokenStoreEnabled": true,
+ "allowedExternalRedirectUrls": null,
+ "defaultProvider": "AzureActiveDirectory",
+ "clientId": "3197c8ed-2470-480a-8fae-58c25558ac9b",
+ "clientSecret": "",
+ "clientSecretSettingName": "MICROSOFT_IDENTITY_AUTHENTICATION_SECRET",
+ "clientSecretCertificateThumbprint": null,
+ "issuer": "https://sts.windows.net/0b2ef922-672a-4707-9643-9a5726eec524/",
+ "allowedAudiences": [
+ "https://mywebapp.azurewebsites.net"
+ ],
+ "additionalLoginParams": null,
+ "isAadAutoProvisioned": true,
+ "aadClaimsAuthorization": null,
+ "googleClientId": null,
+ "googleClientSecret": null,
+ "googleClientSecretSettingName": null,
+ "googleOAuthScopes": null,
+ "facebookAppId": null,
+ "facebookAppSecret": null,
+ "facebookAppSecretSettingName": null,
+ "facebookOAuthScopes": null,
+ "gitHubClientId": null,
+ "gitHubClientSecret": null,
+ "gitHubClientSecretSettingName": null,
+ "gitHubOAuthScopes": null,
+ "twitterConsumerKey": null,
+ "twitterConsumerSecret": null,
+ "twitterConsumerSecretSettingName": null,
+ "microsoftAccountClientId": null,
+ "microsoftAccountClientSecret": null,
+ "microsoftAccountClientSecretSettingName": null,
+ "microsoftAccountOAuthScopes": null,
+ "isAuthFromFile": "false"
+ }
+ }
+ ```
+
+1. Submit this file as the new Authentication/Authorization configuration for your app:
+
+ ```azurecli
+ az rest --method PUT --url "/subscriptions/<subscription_id>/resourceGroups/<group_name>/providers/Microsoft.Web/sites/<site_name>/config/authsettings?api-version=2020-06-01" --body @./authsettings.json
+ ```
+
+1. Validate that your app is still operating as expected after this change.
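+
+   For example, one quick check is to re-read the configuration with the same command used earlier and confirm that each secret field is now empty:
+
+   ```azurecli
+   az webapp auth show -g <group_name> -n <site_name>
+   ```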
+
+1. Delete the file used in the previous steps.
+
+You have now migrated the app to store identity provider secrets as application settings.
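+
+To confirm the new application settings are in place, a quick listing works (a sketch, shown for Web Apps; `az functionapp config appsettings list` is the Functions equivalent):
+
+```azurecli
+az webapp config appsettings list -g <group_name> -n <site_name>
+```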
+
+#### Support for Microsoft Account provider registrations
+
+If your existing configuration contains a Microsoft Account provider and does not contain an Azure Active Directory provider, you can switch the configuration over to the Azure Active Directory provider and then perform the migration. To do this:
+
+1. Go to [**App registrations**](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) in the Azure portal and find the registration associated with your Microsoft Account provider. It may be under the "Applications from personal account" heading.
+1. Navigate to the "Authentication" page for the registration. Under "Redirect URIs" you should see an entry ending in `/.auth/login/microsoftaccount/callback`. Copy this URI.
+1. Add a new URI that matches the one you just copied, except instead have it end in `/.auth/login/aad/callback`. This will allow the registration to be used by the App Service Authentication / Authorization configuration.
+1. Navigate to the App Service Authentication / Authorization configuration for your app.
+1. Collect the configuration for the Microsoft Account provider.
+1. Configure the Azure Active Directory provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
+1. Once you have saved the configuration, test the login flow by navigating in your browser to the `/.auth/login/aad` endpoint on your site and complete the sign-in flow.
+1. At this point, you have successfully copied the configuration over, but the existing Microsoft Account provider configuration remains. Before you remove it, make sure that all parts of your app reference the Azure Active Directory provider through login links, etc. Verify that all parts of your app work as expected.
+1. Once you have validated that things work against the Azure Active Directory provider, you may remove the Microsoft Account provider configuration.
+
+> [!WARNING]
+> It is possible to converge the two registrations by modifying the [supported account types](../active-directory/develop/supported-accounts-validation.md) for the AAD app registration. However, this would force a new consent prompt for Microsoft Account users, and those users' identity claims may differ in structure; notably, the `sub` claim changes value because a new App ID is being used. This approach is not recommended unless thoroughly understood. You should instead wait for support for the two registrations in the V2 API surface.
+
+#### Switching to V2
+
+Once the above steps have been performed, navigate to the app in the Azure portal. Select the "Authentication (preview)" section.
+
+Alternatively, you may make a PUT request against the `config/authsettingsv2` resource under the site resource. The schema for the payload is the same as captured in [File-based configuration](configure-authentication-file-based.md).
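+
+For example, a minimal sketch with the Azure CLI, assuming the V2 payload is saved in a local `authsettingsv2.json` file and that the api-version from the earlier `authsettings` example also applies here:
+
+```azurecli
+az rest --method PUT --url "/subscriptions/<subscription_id>/resourceGroups/<group_name>/providers/Microsoft.Web/sites/<site_name>/config/authsettingsv2?api-version=2020-06-01" --body @./authsettingsv2.json
+```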
+
+## Pin your app to a specific authentication runtime version
+
+When you enable Authentication / Authorization, platform middleware is injected into your HTTP request pipeline as described in the [feature overview](overview-authentication-authorization.md#how-it-works). This platform middleware is periodically updated with new features and improvements as part of routine platform updates. By default, your web or function app will run on the latest version of this platform middleware. These automatic updates are always backwards compatible. However, in the rare event that this automatic update introduces a runtime issue for your web or function app, you can temporarily roll back to the previous middleware version. This article explains how to temporarily pin an app to a specific version of the authentication middleware.
+
+### Automatic and manual version updates
+
+You can pin your app to a specific version of the platform middleware by setting a `runtimeVersion` setting for the app. Your app always runs on the latest version unless you choose to explicitly pin it back to a specific version. There will be a few versions supported at a time. If you pin to an invalid version that is no longer supported, your app will use the latest version instead. To always run the latest version, set `runtimeVersion` to ~1.
+
+### View and update the current runtime version
+
+You can change the runtime version used by your app. The new runtime version should take effect after restarting the app.
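+
+If you need to trigger that restart from the CLI, the standard Web Apps command works (shown here as a sketch with the same placeholder names used below):
+
+```azurecli
+az webapp restart --name <my_app_name> --resource-group <my_resource_group>
+```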
+
+#### View the current runtime version
+
+You can view the current version of the platform authentication middleware either using the Azure CLI or via one of the built-in version HTTP endpoints in your app.
+
+##### From the Azure CLI
+
+Using the Azure CLI, view the current middleware version with the [az webapp auth show](/cli/azure/webapp/auth#az_webapp_auth_show) command.
+
+```azurecli-interactive
+az webapp auth show --name <my_app_name> \
+--resource-group <my_resource_group>
+```
+
+In this code, replace `<my_app_name>` with the name of your app. Also replace `<my_resource_group>` with the name of the resource group for your app.
+
+You will see the `runtimeVersion` field in the CLI output. It will resemble the following example output, which has been truncated for clarity:
+```output
+{
+ "additionalLoginParams": null,
+ "allowedAudiences": null,
+ ...
+ "runtimeVersion": "1.3.2",
+ ...
+}
+```
+
+##### From the version endpoint
+
+You can also hit the `/.auth/version` endpoint on an app to view the current middleware version that the app is running on. It will resemble the following example output:
+```output
+{
+  "version": "1.3.2"
+}
+```
+
+#### Update the current runtime version
+
+Using the Azure CLI, you can update the `runtimeVersion` setting in the app with the [az webapp auth update](/cli/azure/webapp/auth#az_webapp_auth_update) command.
+
+```azurecli-interactive
+az webapp auth update --name <my_app_name> \
+--resource-group <my_resource_group> \
+--runtime-version <version>
+```
+
+Replace `<my_app_name>` with the name of your app. Also replace `<my_resource_group>` with the name of the resource group for your app. Also, replace `<version>` with a valid version of the 1.x runtime or `~1` for the latest version. See the [release notes on the different runtime versions](https://github.com/Azure/app-service-announcements) to help determine the version to pin to.
+
+You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az_login) to sign in.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
app-service Configure Authentication Customize Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-customize-sign-in-out.md
+
+ Title: Customize sign-ins and sign-outs
+description: Use the built-in authentication and authorization in App Service and at the same time customize the sign-in and sign-out behavior.
+ Last updated : 03/29/2021++
+# Customize sign-in and sign-out in Azure App Service authentication
+
+This article shows you how to customize user sign-ins and sign-outs while using the built-in [authentication and authorization in App Service](overview-authentication-authorization.md).
+
+## Use multiple sign-in providers
+
+The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and Twitter). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows:
+
+First, in the **Authentication / Authorization** page in the Azure portal, configure each of the identity provider you want to enable.
+
+In **Action to take when request is not authenticated**, select **Allow Anonymous requests (no action)**.
+
+In the sign-in page, or the navigation bar, or any other location of your app, add a sign-in link to each of the providers you enabled (`/.auth/login/<provider>`). For example:
+
+```html
+<a href="/.auth/login/aad">Log in with the Microsoft Identity Platform</a>
+<a href="/.auth/login/facebook">Log in with Facebook</a>
+<a href="/.auth/login/google">Log in with Google</a>
+<a href="/.auth/login/twitter">Log in with Twitter</a>
+<a href="/.auth/login/apple">Log in with Apple</a>
+```
+
+When the user clicks on one of the links, the respective sign-in page opens to sign in the user.
+
+To redirect the user post-sign-in to a custom URL, use the `post_login_redirect_url` query string parameter (not to be confused with the Redirect URI in your identity provider configuration). For example, to navigate the user to `/Home/Index` after sign-in, use the following HTML code:
+
+```html
+<a href="/.auth/login/<provider>?post_login_redirect_url=/Home/Index">Log in</a>
+```
+
+## Validate tokens from providers
+
+In a client-directed sign-in, the application signs in the user to the provider manually and then submits the authentication token to App Service for validation (see [Authentication flow](overview-authentication-authorization.md#authentication-flow)). This validation itself doesn't actually grant you access to the desired app resources, but a successful validation will give you a session token that you can use to access app resources.
+
+To validate the provider token, the App Service app must first be configured with the desired provider. At runtime, after you retrieve the authentication token from your provider, post the token to `/.auth/login/<provider>` for validation. For example:
+
+```
+POST https://<appname>.azurewebsites.net/.auth/login/aad HTTP/1.1
+Content-Type: application/json
+
+{"id_token":"<token>","access_token":"<token>"}
+```
+
+The token format varies slightly according to the provider. See the following table for details:
+
+| Provider value | Required in request body | Comments |
+|-|-|-|
+| `aad` | `{"access_token":"<access_token>"}` | |
+| `microsoftaccount` | `{"access_token":"<token>"}` | The `expires_in` property is optional. <br/>When requesting the token from Live services, always request the `wl.basic` scope. |
+| `google` | `{"id_token":"<id_token>"}` | The `authorization_code` property is optional. When specified, it can also optionally be accompanied by the `redirect_uri` property. |
+| `facebook`| `{"access_token":"<user_access_token>"}` | Use a valid [user access token](https://developers.facebook.com/docs/facebook-login/access-tokens) from Facebook. |
+| `twitter` | `{"access_token":"<access_token>", "access_token_secret":"<access_token_secret>"}` | |
+| | | |
+
+If the provider token is validated successfully, the API returns with an `authenticationToken` in the response body, which is your session token.
+
+```json
+{
+ "authenticationToken": "...",
+ "user": {
+ "userId": "sid:..."
+ }
+}
+```
+
+Once you have this session token, you can access protected app resources by adding the `X-ZUMO-AUTH` header to your HTTP requests. For example:
+
+```
+GET https://<appname>.azurewebsites.net/api/products/1
+X-ZUMO-AUTH: <authenticationToken_value>
+```
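+
+As a concrete sketch of the same exchange from a shell (the app name is a placeholder), first validate the provider token, then call the protected API with the returned `authenticationToken`:
+
+```bash
+# Validate a provider token and obtain a session token in the response
+curl -X POST "https://<appname>.azurewebsites.net/.auth/login/aad" \
+  -H "Content-Type: application/json" \
+  -d '{"id_token":"<token>","access_token":"<token>"}'
+
+# Use the session token from the response to access a protected resource
+curl "https://<appname>.azurewebsites.net/api/products/1" \
+  -H "X-ZUMO-AUTH: <authenticationToken_value>"
+```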
+
+## Sign out of a session
+
+Users can initiate a sign-out by sending a `GET` request to the app's `/.auth/logout` endpoint. The `GET` request does the following:
+
+- Clears authentication cookies from the current session.
+- Deletes the current user's tokens from the token store.
+- For Azure Active Directory and Google, performs a server-side sign-out on the identity provider.
+
+Here's a simple sign-out link in a webpage:
+
+```html
+<a href="/.auth/logout">Sign out</a>
+```
+
+By default, a successful sign-out redirects the client to the URL `/.auth/logout/done`. You can change the post-sign-out redirect page by adding the `post_logout_redirect_uri` query parameter. For example:
+
+```
+GET /.auth/logout?post_logout_redirect_uri=/index.html
+```
+
+It's recommended that you [encode](https://wikipedia.org/wiki/Percent-encoding) the value of `post_logout_redirect_uri`.
+
+When using fully qualified URLs, the URL must be either hosted in the same domain or configured as an allowed external redirect URL for your app. For example, to redirect to `https://myexternalurl.com`, which is not hosted in the same domain, use the following request:
+
+```
+GET /.auth/logout?post_logout_redirect_uri=https%3A%2F%2Fmyexternalurl.com
+```
+
+To add the external URL to the allowed list, run the following command in the [Azure Cloud Shell](../cloud-shell/quickstart.md):
+
+```azurecli-interactive
+az webapp auth update --name <app_name> --resource-group <group_name> --allowed-external-redirect-urls "https://myexternalurl.com"
+```
+
+## Preserve URL fragments
+
+After users sign in to your app, they usually want to be redirected to the same section of the same page, such as `/wiki/Main_Page#SectionZ`. However, because [URL fragments](https://wikipedia.org/wiki/Fragment_identifier) (for example, `#SectionZ`) are never sent to the server, they are not preserved by default after the OAuth sign-in completes and redirects back to your app. Users then get a suboptimal experience when they need to navigate to the desired anchor again. This limitation applies to all server-side authentication solutions.
+
+In App Service authentication, you can preserve URL fragments across the OAuth sign-in. To do this, set an app setting called `WEBSITE_AUTH_PRESERVE_URL_FRAGMENT` to `true`. You can do it in the [Azure portal](https://portal.azure.com), or simply run the following command in the [Azure Cloud Shell](../cloud-shell/quickstart.md):
+
+```azurecli-interactive
+az webapp config appsettings set --name <app_name> --resource-group <group_name> --settings WEBSITE_AUTH_PRESERVE_URL_FRAGMENT="true"
+```
+
+## Limit the domain of sign-in accounts
+
+Both Microsoft Account and Azure Active Directory let you sign in from multiple domains. For example, Microsoft Account allows _outlook.com_, _live.com_, and _hotmail.com_ accounts. Azure AD allows any number of custom domains for the sign-in accounts. However, you may want to send your users straight to your own branded Azure AD sign-in page (such as `contoso.com`). To suggest the domain name of the sign-in accounts, follow these steps.
+
+In [https://resources.azure.com](https://resources.azure.com), navigate to **subscriptions** > **_\<subscription\_name>_** > **resourceGroups** > **_\<resource\_group\_name>_** > **providers** > **Microsoft.Web** > **sites** > **_\<app\_name>_** > **config** > **authsettings**.
+
+Click **Edit**, modify the following property, and then click **Put**. Be sure to replace _\<domain\_name>_ with the domain you want.
+
+```json
+"additionalLoginParams": ["domain_hint=<domain_name>"]
+```
+
+This setting appends the `domain_hint` query string parameter to the login redirect URL.
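+
+As a related sketch, assuming the login endpoint forwards this query parameter to the provider in the same way it handles other login parameters, the hint can also be supplied on an individual login request (the domain is a placeholder):
+
+```
+GET /.auth/login/aad?domain_hint=contoso.com
+```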
+
+> [!IMPORTANT]
+> It's possible for the client to remove the `domain_hint` parameter after receiving the redirect URL, and then login with a different domain. So while this function is convenient, it's not a security feature.
+>
+
+## Authorize or deny users
+
+While App Service takes care of the simplest authorization case (i.e. reject unauthenticated requests), your app may require more fine-grained authorization behavior, such as limiting access to only a specific group of users. In certain cases, you need to write custom application code to allow or deny access to the signed-in user. In other cases, App Service or your identity provider may be able to help without requiring code changes.
+
+- [Server level](#server-level-windows-apps-only)
+- [Identity provider level](#identity-provider-level)
+- [Application level](#application-level)
+
+### Server level (Windows apps only)
+
+For any Windows app, you can define the authorization behavior of the IIS web server by editing the *Web.config* file. Linux apps don't use IIS and can't be configured through *Web.config*.
+
+1. Navigate to `https://<app-name>.scm.azurewebsites.net/DebugConsole`
+
+1. In the browser explorer of your App Service files, navigate to *site/wwwroot*. If a *Web.config* doesn't exist, create it by selecting **+** > **New File**.
+
+1. Select the pencil for *Web.config* to edit it. Add the following configuration code and click **Save**. If *Web.config* already exists, just add the `<authorization>` element with everything in it. Add the accounts you want to allow in the `<allow>` element.
+
+ ```xml
+ <?xml version="1.0" encoding="utf-8"?>
+ <configuration>
+ <system.web>
+ <authorization>
+ <allow users="user1@contoso.com,user2@contoso.com"/>
+ <deny users="*"/>
+ </authorization>
+ </system.web>
+ </configuration>
+ ```
+
+### Identity provider level
+
+The identity provider may provide certain turn-key authorization. For example:
+
+- For [Azure Active Directory](configure-authentication-provider-aad.md), you can [manage enterprise-level access](../active-directory/manage-apps/what-is-access-management.md) directly in Azure AD. For instructions, see [How to remove a user's access to an application](../active-directory/manage-apps/methods-for-removing-user-access.md).
+- For [Google](configure-authentication-provider-google.md), Google API projects that belong to an [organization](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#organizations) can be configured to allow access only to users in your organization (see [Google's **Setting up OAuth 2.0** support page](https://support.google.com/cloud/answer/6158849?hl=en)).
+
+### Application level
+
+If neither of the other levels provides the authorization you need, or if your platform or identity provider isn't supported, you must write custom code to authorize users based on the [user claims](configure-authentication-user-identities.md).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
app-service Configure Authentication File Based https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-file-based.md
+
+ Title: File-based configuration of AuthN/AuthZ
+description: Configure authentication and authorization in App Service using a configuration file to enable certain preview capabilities.
+ Last updated : 03/29/2021++
+# File-based configuration in Azure App Service authentication (Preview)
+
+With [App Service authentication](overview-authentication-authorization.md), the authentication settings can be configured with a file (Preview). You may need to use file-based configuration to use certain preview capabilities of App Service authentication / authorization.
+
+> [!IMPORTANT]
+> Remember that your app payload, and therefore this file, may move between environments, as with [slots](./deploy-staging-slots.md). It is likely you would want a different app registration pinned to each slot, and in these cases, you should continue to use the standard configuration method instead of using the configuration file.
+
+## Enabling file-based configuration
+
+> [!CAUTION]
+> During preview, enabling file-based configuration will disable management of the App Service Authentication / Authorization feature for your application through some clients, such as the Azure portal, Azure CLI, and Azure PowerShell.
+
+1. Create a new JSON file for your configuration at the root of your project (deployed to D:\home\site\wwwroot in your web / function app). Fill in your desired configuration according to the [file-based configuration reference](#configuration-file-reference). If modifying an existing Azure Resource Manager configuration, make sure to translate the properties captured in the `authsettings` collection into your configuration file.
+
+2. Modify the existing configuration, which is captured in the [Azure Resource Manager](../azure-resource-manager/management/overview.md) APIs under `Microsoft.Web/sites/<siteName>/config/authsettings`. To modify this, you can use an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) or a tool like [Azure Resource Explorer](https://resources.azure.com/). Within the authsettings collection, you will need to set three properties (and may remove others):
+
+ 1. Set `enabled` to "true"
+ 2. Set `isAuthFromFile` to "true"
+ 3. Set `authFilePath` to the name of the file (for example, "auth.json")
+
+> [!NOTE]
+> The format for `authFilePath` varies between platforms. On Windows, both relative and absolute paths are supported. Relative is recommended. For Linux, only absolute paths are supported currently, so the value of the setting should be "/home/site/wwwroot/auth.json" or similar.
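+
+If you prefer to script the property changes, a sketch following the same `az rest` pattern used elsewhere in these articles (a PUT replaces the whole `authsettings` collection, so the body file should contain your complete existing configuration with the three properties above set; the placeholders and api-version are assumptions carried over from those examples):
+
+```azurecli
+az rest --method PUT --url "/subscriptions/<subscription_id>/resourceGroups/<group_name>/providers/Microsoft.Web/sites/<site_name>/config/authsettings?api-version=2020-06-01" --body @./authsettings.json
+```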
+
+Once you have made this configuration update, the contents of the file will be used to define the behavior of App Service Authentication / Authorization for that site. If you ever wish to return to Azure Resource Manager configuration, you can do so by setting `isAuthFromFile` back to "false".
+
+## Configuration file reference
+
+Any secrets that will be referenced from your configuration file must be stored as [application settings](./configure-common.md#configure-app-settings). You may name the settings anything you wish. Just make sure that the references from the configuration file use the same keys.
+
+The following shows the full set of possible configuration options within the file:
+
+```json
+{
+ "platform": {
+ "enabled": <true|false>
+ },
+ "globalValidation": {
+ "unauthenticatedClientAction": "RedirectToLoginPage|AllowAnonymous|Return401|Return403",
+ "redirectToProvider": "<default provider alias>",
+ "excludedPaths": [
+ "/path1",
+ "/path2"
+ ]
+ },
+ "httpSettings": {
+ "requireHttps": <true|false>,
+ "routes": {
+ "apiPrefix": "<api prefix>"
+ },
+ "forwardProxy": {
+ "convention": "NoProxy|Standard|Custom",
+ "customHostHeaderName": "<host header value>",
+ "customProtoHeaderName": "<proto header value>"
+ }
+ },
+ "login": {
+ "routes": {
+ "logoutEndpoint": "<logout endpoint>"
+ },
+ "tokenStore": {
+ "enabled": <true|false>,
+ "tokenRefreshExtensionHours": "<double>",
+ "fileSystem": {
+ "directory": "<directory to store the tokens in if using a file system token store (default)>"
+ },
+ "azureBlobStorage": {
+ "sasUrlSettingName": "<app setting name containing the sas url for the Azure Blob Storage if opting to use that for a token store>"
+ }
+ },
+ "preserveUrlFragmentsForLogins": <true|false>,
+ "allowedExternalRedirectUrls": [
+ "https://uri1.azurewebsites.net/",
+ "https://uri2.azurewebsites.net/",
+ "url_scheme_of_your_app://easyauth.callback"
+ ],
+ "cookieExpiration": {
+ "convention": "FixedTime|IdentityDerived",
+ "timeToExpiration": "<timespan>"
+ },
+ "nonce": {
+ "validateNonce": <true|false>,
+ "nonceExpirationInterval": "<timespan>"
+ }
+ },
+ "identityProviders": {
+ "azureActiveDirectory": {
+ "enabled": <true|false>,
+ "registration": {
+ "openIdIssuer": "<issuer url>",
+ "clientId": "<app id>",
+        "clientSecretSettingName": "APP_SETTING_CONTAINING_AAD_SECRET"
+ },
+ "login": {
+ "loginParameters": [
+ "paramName1=value1",
+ "paramName2=value2"
+ ]
+ },
+ "validation": {
+ "allowedAudiences": [
+ "audience1",
+ "audience2"
+ ]
+ }
+ },
+ "facebook": {
+ "enabled": <true|false>,
+ "registration": {
+ "appId": "<app id>",
+ "appSecretSettingName": "APP_SETTING_CONTAINING_FACEBOOK_SECRET"
+ },
+ "graphApiVersion": "v3.3",
+ "login": {
+ "scopes": [
+ "public_profile",
+ "email"
+ ]
+      }
+ },
+ "gitHub": {
+ "enabled": <true|false>,
+ "registration": {
+ "clientId": "<client id>",
+ "clientSecretSettingName": "APP_SETTING_CONTAINING_GITHUB_SECRET"
+ },
+ "login": {
+ "scopes": [
+ "profile",
+ "email"
+ ]
+ }
+ },
+ "google": {
+      "enabled": <true|false>,
+ "registration": {
+ "clientId": "<client id>",
+ "clientSecretSettingName": "APP_SETTING_CONTAINING_GOOGLE_SECRET"
+ },
+ "login": {
+ "scopes": [
+ "profile",
+ "email"
+ ]
+ },
+ "validation": {
+ "allowedAudiences": [
+ "audience1",
+ "audience2"
+ ]
+ }
+ },
+ "twitter": {
+ "enabled": <true|false>,
+ "registration": {
+ "consumerKey": "<consumer key>",
+        "consumerSecretSettingName": "APP_SETTING_CONTAINING_TWITTER_CONSUMER_SECRET"
+ }
+ },
+ "apple": {
+ "enabled": <true|false>,
+ "registration": {
+ "clientId": "<client id>",
+ "clientSecretSettingName": "APP_SETTING_CONTAINING_APPLE_SECRET"
+ },
+ "login": {
+ "scopes": [
+ "profile",
+ "email"
+ ]
+ }
+ },
+ "openIdConnectProviders": {
+ "<providerName>": {
+ "enabled": <true|false>,
+ "registration": {
+ "clientId": "<client id>",
+ "clientCredential": {
+ "clientSecretSettingName": "<name of app setting containing client secret>"
+ },
+ "openIdConnectConfiguration": {
+ "authorizationEndpoint": "<url specifying authorization endpoint>",
+ "tokenEndpoint": "<url specifying token endpoint>",
+ "issuer": "<url specifying issuer>",
+ "certificationUri": "<url specifying jwks endpoint>",
+ "wellKnownOpenIdConfiguration": "<url specifying .well-known/open-id-configuration endpoint - if this property is set, the other properties of this object are ignored, and authorizationEndpoint, tokenEndpoint, issuer, and certificationUri are set to the corresponding values listed at this endpoint>"
+ }
+ },
+ "login": {
+ "nameClaimType": "<name of claim containing name>",
+ "scopes": [
+ "openid",
+ "profile",
+ "email"
+ ],
+ "loginParameterNames": [
+ "paramName1=value1",
+ "paramName2=value2"
+          ]
+ }
+ },
+ //...
+ }
+ }
+}
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
app-service Configure Authentication Oauth Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-oauth-tokens.md
+
+ Title: OAuth tokens in AuthN/AuthZ
+description: Learn how to retrieve tokens and refresh tokens and extend sessions when using the built-in authentication and authorization in App Service.
+ Last updated : 03/29/2021++
+# Work with OAuth tokens in Azure App Service authentication
+
+This article shows you how to work with OAuth tokens while using the built-in [authentication and authorization in App Service](overview-authentication-authorization.md).
+
+## Retrieve tokens in app code
+
+From your server code, the provider-specific tokens are injected into the request header, so you can easily access them. The following table shows possible token header names:
+
+| Provider | Header names |
+|-|-|
+| Azure Active Directory | `X-MS-TOKEN-AAD-ID-TOKEN` <br/> `X-MS-TOKEN-AAD-ACCESS-TOKEN` <br/> `X-MS-TOKEN-AAD-EXPIRES-ON` <br/> `X-MS-TOKEN-AAD-REFRESH-TOKEN` |
+| Facebook Token | `X-MS-TOKEN-FACEBOOK-ACCESS-TOKEN` <br/> `X-MS-TOKEN-FACEBOOK-EXPIRES-ON` |
+| Google | `X-MS-TOKEN-GOOGLE-ID-TOKEN` <br/> `X-MS-TOKEN-GOOGLE-ACCESS-TOKEN` <br/> `X-MS-TOKEN-GOOGLE-EXPIRES-ON` <br/> `X-MS-TOKEN-GOOGLE-REFRESH-TOKEN` |
+| Twitter | `X-MS-TOKEN-TWITTER-ACCESS-TOKEN` <br/> `X-MS-TOKEN-TWITTER-ACCESS-TOKEN-SECRET` |
+|||
+
+From your client code (such as a mobile app or in-browser JavaScript), send an HTTP `GET` request to `/.auth/me` ([token store](overview-authentication-authorization.md#token-store) must be enabled). The returned JSON has the provider-specific tokens.
+
+> [!NOTE]
+> Access tokens are for accessing provider resources, so they are present only if you configure your provider with a client secret. To see how to get refresh tokens, see [Refresh auth tokens](#refresh-auth-tokens).
+
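+From a shell client, the same lookup is a plain GET to `/.auth/me` with the session token attached (a sketch; browser clients send the auth cookie instead):
+
+```bash
+curl -H "X-ZUMO-AUTH: <session_token>" "https://<appname>.azurewebsites.net/.auth/me"
+```
+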
+## Refresh auth tokens
+
+When your provider's access token (not the [session token](#extend-session-token-expiration-grace-period)) expires, you need to reauthenticate the user before you use that token again. You can avoid token expiration by making a `GET` call to the `/.auth/refresh` endpoint of your application. When called, App Service automatically refreshes the access tokens in the [token store](overview-authentication-authorization.md#token-store) for the authenticated user. Subsequent requests for tokens by your app code get the refreshed tokens. However, for token refresh to work, the token store must contain [refresh tokens](https://auth0.com/learn/refresh-tokens/) for your provider. The way to get refresh tokens is documented by each provider, but the following list is a brief summary:
+
+- **Google**: Append an `access_type=offline` query string parameter to your `/.auth/login/google` API call. If using the Mobile Apps SDK, you can add the parameter to one of the `LoginAsync` overloads (see [Google Refresh Tokens](https://developers.google.com/identity/protocols/OpenIDConnect#refresh-tokens)).
+- **Facebook**: Doesn't provide refresh tokens. Long-lived tokens expire in 60 days (see [Facebook Expiration and Extension of Access Tokens](https://developers.facebook.com/docs/facebook-login/access-tokens/expiration-and-extension)).
+- **Twitter**: Access tokens don't expire (see [Twitter OAuth FAQ](https://developer.twitter.com/en/docs/authentication/faq)).
+- **Azure Active Directory**: In [https://resources.azure.com](https://resources.azure.com), do the following steps:
+ 1. At the top of the page, select **Read/Write**.
+ 2. In the left browser, navigate to **subscriptions** > **_\<subscription\_name>_** > **resourceGroups** > **_\<resource\_group\_name>_** > **providers** > **Microsoft.Web** > **sites** > **_\<app\_name>_** > **config** > **authsettings**.
+ 3. Click **Edit**.
+ 4. Modify the following property. Replace _\<app\_id>_ with the Azure Active Directory application ID of the service you want to access.
+
+ ```json
+ "additionalLoginParams": ["response_type=code id_token", "resource=<app_id>"]
+ ```
+
+ 5. Click **Put**.
+
+Once your provider is configured, you can [find the refresh token and the expiration time for the access token](#retrieve-tokens-in-app-code) in the token store.
+
+To refresh your access token at any time, just call `/.auth/refresh` in any language. The following snippet uses jQuery to refresh your access tokens from a JavaScript client.
+
+```javascript
+function refreshTokens() {
+ let refreshUrl = "/.auth/refresh";
+    $.ajax(refreshUrl).done(function() {
+        console.log("Token refresh completed successfully.");
+    }).fail(function() {
+        console.log("Token refresh failed. See application logs for details.");
+    });
+}
+```
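+
+From a non-browser client, the refresh is just an HTTP GET with the session token attached; a sketch with curl (same header convention as the other `/.auth` calls):
+
+```bash
+curl -H "X-ZUMO-AUTH: <session_token>" "https://<appname>.azurewebsites.net/.auth/refresh"
+```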
+
+If a user revokes the permissions granted to your app, your call to `/.auth/me` may fail with a `403 Forbidden` response. To diagnose errors, check your application logs for details.
+
+## Extend session token expiration grace period
+
+The authenticated session expires after 8 hours. After an authenticated session expires, there is a 72-hour grace period by default. Within this grace period, you're allowed to refresh the session token with App Service without reauthenticating the user. You can just call `/.auth/refresh` when your session token becomes invalid, and you don't need to track token expiration yourself. Once the 72-hour grace period lapses, the user must sign in again to get a valid session token.
+
+If 72 hours isn't enough time for you, you can extend this expiration window. Extending the expiration over a long period could have significant security implications (such as when an authentication token is leaked or stolen), so you should leave it at the default 72 hours or set the extension period to the smallest value that meets your needs.
+
+To extend the default expiration window, run the following command in the [Cloud Shell](../cloud-shell/overview.md).
+
+```azurecli-interactive
+az webapp auth update --resource-group <group_name> --name <app_name> --token-refresh-extension-hours <hours>
+```
+
+> [!NOTE]
+> The grace period only applies to the App Service authenticated session, not the tokens from the identity providers. There is no grace period for the expired provider tokens.
+>
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-provider-aad.md
To register the app, perform the following steps:
|Field|Description| |-|-| |Application (client) ID| Use the **Application (client) ID** of the app registration. |
- |Client Secret (Optional)| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and the App Service will return access and refresh tokens. When the client secret is not set, implicit flow is used and only an id token is returned. These tokens are sent by the provider and stored in the EasyAuth token store.|
+ |Client Secret (Optional)| Use the client secret you generated in the app registration. With a client secret, hybrid flow is used and the App Service will return access and refresh tokens. When the client secret is not set, implicit flow is used and only an ID token is returned. These tokens are sent by the provider and stored in the EasyAuth token store.|
|Issuer Url| Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with the **Directory (tenant) ID** in which the app registration was created. This value is used to redirect users to the correct Azure AD tenant, as well as to download the appropriate metadata to determine the appropriate token signing keys and token issuer claim value for example. For applications that use Azure AD v1 and for Azure Functions apps, omit `/v2.0` in the URL.| |Allowed Token Audiences| If this is a cloud or server app and you want to allow authentication tokens from a web app, add the **Application ID URI** of the web app here. The configured **Client ID** is *always* implicitly considered to be an allowed audience.|
At present, this allows _any_ client application in your Azure AD tenant to requ
1. Under **Application permissions**, select the App Role you created earlier, and then select **Add permissions**. 1. Make sure to click **Grant admin consent** to authorize the client application to request the permission. 1. Similar to the previous scenario (before any roles were added), you can now [request an access token](../active-directory/azuread-dev/v1-oauth2-client-creds-grant-flow.md#first-case-access-token-request-with-a-shared-secret) for the same target `resource`, and the access token will include a `roles` claim containing the App Roles that were authorized for the client application.
-1. Within the target App Service or Function app code, you can now validate that the expected roles are present in the token (this is not performed by App Service Authentication / Authorization). For more information, see [Access user claims](app-service-authentication-how-to.md#access-user-claims).
+1. Within the target App Service or Function app code, you can now validate that the expected roles are present in the token (this is not performed by App Service Authentication / Authorization). For more information, see [Access user claims](configure-authentication-user-identities.md#access-user-claims-in-app-code).
You have now configured a daemon client application that can access your App Service app using its own identity.
app-service Configure Authentication Provider Apple https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-provider-apple.md
You'll need to create an App ID and a service ID in the Apple Developer portal.
![Registering a new service identifier in the Apple Developer Portal](media/configure-authentication-provider-apple/apple-register-service.jpg) 8. On the **Register a Services ID** page, provide a description and an identifier. The description is what will be shown to the user on the consent screen. The identifier will be your client ID used in configuring the Apple provider with your app service. Then select **Configure**. ![Providing a description and an identifier](media/configure-authentication-provider-apple/apple-configure-service-1.jpg)
-9. On the pop-up window, set the Primary App Id to the App Id you created earlier. Specify your application's domain in the domain section. For the return URL, use the URL `<app-url>/.auth/login/apple/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/apple/callback`. Then select **Add** and **Save**.
+9. On the pop-up window, set the Primary App ID to the App ID you created earlier. Specify your application's domain in the domain section. For the return URL, use the URL `<app-url>/.auth/login/apple/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/apple/callback`. Then select **Add** and **Save**.
![Specifying the domain and return URL for the registration](media/configure-authentication-provider-apple/apple-configure-service-2.jpg) 10. Review the service registration information and select **Save**.
Add the client secret as an [application setting](./configure-common.md#configur
## <a name="configure"> </a>Add provider information to your application > [!NOTE]
-> The required configuration is in a new API format, currently only supported by [file-based configuration (preview)](.\app-service-authentication-how-to.md#config-file). You will need to follow the below steps using such a file.
+> The required configuration is in a new API format, currently only supported by [file-based configuration (preview)](configure-authentication-file-based.md). You will need to follow the below steps using such a file.
This section will walk you through updating the configuration to include your new IDP. An example configuration follows.
This section will walk you through updating the configuration to include your ne
```json "apple" : { "registration" : {
- "clientId": "<client id>",
+ "clientId": "<client ID>",
"clientSecretSettingName": "APP_SETTING_CONTAINING_APPLE_CLIENT_SECRET" }, "login": {
app-service Configure Authentication Provider Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-provider-openid-connect.md
If you are unable to use a configuration metadata document, you will need to gat
## <a name="configure"> </a>Add provider information to your application > [!NOTE]
-> The required configuration is in a new API format, currently only supported by [file-based configuration (preview)](.\app-service-authentication-how-to.md#config-file). You will need to follow the below steps using such a file.
+> The required configuration is in a new API format, currently only supported by [file-based configuration (preview)](configure-authentication-file-based.md). You will need to follow the below steps using such a file.
This section will walk you through updating the configuration to include your new IDP. An example configuration follows.
app-service Configure Authentication User Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-authentication-user-identities.md
+
+ Title: User identities in AuthN/AuthZ
+description: Learn how to access user identities when using the built-in authentication and authorization in App Service.
+ Last updated : 03/29/2021++
+# Work with user identities in Azure App Service authentication
+
+This article shows you how to work with user identities when using the built-in [authentication and authorization in App Service](overview-authentication-authorization.md).
+
+## Access user claims in app code
+
+For all language frameworks, App Service makes the claims in the incoming token (whether from an authenticated end user or a client application) available to your code by injecting them into the request headers. External requests aren't allowed to set these headers, so they are present only if set by App Service. Some example headers include:
+
+* X-MS-CLIENT-PRINCIPAL-NAME
+* X-MS-CLIENT-PRINCIPAL-ID
+
+Code that is written in any language or framework can get the information that it needs from these headers.
+
+For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java.md#authenticate-users-easy-auth).
+
+For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` is not populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. See [working with client identities in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities) for more information.
+
+For .NET Core, [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) supports populating the current user with App Service authentication. To learn more, you can read about it on the [Microsoft.Identity.Web wiki](https://github.com/AzureAD/microsoft-identity-web/wiki/1.2.0#integration-with-azure-app-services-authentication-of-web-apps-running-with-microsoftidentityweb), or see it demonstrated in [this tutorial for a web app accessing Microsoft Graph](/azure/app-service/scenario-secure-app-access-microsoft-graph-as-user?tabs=command-line#install-client-library-packages).
+
+## Access user claims using the API
+
+If the [token store](overview-authentication-authorization.md#token-store) is enabled for your app, you can also obtain additional details on the authenticated user by calling `/.auth/me`. The Mobile Apps server SDKs provide helper methods to work with this data. For more information, see [How to use the Azure Mobile Apps Node.js SDK](/previous-versions/azure/app-service-mobile/app-service-mobile-node-backend-how-to-use-server-sdk#howto-tables-getidentity), and [Work with the .NET backend server SDK for Azure Mobile Apps](/previous-versions/azure/app-service-mobile/app-service-mobile-dotnet-backend-how-to-use-server-sdk#user-info).
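As a rough sketch of calling this endpoint from client code with Java's built-in HTTP client; the app name and session token below are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AuthMeExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // <app-name> and the X-ZUMO-AUTH session token value are placeholders.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://<app-name>.azurewebsites.net/.auth/me"))
                .header("X-ZUMO-AUTH", "<session-token>")
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON array of identity details
    }
}
```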
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-language-java.md
Otherwise, your deployment method will depend on your archive type:
To deploy .jar files to Java SE, use the `/api/zipdeploy/` endpoint of the Kudu site. For more information on this API, please see [this documentation](./deploy-zip.md#rest).

> [!NOTE]
-> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](/azure/app-service/faq-app-service-linux#built-in-images) textbox in the Configuration section of the Portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
+> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](/app-service/faq-app-service-linux#built-in-images) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
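As one illustration of calling this endpoint (the Maven plugin or `curl` are the more common routes), the following sketch POSTs an archive with Java's built-in HTTP client; the app name, credentials, and file path are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.Base64;

public class ZipDeploy {
    public static void main(String[] args) throws Exception {
        // Placeholder deployment credentials for the Kudu site.
        String credentials = Base64.getEncoder()
                .encodeToString("<username>:<password>".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://<app-name>.scm.azurewebsites.net/api/zipdeploy"))
                .header("Authorization", "Basic " + credentials)
                .POST(HttpRequest.BodyPublishers.ofFile(Path.of("target/app.zip")))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```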
### Tomcat
To deploy .war files to Tomcat, use the `/api/wardeploy/` endpoint to POST your
To deploy .war files to JBoss, use the `/api/wardeploy/` endpoint to POST your archive file. For more information on this API, please see [this documentation](./deploy-zip.md#deploy-war-file).
-To deploy .ear files, [use FTP](deploy-ftp.md). Your .ear application wil be deployed to the context root defined in your application's configuration. For example, if the context root of your app is `<context-root>myapp</context-root>`, then you can browse the site at the `/myapp` path: `http://my-app-name.azurewebsites.net/myapp`. If you want you web app to be served in the root path, ensure that your app sets the context root to the root path: `<context-root>/</context-root>`. For more information, see [Setting the context root of a web application](https://docs.jboss.org/jbossas/guides/webguide/r2/en/html/ch06.html).
+To deploy .ear files, [use FTP](deploy-ftp.md). Your .ear application will be deployed to the context root defined in your application's configuration. For example, if the context root of your app is `<context-root>myapp</context-root>`, then you can browse the site at the `/myapp` path: `http://my-app-name.azurewebsites.net/myapp`. If you want your web app to be served in the root path, ensure that your app sets the context root to the root path: `<context-root>/</context-root>`. For more information, see [Setting the context root of a web application](https://docs.jboss.org/jbossas/guides/webguide/r2/en/html/ch06.html).
::: zone-end
Java applications running in App Service have the same set of [security best pra
### Authenticate users (Easy Auth)
-Set up app authentication in the Azure portal with the **Authentication and Authorization** option. From there, you can enable authentication using Azure Active Directory or social logins like Facebook, Google, or GitHub. Azure portal configuration only works when configuring a single authentication provider. For more information, see [Configure your App Service app to use Azure Active Directory login](configure-authentication-provider-aad.md) and the related articles for other identity providers. If you need to enable multiple sign-in providers, follow the instructions in the [customize App Service authentication](app-service-authentication-how-to.md) article.
+Set up app authentication in the Azure portal with the **Authentication and Authorization** option. From there, you can enable authentication using Azure Active Directory or social logins like Facebook, Google, or GitHub. Azure portal configuration only works when configuring a single authentication provider. For more information, see [Configure your App Service app to use Azure Active Directory login](configure-authentication-provider-aad.md) and the related articles for other identity providers. If you need to enable multiple sign-in providers, follow the instructions in the [customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md) article.
#### Java SE
for (Object key : map.keySet()) {
}
```
-To sign users out, use the `/.auth/ext/logout` path. To perform other actions, please see the documentation on [App Service Authentication and Authorization usage](./app-service-authentication-how-to.md). There is also official documentation on the Tomcat [HttpServletRequest interface](https://tomcat.apache.org/tomcat-5.5-doc/servletapi/javax/servlet/http/HttpServletRequest.html) and its methods. The following servlet methods are also hydrated based on your App Service configuration:
+To sign users out, use the `/.auth/ext/logout` path. To perform other actions, please see the documentation on [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md). There is also official documentation on the Tomcat [HttpServletRequest interface](https://tomcat.apache.org/tomcat-5.5-doc/servletapi/javax/servlet/http/HttpServletRequest.html) and its methods. The following servlet methods are also hydrated based on your App Service configuration:
```java
public boolean isSecure()
```
Additional configuration may be necessary for encrypting your JDBC connection wi
#### Initialize the Java Key Store
-To initialize the `import java.security.KeyStore` object, load the keystore file with the password. The default password for both key stores is "changeit".
+To initialize the `KeyStore` object (`java.security.KeyStore`), load the keystore file with the password. The default password for both key stores is `changeit`.
```java
KeyStore keyStore = KeyStore.getInstance("jks");
```
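A fuller sketch of the load step, assuming a placeholder keystore path:

```java
import java.io.FileInputStream;
import java.security.KeyStore;

public class LoadKeyStore {
    public static void main(String[] args) throws Exception {
        KeyStore keyStore = KeyStore.getInstance("jks");
        // "changeit" is the default password mentioned above; the file path
        // is a placeholder. Substitute the actual keystore location for your app.
        try (FileInputStream stream = new FileInputStream("/path/to/keystore.jks")) {
            keyStore.load(stream, "changeit".toCharArray());
        }
    }
}
```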
This section shows how to connect Java applications deployed on Azure App Servic
Azure Monitor Application Insights is a cloud-native application monitoring service that enables customers to observe failures, bottlenecks, and usage patterns to improve application performance and reduce mean time to resolution (MTTR). With a few clicks or CLI commands, you can enable monitoring for your Node.js or Java apps, automatically collecting logs, metrics, and distributed traces, which eliminates the need to include an SDK in your app.
-#### Azure Portal
+#### Azure portal
-To enable Application Insights from the Azure Portal, go to **Application Insights** on the left-side menu and select **Turn on Application Insights**. By default, a new application insights resource of the same name as your Web App will be used. You can choose to use an existing application insights resource, or change the name. Click **Apply** at the bottom
+To enable Application Insights from the Azure portal, go to **Application Insights** on the left-side menu and select **Turn on Application Insights**. By default, a new Application Insights resource with the same name as your web app will be used. You can choose to use an existing Application Insights resource, or change the name. Click **Apply** at the bottom.
#### Azure CLI
-To enable via the Azure CLI, you will need to create an Application Insights resource and set a couple app settings on the Portal to connect Application Insights to your web app.
+To enable via the Azure CLI, you will need to create an Application Insights resource and set a couple of app settings in the Azure portal to connect Application Insights to your web app.
1. Enable the Application Insights extension
There are three core steps when [registering a data source with JBoss EAP](https
data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=${POSTGRES_CONNECTION_URL,env.POSTGRES_CONNECTION_URL:jdbc:postgresql://db:5432/postgres} --user-name=${POSTGRES_SERVER_ADMIN_FULL_NAME,env.POSTGRES_SERVER_ADMIN_FULL_NAME:postgres} --password=${POSTGRES_SERVER_ADMIN_PASSWORD,env.POSTGRES_SERVER_ADMIN_PASSWORD:example} --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker ```
-1. Create a startup script, `startup_script.sh` that calls the JBoss CLI commands. The example below shows how to call your `jboss-cli-commands.cli`. Later you will configre App Service to run this script when the container starts.
+1. Create a startup script, `startup_script.sh`, that calls the JBoss CLI commands. The example below shows how to call your `jboss-cli-commands.cli`. Later you will configure App Service to run this script when the container starts.
```bash
$JBOSS_HOME/bin/jboss-cli.sh --connect --file=/home/site/deployments/tools/jboss-cli-commands.cli
```

1. Using an FTP client of your choice, upload your JDBC driver, `jboss-cli-commands.cli`, `startup_script.sh`, and the module definition to `/site/deployments/tools/`.
-2. Configure your site to run `startup_script.sh` when the container starts. In the Azure Portal, navigate to **Configuration** > **General Settings** > **Startup Command**. Set the startup command field to `/home/site/deployments/tools/startup_script.sh`. **Save** your changes.
+2. Configure your site to run `startup_script.sh` when the container starts. In the Azure portal, navigate to **Configuration** > **General Settings** > **Startup Command**. Set the startup command field to `/home/site/deployments/tools/startup_script.sh`. **Save** your changes.
To confirm that the data source was added to the JBoss server, SSH into your web app and run `$JBOSS_HOME/bin/jboss-cli.sh --connect`. Once you are connected to JBoss, run `/subsystem=datasources:read-resource` to print a list of the data sources.
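Application code can then obtain connections from the registered data source through JNDI. A minimal sketch, using the `postgresDS` JNDI name configured above:

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookup {
    public static Connection getConnection() throws Exception {
        InitialContext context = new InitialContext();
        // The JNDI name matches the --jndi-name value in jboss-cli-commands.cli above.
        DataSource dataSource = (DataSource) context.lookup("java:jboss/datasources/postgresDS");
        return dataSource.getConnection();
    }
}
```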
app-service Manage Scale Per App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/manage-scale-per-app.md
When using App Service, you can scale your apps by scaling the [App Service plan
App Service plan that hosts it. This way, an App Service plan can be scaled to 10 instances, but an app can be set to use only five.

> [!NOTE]
-> Per-app scaling is available only for **Standard**, **Premium**, **Premium V2** and **Isolated** pricing tiers.
+> Per-app scaling is available only for **Standard**, **Premium**, **Premium V2**, **Premium V3**, and **Isolated** pricing tiers.
> Apps are allocated to available App Service plan instances using a best-effort approach for an even distribution across instances. While an even distribution is not guaranteed, the platform will make sure that two instances of the same app will not be hosted on the same App Service plan instance.
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-authentication-authorization.md
App Service can be used for authentication with or without restricting access to
[Authorization behavior](#authorization-behavior)
-[User and Application claims](#user-and-application-claims)
- [Token store](#token-store)
[Logging and tracing](#logging-and-tracing)
The authentication flow is the same for all providers, but differs depending on
- Without provider SDK: The application delegates federated sign-in to App Service. This is typically the case with browser apps, which can present the provider's login page to the user. The server code manages the sign-in process, so it is also called _server-directed flow_ or _server flow_. This case applies to browser apps. It also applies to native apps that sign users in using the Mobile Apps client SDK because the SDK opens a web view to sign users in with App Service authentication. - With provider SDK: The application signs users in to the provider manually and then submits the authentication token to App Service for validation. This is typically the case with browser-less apps, which can't present the provider's sign-in page to the user. The application code manages the sign-in process, so it is also called _client-directed flow_ or _client flow_. This case applies to REST APIs, [Azure Functions](../azure-functions/functions-overview.md), and JavaScript browser clients, as well as browser apps that need more flexibility in the sign-in process. It also applies to native mobile apps that sign users in using the provider's SDK.
-Calls from a trusted browser app in App Service to another REST API in App Service or [Azure Functions](../azure-functions/functions-overview.md) can be authenticated using the server-directed flow. For more information, see [Customize authentication and authorization in App Service](app-service-authentication-how-to.md).
+Calls from a trusted browser app in App Service to another REST API in App Service or [Azure Functions](../azure-functions/functions-overview.md) can be authenticated using the server-directed flow. For more information, see [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md).
The table below shows the steps of the authentication flow.

| Step | Without provider SDK | With provider SDK |
| - | - | - |
| 1. Sign user in | Redirects client to `/.auth/login/<provider>`. | Client code signs user in directly with provider's SDK and receives an authentication token. For information, see the provider's documentation. |
-| 2. Post-authentication | Provider redirects client to `/.auth/login/<provider>/callback`. | Client code [posts token from provider](app-service-authentication-how-to.md#validate-tokens-from-providers) to `/.auth/login/<provider>` for validation. |
+| 2. Post-authentication | Provider redirects client to `/.auth/login/<provider>/callback`. | Client code [posts token from provider](configure-authentication-customize-sign-in-out.md#validate-tokens-from-providers) to `/.auth/login/<provider>` for validation. |
| 3. Establish authenticated session | App Service adds authenticated cookie to response. | App Service returns its own authentication token to client code. |
| 4. Serve authenticated content | Client includes authentication cookie in subsequent requests (automatically handled by browser). | Client code presents authentication token in `X-ZUMO-AUTH` header (automatically handled by Mobile Apps client SDKs). |
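To make step 2 of the client-directed flow concrete, here is a rough sketch of the token-validation POST for a hypothetical app using the Azure AD provider; the app name and provider token are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClientFlowSignIn {
    public static void main(String[] args) throws Exception {
        // Placeholder token obtained beforehand from the provider's SDK.
        String body = "{\"access_token\": \"<provider-token>\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://<app-name>.azurewebsites.net/.auth/login/aad"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // On success, the response JSON carries the App Service token to
        // present in the X-ZUMO-AUTH header on later requests.
        System.out.println(response.body());
    }
}
```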
In the [Azure portal](https://portal.azure.com), you can configure App Service w
This option defers authorization of unauthenticated traffic to your application code. For authenticated requests, App Service also passes along authentication information in the HTTP headers.
-This option provides more flexibility in handling anonymous requests. For example, it lets you [present multiple sign-in providers](app-service-authentication-how-to.md#use-multiple-sign-in-providers) to your users. However, you must write code.
+This option provides more flexibility in handling anonymous requests. For example, it lets you [present multiple sign-in providers](configure-authentication-customize-sign-in-out.md#use-multiple-sign-in-providers) to your users. However, you must write code.
**Require authentication**

This option will reject any unauthenticated traffic to your application. This rejection can be a redirect action to one of the configured identity providers. In these cases, a browser client is redirected to `/.auth/login/<provider>` for the provider you choose. If the anonymous request comes from a native mobile app, the returned response is an `HTTP 401 Unauthorized`. You can also configure the rejection to be an `HTTP 401 Unauthorized` or `HTTP 403 Forbidden` for all requests.
-With this option, you don't need to write any authentication code in your app. Finer authorization, such as role-specific authorization, can be handled by inspecting the user's claims (see [Access user claims](app-service-authentication-how-to.md#access-user-claims)).
+With this option, you don't need to write any authentication code in your app. Finer authorization, such as role-specific authorization, can be handled by inspecting the user's claims (see [Access user claims](configure-authentication-user-identities.md)).
> [!CAUTION] > Restricting access in this way applies to all calls to your app, which may not be desirable for apps wanting a publicly available home page, as in many single-page applications.
With this option, you don't need to write any authentication code in your app. F
> [!NOTE] > By default, any user in your Azure AD tenant can request a token for your application from Azure AD. You can [configure the application in Azure AD](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) if you want to restrict access to your app to a defined set of users. -
-#### User and Application claims
-
-For all language frameworks, App Service makes the claims in the incoming token (whether that be from an authenticated end user or a client application) available to your code by injecting them into the request headers. For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java.md#authenticate-users-easy-auth).
-
-For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` is not populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. See [working with client identities](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities) for more information.
-
-For more information, see [Access user claims](app-service-authentication-how-to.md#access-user-claims).
-
-For .NET Core, [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web/) supports populating the current user with the Authentication/Authorization feature. To learn more, you can read about it on the [Microsoft.Identity.Web wiki](https://github.com/AzureAD/microsoft-identity-web/wiki/1.2.0#integration-with-azure-app-services-authentication-of-web-apps-running-with-microsoftidentityweb), or see it demonstrated in [this tutorial for a web app accessing Microsoft Graph](./scenario-secure-app-access-microsoft-graph-as-user.md?tabs=command-line#install-client-library-packages).
#### Token store

App Service provides a built-in token store, which is a repository of tokens that are associated with the users of your web apps, APIs, or native mobile apps. When you enable authentication with any provider, this token store is immediately available to your app. If your application code needs to access data from these providers on the user's behalf, such as:

- post to the authenticated user's Facebook timeline
- read the user's corporate data using the Microsoft Graph API
-You typically must write code to collect, store, and refresh these tokens in your application. With the token store, you just [retrieve the tokens](app-service-authentication-how-to.md#retrieve-tokens-in-app-code) when you need them and [tell App Service to refresh them](app-service-authentication-how-to.md#refresh-identity-provider-tokens) when they become invalid.
+You typically must write code to collect, store, and refresh these tokens in your application. With the token store, you just [retrieve the tokens](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code) when you need them and [tell App Service to refresh them](configure-authentication-oauth-tokens.md#refresh-auth-tokens) when they become invalid.
The ID tokens, access tokens, and refresh tokens are cached for the authenticated session, and they're accessible only by the associated user.
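In app code, the cached tokens surface as request headers; for example, an Azure AD access token arrives in `X-MS-TOKEN-AAD-ACCESS-TOKEN`. A minimal servlet sketch:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GraphTokenServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Injected by App Service when the token store is enabled and the
        // user signed in with Azure AD.
        String accessToken = req.getHeader("X-MS-TOKEN-AAD-ACCESS-TOKEN");
        if (accessToken == null) {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED, "No token available");
            return;
        }
        // Use accessToken as a bearer token when calling Microsoft Graph.
        resp.getWriter().println("Token retrieved from the token store.");
    }
}
```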
If you [enable application logging](troubleshoot-diagnostic-logs.md), you will s
## More resources

- [How-To: Configure your App Service or Azure Functions app to use Azure AD login](configure-authentication-provider-aad.md)
-- [Advanced usage of authentication and authorization in Azure App Service](app-service-authentication-how-to.md)
+- [Customize sign-ins and sign-outs](configure-authentication-customize-sign-in-out.md)
+- [Work with OAuth tokens and sessions](configure-authentication-oauth-tokens.md)
+- [Access user and application claims](configure-authentication-user-identities.md)
+- [File-based configuration](configure-authentication-file-based.md)
Samples:

- [Tutorial: Add authentication to your web app running on Azure App Service](scenario-secure-app-authentication-app-service.md)
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/reference-app-settings.md
+
+ Title: Environment variables and app settings reference
+description: Describes the commonly used environment variables, and which ones can be modified with app settings.
+ Last updated : 06/14/2021
+# Environment variables and app settings in Azure App Service
+
+In [Azure App Service](overview.md), certain settings are available to the deployment or runtime environment as environment variables. Some of these settings can be customized when you set them manually as [app settings](configure-common.md#configure-app-settings). This reference shows the variables you can use or customize.
+
+## App environment
+
+The following environment variables are related to the app environment in general.
+
+| Setting name| Description | Example |
+|-|-|-|
+| `WEBSITE_SITE_NAME` | Read-only. App name. ||
+| `WEBSITE_RESOURCE_GROUP` | Read-only. Azure resource group name that contains the app resource. ||
+| `WEBSITE_OWNER_NAME` | Read-only. Contains the Azure subscription ID that owns the app, the resource group, and the webspace. ||
+| `REGION_NAME` | Read-only. Region name of the app. ||
+| `WEBSITE_PLATFORM_VERSION` | Read-only. App Service platform version. ||
+| `HOME` | Read-only. Path to the home directory (for example, `D:\home` for Windows). ||
+| `SERVER_PORT` | Read-only. The port the app should listen to. | |
+| `WEBSITE_WARMUP_PATH` | A relative path to ping to warm up the app, beginning with a slash. The default is `/`, which pings the root path. The specific path can be pinged by an unauthenticated client, such as Azure Traffic Manager, even if [App Service authentication](overview-authentication-authorization.md) is set to reject unauthenticated clients. (NOTE: This app setting does not change the path used by AlwaysOn.) ||
+| `WEBSITE_COMPUTE_MODE` | Read-only. Specifies whether the app runs on dedicated (`Dedicated`) or shared (`Shared`) VMs. ||
+| `WEBSITE_SKU` | Read-only. SKU of the app. Possible values are `Free`, `Shared`, `Basic`, and `Standard`. ||
+| `SITE_BITNESS` | Read-only. Shows whether the app is 32-bit (`x86`) or 64-bit (`AMD64`). ||
+| `WEBSITE_HOSTNAME` | Read-only. Primary hostname for the app. Custom hostnames are not accounted for here. ||
+| `WEBSITE_VOLUME_TYPE` | Read-only. Shows the storage volume type currently in use. ||
+| `WEBSITE_NPM_DEFAULT_VERSION` | Default npm version the app is using. ||
+| `WEBSOCKET_CONCURRENT_REQUEST_LIMIT` | Read-only. Limit for concurrent websocket requests. For **Standard** tier and above, the value is `-1`, but there's still a per-VM limit based on your VM size (see [Cross VM Numerical Limits](https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#cross-vm-numerical-limits)). ||
+| `WEBSITE_PRIVATE_EXTENSIONS` | Set to `0` to disable the use of private site extensions. ||
+| `WEBSITE_TIME_ZONE` | By default, the time zone for the app is always UTC. You can change it to any of the valid values that are listed in [TimeZone](/previous-versions/windows/it-pro/windows-vista/cc749073(v=ws.10)). If the specified value isn't recognized, UTC is used. | `Atlantic Standard Time` |
+| `WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG` | In the case of a storage volume failover or reconfiguration, your app is switched over to a standby storage volume. The default setting of `1` prevents your worker process from recycling when the storage infrastructure changes. If you are running a Windows Communication Foundation (WCF) app, disable it by setting it to `0`. The setting is slot-specific, so you should set it in all slots. ||
+| `WEBSITE_PROACTIVE_AUTOHEAL_ENABLED` | By default, a VM instance is proactively "autohealed" when it's using more than 90% of allocated memory for more than 30 seconds, or when 80% of the total requests in the last two minutes take longer than 200 seconds. If a VM instance has triggered one of these rules, the recovery process is an overlapping restart of the instance. Set to `false` to disable this recovery behavior. The default is `true`. For more information, see [Proactive Auto Heal](https://azure.github.io/AppService/2017/08/17/Introducing-Proactive-Auto-Heal.html). ||
+| `WEBSITE_PROACTIVE_CRASHMONITORING_ENABLED` | Whenever the w3wp.exe process on a VM instance of your app crashes due to an unhandled exception more than three times in 24 hours, a debugger process is attached to the main worker process on that instance, and collects a memory dump when the worker process crashes again. This memory dump is then analyzed and the call stack of the thread that caused the crash is logged in your App Service's logs. Set to `false` to disable this automatic monitoring behavior. The default is `true`. For more information, see [Proactive Crash Monitoring](https://azure.github.io/AppService/2021/03/01/Proactive-Crash-Monitoring-in-Azure-App-Service.html). ||
+| `WEBSITE_DAAS_STORAGE_SASURI` | During crash monitoring (proactive or manual), the memory dumps are deleted by default. To save the memory dumps to a storage blob container, specify the SAS URI. ||
+| `WEBSITE_CRASHMONITORING_ENABLED` | Set to `true` to enable [crash monitoring](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html) manually. You must also set `WEBSITE_DAAS_STORAGE_SASURI` and `WEBSITE_CRASHMONITORING_SETTINGS`. The default is `false`. This setting has no effect if remote debugging is enabled. Also, if this setting is set to `true`, [proactive crash monitoring](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html) is disabled. ||
+| `WEBSITE_CRASHMONITORING_SETTINGS` | A JSON with the following format:`{"StartTimeUtc": "2020-02-10T08:21","MaxHours": "<elapsed-hours-from-StartTimeUtc>","MaxDumpCount": "<max-number-of-crash-dumps>"}`. Required to configure [crash monitoring](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html) if `WEBSITE_CRASHMONITORING_ENABLED` is specified. To only log the call stack without saving the crash dump in the storage account, add `,"UseStorageAccount":"false"` in the JSON. ||
+| `REMOTEDEBUGGINGVERSION` | Remote debugging version. ||
+| `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | By default, App Service creates a shared storage for you at app creation. To use a custom storage account instead, set to the connection string of your storage account. For functions, see [App settings reference for Functions](../azure-functions/functions-app-settings.md#website_contentazurefileconnectionstring). | `DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>` |
+| `WEBSITE_CONTENTSHARE` | When you specify a custom storage account with `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`, App Service creates a file share in that storage account for your app. To use a custom name, set this variable to the name you want. If a file share with the specified name doesn't exist, App Service creates it for you. | `myapp123` |
+| `WEBSITE_AUTH_ENCRYPTION_KEY` | By default, the automatically generated key is used as the encryption key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. ||
+| `WEBSITE_AUTH_SIGNING_KEY` | By default, the automatically generated key is used as the signing key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. ||
+| `WEBSITE_SCM_ALWAYS_ON_ENABLED` | Read-only. Shows whether Always On is enabled (`1`) or not (`0`). ||
+| `WEBSITE_SCM_SEPARATE_STATUS` | Read-only. Shows whether the Kudu app is running in a separate process (`1`) or not (`0`). ||
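These are ordinary process environment variables, so no SDK is needed to read them. A minimal sketch:

```java
public class AppEnvironment {
    public static void main(String[] args) {
        // Read-only values injected by App Service; null when running locally.
        String siteName = System.getenv("WEBSITE_SITE_NAME");
        String hostname = System.getenv("WEBSITE_HOSTNAME");
        System.out.printf("Running %s at %s%n", siteName, hostname);
    }
}
```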
+
+<!--
+WEBSITE_PROACTIVE_STACKTRACING_ENABLED
+WEBSITE_CLOUD_NAME
+WEBSITE_MAXIMUM_CONCURRENTCOLDSTARTS
+HOME_EXPANDED
+USERPROFILE
+WEBSITE_ISOLATION
+WEBSITE_OS | only appears on windows
+WEBSITE_CLASSIC_MODE
+ -->
+
+## Variable prefixes
+
+The following table shows the environment variable prefixes that App Service uses for various purposes.
+
+| Setting name | Description |
+|-|-|
+| `APPSETTING_` | Signifies that a variable is set by the customer as an app setting in the app configuration. It's injected into a .NET app as an app setting. |
+| `MAINSITE_` | Signifies a variable is specific to the app itself. |
+| `SCMSITE_` | Signifies a variable is specific to the Kudu app. |
+| `SQLCONNSTR_` | Signifies a SQL Server connection string in the app configuration. It's injected into a .NET app as a connection string. |
+| `SQLAZURECONNSTR_` | Signifies an Azure SQL Database connection string in the app configuration. It's injected into a .NET app as a connection string. |
+| `POSTGRESQLCONNSTR_` | Signifies a PostgreSQL connection string in the app configuration. It's injected into a .NET app as a connection string. |
+| `CUSTOMCONNSTR_` | Signifies a custom connection string in the app configuration. It's injected into a .NET app as a connection string. |
+| `MYSQLCONNSTR_` | Signifies a MySQL connection string in the app configuration. It's injected into a .NET app as a connection string. |
+| `AZUREFILESSTORAGE_` | A connection string to a custom Azure File storage for a container app. |
+| `AZUREBLOBSTORAGE_` | A connection string to a custom Azure Blobs storage for a container app. |
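For example, a custom connection string named `MyDb` (an illustrative name) surfaces as an environment variable with the `CUSTOMCONNSTR_` prefix. A minimal sketch:

```java
public class ConnectionStrings {
    public static void main(String[] args) {
        // A custom connection string named "MyDb" in the app configuration
        // surfaces as CUSTOMCONNSTR_MyDb in the environment.
        String connectionString = System.getenv("CUSTOMCONNSTR_MyDb");
        System.out.println(connectionString != null ? "Found MyDb" : "MyDb not set");
    }
}
```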
+
+## Deployment
+
+The following environment variables are related to app deployment. For variables related to App Service build automation, see [Build automation](#build-automation).
+
+| Setting name| Description |
+|-|-|
+| `WEBSITE_RUN_FROM_PACKAGE`| Set to `1` to run the app from a local ZIP package, or set to an external URL to run the app from a remote ZIP package. For more information, see [Run your app in Azure App Service directly from a ZIP package](deploy-run-package.md). |
+| `WEBSITE_USE_ZIP` | Deprecated. Use `WEBSITE_RUN_FROM_PACKAGE`. |
+| `WEBSITE_RUN_FROM_ZIP` | Deprecated. Use `WEBSITE_RUN_FROM_PACKAGE`. |
+| `WEBSITE_WEBDEPLOY_USE_SCM` | Set to `false` for WebDeploy to stop using the Kudu deployment engine. The default is `true`. To deploy to Linux apps using Visual Studio (WebDeploy/MSDeploy), set it to `false`. |
+| `MSDEPLOY_RENAME_LOCKED_FILES` | Set to `1` to attempt to rename DLLs if they can't be copied during a WebDeploy deployment. This setting is not applicable if `WEBSITE_WEBDEPLOY_USE_SCM` is set to `false`. |
+| `WEBSITE_DISABLE_SCM_SEPARATION` | By default, the main app and the Kudu app run in different sandboxes. When you stop the app, the Kudu app is still running, and you can continue to use Git deploy and MSDeploy. Each app has its own local files. Turning off this separation (setting to `false`) is a legacy mode that's no longer fully supported. |
+| `WEBSITE_ENABLE_SYNC_UPDATE_SITE` | Set to `1` to ensure that REST API calls to update `site` and `siteconfig` are completely applied to all instances before returning. The default is `1` if deploying with an ARM template, to avoid race conditions with subsequent ARM calls. |
+| `WEBSITE_START_SCM_ON_SITE_CREATION` | In an ARM template deployment, set to `1` in the ARM template to pre-start the Kudu app as part of app creation. |
+| `WEBSITE_START_SCM_WITH_PRELOAD` | For Linux apps, set to `true` to force preloading the Kudu app when Always On is enabled by pinging its URL. The default is `false`. For Windows apps, the Kudu app is always preloaded. |
+
+<!--
+WEBSITE_RUN_FROM_PACKAGE_BLOB_MI_RESOURCE_ID
+-->
+
+## Build automation
+
+# [Kudu (Windows)](#tab/kudu)
+
+Kudu build configuration applies to native Windows apps and is used to control the behavior of Git-based (or ZIP-based) deployments.
+
+| Setting name| Description | Example |
+|-|-|-|
+| `SCM_BUILD_ARGS` | Appends the given string to the end of the MSBuild command line, so that it overrides any previous parts of the default command line. | To do a clean build: `-t:Clean;Compile`|
+| `SCM_SCRIPT_GENERATOR_ARGS` | Kudu uses the `azure site deploymentscript` command described [here](http://blog.amitapple.com/post/38418009331/azurewebsitecustomdeploymentpart2) to generate a deployment script. It automatically detects the language framework type and determines the parameters to pass to the command. This setting overrides the automatically generated parameters. | To treat your repository as plain content files: `--basic -p <folder-to-deploy>` |
+| `SCM_TRACE_LEVEL` | Build trace level. The default is `1`. Set to higher values, up to 4, for more tracing. | `4` |
+| `SCM_COMMAND_IDLE_TIMEOUT` | Time out in seconds that each command launched by the build process can run without producing any output. After that, the command is considered idle and is killed. The default is `60` (one minute). In Azure, there's also a general idle request timeout that disconnects clients after 230 seconds. However, the command will still continue running server-side after that. | |
+| `SCM_LOGSTREAM_TIMEOUT` | Time out in seconds of inactivity before log streaming is stopped. The default is `1800` (30 minutes).| |
+| `SCM_SITEEXTENSIONS_FEED_URL` | URL of the site extensions gallery. The default is `https://www.nuget.org/api/v2/`. The URL of the old feed is `http://www.siteextensions.net/api/v2/`. | |
+| `SCM_USE_LIBGIT2SHARP_REPOSITORY` | Set to `0` to use git.exe instead of libgit2sharp for git operations. | |
+| `WEBSITE_LOAD_USER_PROFILE` | In case of the error `The specified user does not have a valid profile.` during ASP.NET build automation (such as during Git deployment), set this variable to `1` to load a full user profile in the build environment. This setting is only applicable when `WEBSITE_COMPUTE_MODE` is `Dedicated`. | |
+| `WEBSITE_SCM_IDLE_TIMEOUT_IN_MINUTES` | Time out in minutes for the SCM (Kudu) site. The default is `20`. | |
+| `SCM_DO_BUILD_DURING_DEPLOYMENT` | With [ZIP deploy](deploy-zip.md), the deployment engine assumes that a ZIP file is ready to run as-is and doesn't run any build automation. To enable the same build automation as in [Git deploy](deploy-local-git.md), set to `true`. |
+
+<!--
+SCM_GIT_USERNAME
+SCM_GIT_EMAIL
+ -->
+
+# [Oryx (Linux)](#tab/oryx)
+
+Oryx build configuration applies to Linux apps and is used to control the behavior of Git-based (or ZIP-based) deployments. See [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
+
+--
+
+## Language-specific settings
+
+This section shows the configurable runtime settings for each supported language framework. Additional settings are available during [build automation](#build-automation) at deployment time.
+
+# [.NET](#tab/dotnet)
+
+<!--
+| DOTNET_HOSTING_OPTIMIZATION_CACHE |
+ -->
+| Setting name | Description |
+|-|-|
+| `PORT` | Read-only. For Linux apps, port that the .NET runtime listens to in the container. |
+| `WEBSITE_ROLE_INSTANCE_ID` | Read-only. ID of the current instance. |
+| `HOME` | Read-only. Directory that points to shared storage (`/home`). |
+| `DUMP_DIR` | Read-only. Directory for the crash dumps (`/home/logs/dumps`). |
+| `APP_SVC_RUN_FROM_COPY` | Linux apps only. By default, the app is run from `/home/site/wwwroot`, a shared directory for all scaled-out instances. Set this variable to `true` to copy the app to a local directory in your container and run it from there. When using this option, be sure not to hard-code any reference to `/home/site/wwwroot`. Instead, use a path relative to `/home/site/wwwroot`. |
+<!-- | `USE_DOTNET_MONITOR` | if =true then /opt/dotnetcore-tools/dotnet-monitor collect --urls "http://0.0.0.0:50051" --metrics true --metricUrls "http://0.0.0.0:50050" > 2>&1 & -->
+
+# [Java](#tab/java)
+
+| Setting name | Description | Example |
+|-|-|-|
+| `JAVA_HOME` | Path of the Java installation directory ||
+| `JAVA_OPTS` | For Java SE apps, environment variables to pass into the `java` command. Can contain system variables. For Tomcat, use `CATALINA_OPTS`. | `-Dmysysproperty=%DRIVEPATH%` |
+| `AZURE_JAVA_APP_PATH` | Environment variable that custom scripts can use to locate `app.jar` after it's copied to local storage. | |
+| `SKIP_JAVA_KEYSTORE_LOAD` | Set to `1` to prevent App Service from loading the certificates into the key store automatically. ||
+| `WEBSITE_JAVA_JAR_FILE_NAME` | The .jar file to use. Appends `.jar` to the value if it does not already end in `.jar`. Defaults to `app.jar`. ||
+| `WEBSITE_JAVA_WAR_FILE_NAME` | The .war file to use. Appends `.war` to the value if it does not already end in `.war`. Defaults to `app.war`. ||
+| `JAVA_ARGS` | Java options required by different Java web servers. Defaults to `-noverify -Djava.net.preferIPv4Stack=true`. ||
+| `JAVA_WEBSERVER_PORT_ENVIRONMENT_VARIABLES` | Environment variables used by popular Java web frameworks for the server port. Frameworks include Spring, Micronaut, Grails, MicroProfile Thorntail, Helidon, Ratpack, and Quarkus. ||
+| `JAVA_TMP_DIR` | Added to Java args as `-Dsite.tempdir`. Defaults to `TEMP`. ||
+| `WEBSITE_SKIP_LOCAL_COPY` | By default, the deployed app.jar is copied from `/home/site/wwwroot` to a local location. To disable this behavior and load app.jar directly from `/home/site/wwwroot`, set this variable to `1` or `true`. This setting has no effect if local cache is enabled. | |
+| `TOMCAT_USE_STARTUP_BAT` | Native Windows apps only. By default, the Tomcat server is started with its `startup.bat`. To start it using its `catalina.bat` instead, set to `0` or `False`. | `%LOCAL_EXPANDED%\tomcat` |
+| `CATALINA_OPTS` | For Tomcat apps, environment variables to pass into the `java` command. Can contain system variables. | |
+| `CATALINA_BASE` | To use a custom Tomcat installation, set to the installation's location. | |
+| `WEBSITE_JAVA_MAX_HEAP_MB` | The Java maximum heap in MB. This setting is effective only when an experimental Tomcat version is used. | |
+| `WEBSITE_DISABLE_JAVA_HEAP_CONFIGURATION` | Manually disable `WEBSITE_JAVA_MAX_HEAP_MB` by setting this variable to `true` or `1`. | |
+| `WEBSITE_AUTH_SKIP_PRINCIPAL` | By default, the following Tomcat [HttpServletRequest interface](https://tomcat.apache.org/tomcat-5.5-doc/servletapi/javax/servlet/http/HttpServletRequest.html) methods are hydrated when you enable the built-in [authentication](overview-authentication-authorization.md): `isSecure`, `getRemoteAddr`, `getRemoteHost`, `getScheme`, `getServerPort`. To disable the hydration, set to `1`. | |
+| `WEBSITE_SKIP_FILTERS` | To disable all servlet filters added by App Service, set to `1`. ||
+| `IGNORE_CATALINA_BASE` | By default, App Service checks if the Tomcat variable `CATALINA_BASE` is defined. If not, it looks for the existence of `%HOME%\tomcat\conf\server.xml`. If the file exists, it sets `CATALINA_BASE` to `%HOME%\tomcat`. To disable this behavior and remove `CATALINA_BASE`, set this variable to `1` or `true`. ||
+| `PORT` | Read-only. For Linux apps, port that the Java runtime listens to in the container. | |
+| `WILDFLY_VERSION` | Read-only. For JBoss (Linux) apps, WildFly version. | |
+| `TOMCAT_VERSION` | Read-only. For Linux Tomcat apps, Tomcat version. ||
+| `JBOSS_HOME` | Read-only. For JBoss (Linux) apps, path of the WildFly installation. | |
+| `AZURE_JETTY9_CMDLINE` | Read-only. For native Windows apps, command-line arguments for starting Jetty 9. | |
+| `AZURE_JETTY9_HOME` | Read-only. For native Windows apps, path to the Jetty 9 installation.| |
+| `AZURE_JETTY93_CMDLINE` | Read-only. For native Windows apps, command-line arguments for starting Jetty 9.3. | |
+| `AZURE_JETTY93_HOME` | Read-only. For native Windows apps, path to the Jetty 9.3 installation. | |
+| `AZURE_TOMCAT7_CMDLINE` | Read-only. For native Windows apps, command-line arguments for starting Tomcat 7. | |
+| `AZURE_TOMCAT7_HOME` | Read-only. For native Windows apps, path to the Tomcat 7 installation. | |
+| `AZURE_TOMCAT8_CMDLINE` | Read-only. For native Windows apps, command-line arguments for starting Tomcat 8. | |
+| `AZURE_TOMCAT8_HOME` | Read-only. For native Windows apps, path to the Tomcat 8 installation. | |
+| `AZURE_TOMCAT85_CMDLINE` | Read-only. For native Windows apps, command-line arguments for starting Tomcat 8.5. | |
+| `AZURE_TOMCAT85_HOME` | Read-only. For native Windows apps, path to the Tomcat 8.5 installation. | |
+| `AZURE_TOMCAT90_CMDLINE` | Read-only. For native Windows apps, command-line arguments for starting Tomcat 9. | |
+| `AZURE_TOMCAT90_HOME` | Read-only. For native Windows apps, path to the Tomcat 9 installation. | |
+| `AZURE_SITE_HOME` | The value added to the Java args as `-Dsite.home`. The default is the value of `HOME`. | |
+| `HTTP_PLATFORM_PORT` | Added to Java args as `-Dport.http`. The following environment variables used by different Java web frameworks are also set to this value: `SERVER_PORT`, `MICRONAUT_SERVER_PORT`, `THORNTAIL_HTTP_PORT`, `RATPACK_PORT`, `QUARKUS_HTTP_PORT`, `PAYARAMICRO_PORT`. ||
+| `AZURE_LOGGING_DIR` | Native Windows apps only. Added to Java args as `-Dsite.logdir`. The default is `%HOME%\LogFiles\`. ||
+
+<!--
+WEBSITE_JAVA_COPY_ALL
+AZURE_SITE_APP_BASE
+ -->
+
+# [Node.js](#tab/node)
+
+| Setting name | Description |
+|-|-|
+| `PORT` | Read-only. For Linux apps, port that the Node.js app listens to in the container. |
+| `WEBSITE_ROLE_INSTANCE_ID` | Read-only. ID of the current instance. |
+| `PM2HOME` | |
+| `WEBSITE_NODE_DEFAULT_VERSION` | For native Windows apps, the default Node.js version the app is using. Any of the [supported Node.js versions](configure-language-nodejs.md#show-nodejs-version) can be used here. |
+
+<!-- APPSVC_REMOTE_DEBUGGING
+APPSVC_REMOTE_DEBUGGING_BREAK
+APPSVC_TUNNEL_PORT -->
+
+# [Python](#tab/python)
+
+| Setting name | Description |
+|-|-|
+| `APPSVC_VIRTUAL_ENV` | Read-only. |
+| `PORT` | Read-only. For Linux apps, port that the Python app listens to in the container. |
+
+<!-- APPSVC_REMOTE_DEBUGGING
+APPSVC_TUNNEL_PORT | -debugAdapter ptvsd -debugPort $APPSVC_TUNNEL_PORT"
+APPSVC_REMOTE_DEBUGGING_BREAK | debugArgs+=" -debugWait" -->
+
+# [PHP](#tab/php)
+
+| Setting name | Description | Example|
+|-|-|-|
+| `PHP_Extensions` | Comma-separated list of PHP extensions. | `extension1.dll,extension2.dll,Name1=value1` |
+| `PHP_ZENDEXTENSIONS` | For Windows native apps, set to the path of the XDebug extension, such as `D:\devtools\xdebug\2.6.0\php_7.2\php_xdebug-2.6.0-7.2-vc15-nts.dll`. For Linux apps, set to `xdebug` to use the XDebug version of the PHP container. ||
+| `PHP_VERSION` | Read-only. The selected PHP version. ||
+| `PORT` | Read-only. Port that Apache server listens to in the container. ||
+| `WEBSITE_ROLE_INSTANCE_ID` | Read-only. ID of the current instance. ||
+| `WEBSITE_PROFILER_ENABLE_TRIGGER` | Set to `TRUE` to add `xdebug.profiler_enable_trigger=1` and `xdebug.profiler_enable=0` to the default `php.ini`. ||
+| `WEBSITE_ENABLE_PHP_ACCESS_LOGS` | Set to `TRUE` to log requests to the server (`CustomLog \dev\stderr combined` is added to `/etc/apache2/apache2.conf`). ||
+| `APACHE_SERVER_LIMIT` | Apache specific variable. The default is `1000`. ||
+| `APACHE_MAX_REQ_WORKERS` | Apache specific variable. The default is `256`. ||
+
+<!--
+ZEND_BIN_PATH
+MEMCACHESHIM_REDIS_ENABLE
+MEMCACHESHIM_PORT
+APACHE_LOG_DIR | RUN sed -i 's!ErrorLog ${APACHE_LOG_DIR}/error.log!ErrorLog /dev/stderr!g' /etc/apache2/apache2.conf
+APACHE_RUN_USER | RUN sed -i 's!User ${APACHE_RUN_USER}!User www-data!g' /etc/apache2/apache2.conf
+APACHE_RUN_GROUP | RUN sed -i 's!User ${APACHE_RUN_GROUP}!Group www-data!g' /etc/apache2/apache2.conf
+-->
+
+# [Ruby](#tab/ruby)
+
+| Setting name | Description | Example |
+|-|-|-|
+| `PORT` | Read-only. Port that the Rails app listens to in the container. ||
+| `WEBSITE_ROLE_INSTANCE_ID` | Read-only. ID of the current instance. ||
+| `RAILS_IGNORE_SPLASH` | By default, a default splash page is displayed when no Gemfile is found. Set this variable to any value to disable the splash page. ||
+| `BUNDLE_WITHOUT` | To add `--without` options to `bundle install`, set the variable to the groups you want to exclude, separated by space. By default, all Gems are installed. | `test development` |
+| `BUNDLE_INSTALL_LOCATION` | Directory to install gems. The default is `/tmp/bundle`. ||
+| `RUBY_SITE_CONFIG_DIR` | Site config directory. The default is `/home/site/config`. The container checks for zipped gems in this directory. ||
+| `SECRET_KEY_BASE` | By default, a random secret key base is generated. To use a custom secret key base, set this variable to the desired key base. ||
+| `RAILS_ENV` | Rails environment. The default is `production`. ||
+| `GEM_PRISTINE` | Set this variable to any value to run `gem pristine --all`. ||
+
+--
+
+## Domain and DNS
+
+| Setting name| Description | Example |
+|-|-|-|
+| `WEBSITE_DNS_SERVER` | IP address of primary DNS server for outgoing connections (such as to a back-end service). The default DNS server for App Service is Azure DNS, whose IP address is `168.63.129.16`. If your app uses [VNet integration](web-sites-integrate-with-vnet.md) or is in an [App Service environment](environment/intro.md), it inherits the DNS server configuration from the VNet by default. | `10.0.0.1` |
+| `WEBSITE_DNS_ALT_SERVER` | IP address of fallback DNS server for outgoing connections. See `WEBSITE_DNS_SERVER`. | |
+
+<!--
+DOMAIN_OWNERSHIP_VERIFICATION_IDENTIFIERS
+ -->
+
+## TLS/SSL
+
+For more information, see [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md).
+
+| Setting name| Description |
+|-|-|
+| `WEBSITE_LOAD_CERTIFICATES` | Comma-separated thumbprint values of the certificates you want to load in your code, or `*` to allow all certificates to be loaded in code. Only [certificates added to your app](configure-ssl-certificate.md) can be loaded. |
+| `WEBSITE_PRIVATE_CERTS_PATH` | Read-only. Path in a Windows container to the loaded private certificates. |
+| `WEBSITE_PUBLIC_CERTS_PATH` | Read-only. Path in a Windows container to the loaded public certificates. |
+| `WEBSITE_INTERMEDIATE_CERTS_PATH` | Read-only. Path in a Windows container to the loaded intermediate certificates. |
+| `WEBSITE_ROOT_CERTS_PATH` | Read-only. Path in a Windows container to the loaded root certificates. |
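The following sketch reads a loaded public certificate from the path variable above; the file name layout is an assumption for illustration (see the linked article for the exact behavior on your OS):

```java
import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class LoadPublicCert {
    public static void main(String[] args) throws Exception {
        // Path injected by App Service in Windows containers; the
        // "<thumbprint>.cer" file name is an assumption for this sketch.
        String certsPath = System.getenv("WEBSITE_PUBLIC_CERTS_PATH");
        try (FileInputStream stream = new FileInputStream(certsPath + "/<thumbprint>.cer")) {
            CertificateFactory factory = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) factory.generateCertificate(stream);
            System.out.println(cert.getSubjectX500Principal());
        }
    }
}
```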
+
+## Deployment slots
+
+For more information on deployment slots, see [Set up staging environments in Azure App Service](deploy-staging-slots.md).
+
+| Setting name| Description | Example |
+|-|-|-|
+|`WEBSITE_SLOT_NAME`| Read-only. Name of the current deployment slot. The name of the production slot is `Production`. ||
+|`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS`| By default, the versions for site extensions are specific to each slot. This prevents unanticipated application behavior due to changing extension versions after a swap. If you want the extension versions to swap as well, set to `1` on *all slots*. ||
+|`WEBSITE_OVERRIDE_PRESERVE_DEFAULT_STICKY_SLOT_SETTINGS`| Designates certain settings as [sticky or not swappable by default](deploy-staging-slots.md#which-settings-are-swapped). Default is `true`. Set this setting to `false` or `0` for *all deployment slots* to make them swappable instead. There's no fine-grain control for specific setting types. ||
+|`WEBSITE_SWAP_WARMUP_PING_PATH`| Path to ping to warm up the target slot in a swap, beginning with a slash. The default is `/`, which pings the root path. | `/statuscheck` |
+|`WEBSITE_SWAP_WARMUP_PING_STATUSES`| Valid HTTP response codes for the warm-up operation during a swap. If the returned status code isn't in the list, the warmup and swap operations are stopped. By default, all response codes are valid. | `200,202` |
+| `WEBSITE_SLOT_NUMBER_OF_TIMEOUTS_BEFORE_RESTART` | During a slot swap, maximum number of timeouts after which we force restart the site on a specific VM instance. The default is `3`. ||
+| `WEBSITE_SLOT_MAX_NUMBER_OF_TIMEOUTS` | During a slot swap, maximum number of timeout requests for a single URL to make before giving up. The default is `5`. ||
+| `WEBSITE_SKIP_ALL_BINDINGS_IN_APPHOST_CONFIG` | Set to `true` or `1` to skip all bindings in `applicationHost.config`. The default is `false`. If your app triggers a restart because `applicationHost.config` is updated with the swapped hostnames of the slots, set this variable to `true` to avoid a restart of this kind. If you are running a Windows Communication Foundation (WCF) app, do not set this variable. ||
+
+<!--
+|`WEBSITE_SWAP_SLOTNAME`|||
+-->
+
+## Custom containers
+
+For more information on custom containers, see [Run a custom container in Azure](quickstart-custom-container.md).
+
+| Setting name| Description | Example |
+|-|-|-|
+| `WEBSITES_ENABLE_APP_SERVICE_STORAGE` | Set to `true` to enable the `/home` directory to be shared across scaled instances. The default is `false` for custom containers. ||
+| `WEBSITES_CONTAINER_START_TIME_LIMIT` | Amount of time in seconds to wait for the container to complete start-up before restarting the container. Default is `230`. You can increase it up to the maximum of `1800`. ||
+| `DOCKER_REGISTRY_SERVER_URL` | URL of the registry server, when running a custom container in App Service. For security, this variable is not passed on to the container. | `https://<server-name>.azurecr.io` |
+| `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. ||
+| `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. ||
+| `WEBSITES_WEB_CONTAINER_NAME` | In a Docker Compose app, only one of the containers can be internet accessible. Set to the name of the container defined in the configuration file to override the default container selection. By default, the internet accessible container is the first container to define port 80 or 8080, or, when no such container is found, the first container defined in the configuration file. | |
+| `WEBSITES_PORT` | For a custom container, the custom port number on the container to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. ||
+| `WEBSITE_CPU_CORES_LIMIT` | By default, a Windows container runs with all available cores for your chosen pricing tier. To reduce the number of cores, set to the number of desired cores limit. For more information, see [Customize the number of compute cores](configure-custom-container.md?pivots=container-windows#customize-the-number-of-compute-cores).||
+| `WEBSITE_MEMORY_LIMIT_MB` | By default all Windows Containers deployed in Azure App Service are limited to 1 GB RAM. Set to the desired memory limit in MB. The cumulative total of this setting across apps in the same plan must not exceed the amount allowed by the chosen pricing tier. For more information, see [Customize container memory](configure-custom-container.md?pivots=container-windows#customize-container-memory). ||
+| `MACHINEKEY_Decryption` | For Windows containers, this variable is injected into the container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100)). To override the default `decryption` value, set it as an app setting. ||
+| `MACHINEKEY_DecryptionKey` | For Windows containers, this variable is injected into the container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100)). To override the automatically generated `decryptionKey` value, set it as an app setting. ||
+| `MACHINEKEY_Validation` | For Windows containers, this variable is injected into the container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100)). To override the default `validation` value, set it as an app setting. ||
+| `MACHINEKEY_ValidationKey` | For Windows containers, this variable is injected into the container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100)). To override the automatically generated `validationKey` value, set it as an app setting. ||
+| `CONTAINER_WINRM_ENABLED` | For a Windows container app, set to `1` to enable Windows Remote Management (WIN-RM). ||
+
+<!--
+CONTAINER_ENCRYPTION_KEY
+CONTAINER_NAME
+CONTAINER_IMAGE_URL
+AzureWebEncryptionKey
+CONTAINER_START_CONTEXT_SAS_URI
+CONTAINER_AZURE_FILES_VOLUME_MOUNT_PATH
+CONTAINER_START_CONTEXT
+DOCKER_ENABLE_CI
+WEBSITE_DISABLE_PRELOAD_HANG_MITIGATION
+ -->
+
+## Scaling
+
+| Setting name| Description |
+|-|-|
+| `WEBSITE_INSTANCE_ID` | Read-only. Unique ID of the current VM instance, when the app is scaled out to multiple instances. |
+| `WEBSITE_IIS_SITE_NAME` | Deprecated. Use `WEBSITE_INSTANCE_ID`. |
+| `WEBSITE_DISABLE_OVERLAPPED_RECYCLING` | Overlapped recycling makes it so that before the current VM instance of an app is shut down, a new VM instance starts. In some cases, it can cause file locking issues. You can try turning it off by setting to `1`. |
+| `WEBSITE_DISABLE_CROSS_STAMP_SCALE` | By default, apps are allowed to scale across stamps if they use Azure Files or a Docker container. Set to `1` or `true` to disable cross-stamp scaling within the app's region. The default is `0`. Custom Docker containers that set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `true` or `1` cannot scale cross-stamps because their content is not completely encapsulated in the Docker container. |
+
+## Logging
+
+| Setting name| Description | Example |
+|-|-|-|
+| `WEBSITE_HTTPLOGGING_ENABLED` | Read-only. Shows whether the web server logging for Windows native apps is enabled (`1`) or not (`0`). ||
+| `WEBSITE_HTTPLOGGING_RETENTION_DAYS` | Retention period in days of web server logs for Windows native apps, if web server logs are enabled. | `10` |
+| `WEBSITE_HTTPLOGGING_CONTAINER_URL` | SAS URL of the blob storage container to store web server logs for Windows native apps, if web server logs are enabled. If not set, web server logs are stored in the app's file system (default shared storage). | |
+| `DIAGNOSTICS_AZUREBLOBRETENTIONINDAYS` | Retention period in days of application logs for Windows native apps, if application logs are enabled. | `10` |
+| `DIAGNOSTICS_AZUREBLOBCONTAINERSASURL` | SAS URL of the blob storage container to store application logs for Windows native apps, if application logs are enabled. | |
+| `APPSERVICEAPPLOGS_TRACE_LEVEL` | Minimum log level to ship to Log Analytics for the [AppServiceAppLogs](troubleshoot-diagnostic-logs.md#supported-log-types) log type. | |
+| `DIAGNOSTICS_LASTRESORTFILE` | The filename to create, or a relative path to the log directory, for logging internal errors for troubleshooting the listener. The default is `logging-errors.txt`. ||
+| `DIAGNOSTICS_LOGGINGSETTINGSFILE` | Path to the log settings file, relative to `D:\home` or `/home`. The default is `site\diagnostics\settings.json`. | |
+| `DIAGNOSTICS_TEXTTRACELOGDIRECTORY` | The log folder, relative to the app root (`D:\home\site\wwwroot` or `/home/site/wwwroot`). | `..\..\LogFiles\Application`|
+| `DIAGNOSTICS_TEXTTRACEMAXLOGFILESIZEBYTES` | Maximum size of the log file in bytes. The default is `131072` (128 KB). ||
+| `DIAGNOSTICS_TEXTTRACEMAXLOGFOLDERSIZEBYTES` | Maximum size of the log folder in bytes. The default is `1048576` (1 MB). ||
+| `DIAGNOSTICS_TEXTTRACEMAXNUMLOGFILES` | Maximum number of log files to keep. The default is `20`. | |
+| `DIAGNOSTICS_TEXTTRACETURNOFFPERIOD` | Timeout in milliseconds to keep application logging enabled. The default is `43200000` (12 hours). ||
+| `WEBSITE_LOG_BUFFERING` | By default, log buffering is enabled. Set to `0` to disable it. ||
+| `WEBSITE_ENABLE_PERF_MODE` | For native Windows apps, set to `TRUE` to turn off IIS log entries for successful requests returned within 10 seconds. This is a quick way to do performance benchmarking by removing extended logging. ||
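+
+For example, a minimal sketch (placeholder names) that tunes two of the settings above:
+
+```azurecli
+# Keep web server logs for 7 days and double the per-file application log size limit.
+az webapp config appsettings set \
+  --resource-group <resource-group> \
+  --name <app-name> \
+  --settings WEBSITE_HTTPLOGGING_RETENTION_DAYS=7 DIAGNOSTICS_TEXTTRACEMAXLOGFILESIZEBYTES=262144
+```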
+
+<!--
+| `DIAGNOSTICS_AZURETABLESASURL` | old? | |
+| WEBSITE_APPSERVICEAPPLOGS_TRACE_ENABLED | Read-only. Added when | |
+| AZMON_LOG_CATEGORY_APPSERVICEAPPLOGS_ENABLED | Read-only. Shows when the AppServiceAppLogs category in Azure Monitor settings is enabled. |
+AZMON_LOG_CATEGORY_APPSERVICEPLATFORMLOGS_ENABLED | Read-only. Shows when the AppServiceAppLogs category in Azure Monitor settings is enabled. |
+AZMON_LOG_CATEGORY_APPSERVICECONSOLELOGS_ENABLED | Read-only. Shows when the AppServiceConsoleLogs category in Azure Monitor settings is enabled. |
+WEBSITE_FUNCTIONS_AZUREMONITOR_CATEGORIES
+WINDOWS_TRACING_FLAGS
+WINDOWS_TRACING_LOGFILE
+WEBSITE_FREB_DISABLE
+WEBSITE_ARR_SESSION_AFFINITY_DISABLE
+
+-->
+
+## Performance counters
+
+The following are 'fake' environment variables: they don't appear if you enumerate all variables, but they return a value if you look them up individually by name. The value is dynamic and can change on every lookup.
+
+| Setting name| Description |
+|-|-|
+| `WEBSITE_COUNTERS_ASPNET` | A JSON object containing the ASP.NET perf counters. |
+| `WEBSITE_COUNTERS_APP` | A JSON object containing sandbox counters. |
+| `WEBSITE_COUNTERS_CLR` | A JSON object containing CLR counters. |
+| `WEBSITE_COUNTERS_ALL` | A JSON object containing the combination of the other three variables. |
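+
+Because these variables resolve only on individual lookup, enumeration tools won't show them. One hedged way to inspect one from outside the app is the Kudu command API, assuming a Windows app and valid deployment credentials (all names here are placeholders):
+
+```bash
+# Expanding %WEBSITE_COUNTERS_ALL% in a cmd shell is an individual lookup,
+# so it returns the JSON even though the variable never appears in an enumeration.
+curl -u '<deployment-user>:<deployment-password>' \
+  -X POST "https://<app-name>.scm.azurewebsites.net/api/command" \
+  -H "Content-Type: application/json" \
+  -d '{"command": "echo %WEBSITE_COUNTERS_ALL%", "dir": "site"}'
+```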
+
+## Caching
+
+| Setting name| Description |
+|-|-|
+| `WEBSITE_LOCAL_CACHE_OPTION` | Whether local cache is enabled. Available options are: <br/>- `Default`: Inherit the stamp-level global setting.<br/>- `Always`: Enable for the app.<br/>- `OnStorageUnavailability`<br/>- `Disabled`: Disabled for the app. |
+| `WEBSITE_LOCAL_CACHE_READWRITE_OPTION` | Read-write options of the local cache. Available options are: <br/>- `ReadOnly`: Cache is read-only.<br/>- `WriteWithCopyBack`: Allow writes to local cache and copy periodically to shared storage. Applicable only for single instance apps as the SCM site points to local cache.<br/>- `WriteButDiscardChanges`: Allow writes to local cache but discard changes made locally. |
+| `WEBSITE_LOCAL_CACHE_SIZEINMB` | Size of the local cache in MB. Default is `1000` (1 GB). |
+| `WEBSITE_LOCALCACHE_READY` | Read-only flag indicating whether the app is using local cache. |
+| `WEBSITE_DYNAMIC_CACHE` | Because the app's content is stored on a network file share so that multiple instances can access it, the dynamic cache improves performance by caching recently accessed files locally on an instance. The cache is invalidated when a file is modified. The cache location is `%SYSTEMDRIVE%\local\DynamicCache` (the same `%SYSTEMDRIVE%\local` quota is applied). By default, full content caching is enabled (set to `1`), which includes both file content and directory/file metadata (timestamps, size, directory content). To conserve local disk use, set to `2` to cache only directory/file metadata (timestamps, size, directory content). To turn off caching, set to `0`. |
+| `WEBSITE_READONLY_APP` | When using dynamic cache, you can disable write access to the app root (`D:\home\site\wwwroot` or `/home/site/wwwroot`) by setting this variable to `1`. Except for the `App_Data` directory, no exclusive locks are allowed, so that deployments don't get blocked by locked files. |
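+
+A possible sketch (placeholder names) that enables local cache at its larger size:
+
+```azurecli
+# Always use local cache for this app and allow it 2 GB.
+az webapp config appsettings set \
+  --resource-group <resource-group> \
+  --name <app-name> \
+  --settings WEBSITE_LOCAL_CACHE_OPTION=Always WEBSITE_LOCAL_CACHE_SIZEINMB=2000
+```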
+
+<!--
+HTTP_X_LOCALCACHE_READY_CHECK
+HTTP_X_APPINIT_CHECK
+X_SERVER_ROUTED
+HTTP_X_MS_REQUEST_ID
+HTTP_X_MS_APIM_HOST
+HTTP_X_MS_FORWARD_HOSTNAMES
+HTTP_X_MS_FORWARD_TOKEN
+HTTP_MWH_SECURITYTOKEN
+IS_SERVICE_ENDPOINT
+VNET_CLIENT_IP
+HTTP_X_MS_SITE_RESTRICTED_TOKEN
+HTTP_X_FROM
+| LOCAL_ADDR | internal private IP address of app |
+SERVERS_FOR_HOSTING_SERVER_PROVIDER
+NEED_PLATFORM_AUTHORIZATION
+TIP_VALUE
+TIP_RULE_NAME
+TIP_RULE_MAX_AGE
+REWRITE_PATH
+NEGOTIATE_CLIENT_CERT
+| CLIENT_CERT_MODE | used with ClientCertEnabled. Required means ClientCert is required. Optional means ClientCert is optional or accepted. OptionalInteractiveUser is similar to Optional; however, it will not ask user for Certificate in Browser Interactive scenario.|
+| HTTPS_ONLY | set with terraform? |
+ -->
+
+## Networking
+
+The following environment variables are related to [hybrid connections](app-service-hybrid-connections.md) and [VNET integration](web-sites-integrate-with-vnet.md).
+
+| Setting name | Description |
+|-|-|
+| `WEBSITE_RELAYS` | Read-only. Data needed to configure the Hybrid Connection, including endpoints and service bus data. |
+| `WEBSITE_REWRITE_TABLE` | Read-only. Used at runtime to do the lookups and rewrite connections appropriately. |
+| `WEBSITE_VNET_ROUTE_ALL` | By default, if you use [regional VNet Integration](web-sites-integrate-with-vnet.md#regional-vnet-integration), your app only routes RFC1918 traffic into your VNet. Set to `1` to route all outbound traffic into your VNet and be subject to the same NSGs and UDRs. The setting lets you access non-RFC1918 endpoints through your VNet, secure all outbound traffic leaving your app, and force tunnel all outbound traffic to a network appliance of your own choosing. |
+| `WEBSITE_PRIVATE_IP` | Read-only. IP address associated with the app when [integrated with a VNet](web-sites-integrate-with-vnet.md). For Regional VNet Integration, the value is an IP from the address range of the delegated subnet, and for Gateway-required VNet Integration, the value is an IP from the address range of the point-to-site address pool configured on the Virtual Network Gateway. This IP is used by the app to connect to the resources through the VNet. Also, it can change within the described address range. |
+| `WEBSITE_PRIVATE_PORTS` | Read-only. In VNet integration, shows which ports are usable by the app to communicate with other nodes. |
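+
+For instance, a sketch (placeholder names) that forces all outbound traffic through the integrated VNet:
+
+```azurecli
+# Route all outbound traffic, not just RFC1918 destinations, into the VNet.
+az webapp config appsettings set \
+  --resource-group <resource-group> \
+  --name <app-name> \
+  --settings WEBSITE_VNET_ROUTE_ALL=1
+```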
+
+<!-- | WEBSITE_SLOT_POLL_WORKER_FOR_CHANGE_NOTIFICATION | Poll worker before pinging the site to detect when change notification has been processed. |
+WEBSITE_SPECIAL_CACHE
+WEBSITE_SOCKET_STATISTICS_ENABLED
+| `WEBSITE_ENABLE_NETWORK_HEALTHCHECK` | Enable network health checks that won't be blocked by CORS or built-in authentication. Three check methods can be utilized: <br/>- Ping an IP address (configurable by `WEBSITE_NETWORK_HEALTH_IPADDRS`). <br/>- Check DNS resolution (configurable by `WEBSITE_NETWORK_HEALTH_DNS_ENDPOINTS`). <br/>- Poll URI endpoints (configurable by `WEBSITE_NETWORK_HEALTH_URI_ENDPOINTS`).<br/> |
+| `WEBSITE_NETWORK_HEALTH_IPADDRS` | https://msazure.visualstudio.com/One/_git/AAPT-Antares-EasyAuth/pullrequest/3763264 |
+| `WEBSITE_NETWORK_HEALTH_DNS_ENDPOINTS` | |
+| `WEBSITE_NETWORK_HEALTH_URI_ENDPOINTS` | |
+| `WEBSITE_NETWORK_HEALTH_INTEVALSECS` | Interval of the network health check in seconds. The minimum value is 30 seconds. | |
+| `WEBSITE_NETWORK_HEALTH_TIMEOUT_INTEVALSECS` | Time out of the network health check in seconds. | |
+
+-->
+<!-- | CONTAINER_WINRM_USERNAME |
+| CONTAINER_WINRM_PASSWORD| -->
+
+## Key vault references
+
+The following environment variables are related to [key vault references](app-service-key-vault-references.md).
+
+| Setting name | Description |
+|-|-|
+| `WEBSITE_KEYVAULT_REFERENCES` | Read-only. Contains information (including statuses) for all Key Vault references that are currently configured in the app. |
+| `WEBSITE_SKIP_CONTENTSHARE_VALIDATION` | If you set the shared storage connection of your app (using `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`) to a Key Vault reference, the app cannot resolve the key vault reference at app creation or update if one of the following conditions is true: <br/>- The app accesses the key vault with a system-assigned identity.<br/>- The app accesses the key vault with a user-assigned identity, and the key vault is [locked with a VNet](../key-vault/general/overview-vnet-service-endpoints.md).<br/>To avoid errors at create or update time, set this variable to `1`. |
+| `WEBSITE_DELAY_CERT_DELETION` | Set to `1` to ensure that a certificate that a worker process depends on is not deleted until the process exits. |
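+
+A Key Vault reference itself is just an app setting value with a special syntax. A hedged sketch (vault, secret, and app names are placeholders):
+
+```azurecli
+# Store a secret as a Key Vault reference; the app resolves it at runtime.
+az webapp config appsettings set \
+  --resource-group <resource-group> \
+  --name <app-name> \
+  --settings MySecret="@Microsoft.KeyVault(SecretUri=https://<vault-name>.vault.azure.net/secrets/<secret-name>/)"
+```
+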
+<!-- | `WEBSITE_ALLOW_DOUBLE_ESCAPING_URL` | TODO | -->
+
+## CORS
+
+The following environment variables are related to Cross-Origin Resource Sharing (CORS) configuration.
+
+| Setting name | Description |
+|-|-|
+| `WEBSITE_CORS_ALLOWED_ORIGINS` | Read-only. Shows the allowed origins for CORS. |
+| `WEBSITE_CORS_SUPPORT_CREDENTIALS` | Read-only. Shows whether setting the `Access-Control-Allow-Credentials` header to `true` is enabled (`True`) or not (`False`). |
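+
+Both values are read-only and reflect the CORS configuration, which you change elsewhere; for example (placeholder names):
+
+```azurecli
+# Allow an origin; WEBSITE_CORS_ALLOWED_ORIGINS then reflects it.
+az webapp cors add \
+  --resource-group <resource-group> \
+  --name <app-name> \
+  --allowed-origins https://www.contoso.com
+```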
+
+## Authentication & Authorization
+
+The following environment variables are related to [App Service authentication](overview-authentication-authorization.md).
+
+| Setting name| Description|
+|-|-|
+| `WEBSITE_AUTH_DISABLE_IDENTITY_FLOW` | When set to `true`, disables assigning the thread principal identity in ASP.NET-based web applications (including v1 Function Apps). This is designed to allow developers to protect access to their site with auth, but still have it use a separate login mechanism within their app logic. The default is `false`. |
+| `WEBSITE_AUTH_HIDE_DEPRECATED_SID` | `true` or `false`. The default value is `false`. This is a setting for the legacy Azure Mobile Apps integration for Azure App Service. Setting this to `true` resolves an issue where the SID (security ID) generated for authenticated users might change if the user changes their profile information. Changing this value may result in existing Azure Mobile Apps user IDs changing. Most apps do not need to use this setting. |
+| `WEBSITE_AUTH_NONCE_DURATION`| A _timespan_ value in the form `_hours_:_minutes_:_seconds_`. The default value is `00:05:00`, or 5 minutes. This setting controls the lifetime of the [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) generated for all browser-driven logins. If a login fails to complete in the specified time, the login flow will be retried automatically. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.nonceExpirationInterval` configuration value. |
+| `WEBSITE_AUTH_PRESERVE_URL_FRAGMENT` | When set to `true` and users click on app links that contain URL fragments, the login process will ensure that the URL fragment part of your URL does not get lost in the login redirect process. For more information, see [Customize sign-in and sign-out in Azure App Service authentication](configure-authentication-customize-sign-in-out.md#preserve-url-fragments). |
+| `WEBSITE_AUTH_USE_LEGACY_CLAIMS` | To maintain backward compatibility across upgrades, the authentication module uses the legacy claims mapping of short to long names in the `/.auth/me` API, so certain mappings are excluded (e.g. "roles"). To get the more modern version of the claims mappings, set this variable to `False`. In the "roles" example, it would be mapped to the long claim name "http://schemas.microsoft.com/ws/2008/06/identity/claims/role". |
+| `WEBSITE_AUTH_DISABLE_WWWAUTHENTICATE` | `true` or `false`. The default value is `false`. When set to `true`, removes the [`WWW-Authenticate`](https://developer.mozilla.org/docs/Web/HTTP/Headers/WWW-Authenticate) HTTP response header from module-generated HTTP 401 responses. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `identityProviders.azureActiveDirectory.login.disableWwwAuthenticate` configuration value. |
+| `WEBSITE_AUTH_STATE_DIRECTORY` | A local file system directory path where tokens are stored when the file-based token store is enabled. The default value is `%HOME%\Data\.auth`. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.tokenStore.fileSystem.directory` configuration value. |
+| `WEBSITE_AUTH_TOKEN_CONTAINER_SASURL` | A fully qualified blob container URL. Instructs the auth module to store and load all encrypted tokens to the specified blob storage container instead of using the default local file system. |
+| `WEBSITE_AUTH_TOKEN_REFRESH_HOURS` | Any positive decimal number. The default value is `72` (hours). This setting controls the amount of time after a session token expires that the `/.auth/refresh` API can be used to refresh it. It is intended primarily for use with Azure Mobile Apps, which rely on session tokens. Refresh attempts after this period will fail and end users will be required to sign in again. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.tokenStore.tokenRefreshExtensionHours` configuration value. |
+| `WEBSITE_AUTH_TRACE_LEVEL`| Controls the verbosity of authentication traces written to [Application Logging](troubleshoot-diagnostic-logs.md#enable-application-logging-windows). Valid values are `Off`, `Error`, `Warning`, `Information`, and `Verbose`. The default value is `Verbose`. |
+| `WEBSITE_AUTH_VALIDATE_NONCE`| `true` or `false`. The default value is `true`. This value should never be set to `false` except when temporarily debugging [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) validation failures that occur during interactive logins. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.validateNonce` configuration value. |
+| `WEBSITE_AUTH_V2_CONFIG_JSON` | This environment variable is populated automatically by the Azure App Service platform and is used to configure the integrated authentication module. The value of this environment variable corresponds to the V2 (non-classic) authentication configuration for the current app in Azure Resource Manager. It's not intended to be configured explicitly. |
+| `WEBSITE_AUTH_ENABLED` | Read-only. Injected into a Windows or Linux app to indicate whether App Service authentication is enabled. |
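+
+As a hedged example for the V1 (classic) experience (placeholder names), you might quiet the auth traces and lengthen the nonce lifetime while diagnosing slow logins:
+
+```azurecli
+# Log only warnings and errors, and allow logins up to 10 minutes to complete.
+az webapp config appsettings set \
+  --resource-group <resource-group> \
+  --name <app-name> \
+  --settings WEBSITE_AUTH_TRACE_LEVEL=Warning WEBSITE_AUTH_NONCE_DURATION=00:10:00
+```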
+
+<!-- System settings
+WEBSITE_AUTH_RUNTIME_VERSION
+WEBSITE_AUTH_API_PREFIX
+WEBSITE_AUTH_TOKEN_STORE
+WEBSITE_AUTH_MOBILE_COMPAT
+WEBSITE_AUTH_AAD_BYPASS_SINGLE_TENANCY_CHECK
+WEBSITE_AUTH_COOKIE_EXPIRATION_TIME
+WEBSITE_AUTH_COOKIE_EXPIRATION_MODE
+WEBSITE_AUTH_PROXY_HEADER_CONVENTION
+WEBSITE_AUTH_PROXY_HOST_HEADER
+WEBSITE_AUTH_PROXY_PROTO_HEADER
+WEBSITE_AUTH_REQUIRE_HTTPS
+WEBSITE_AUTH_DEFAULT_PROVIDER
+WEBSITE_AUTH_UNAUTHENTICATED_ACTION
+WEBSITE_AUTH_EXTERNAL_REDIRECT_URLS
+WEBSITE_AUTH_CUSTOM_IDPS
+WEBSITE_AUTH_CUSTOM_AUTHZ_SETTINGS
+WEBSITE_AUTH_CLIENT_ID
+WEBSITE_AUTH_CLIENT_SECRET
+WEBSITE_AUTH_CLIENT_SECRET_SETTING_NAME
+WEBSITE_AUTH_CLIENT_SECRET_CERTIFICATE_THUMBPRINT
+WEBSITE_AUTH_OPENID_ISSUER
+WEBSITE_AUTH_ALLOWED_AUDIENCES
+WEBSITE_AUTH_LOGIN_PARAMS
+WEBSITE_AUTH_AUTO_AAD
+WEBSITE_AUTH_AAD_CLAIMS_AUTHORIZATION
+WEBSITE_AUTH_GOOGLE_CLIENT_ID
+WEBSITE_AUTH_GOOGLE_CLIENT_SECRET
+WEBSITE_AUTH_GOOGLE_CLIENT_SECRET_SETTING_NAME
+WEBSITE_AUTH_GOOGLE_SCOPE
+WEBSITE_AUTH_FB_APP_ID
+WEBSITE_AUTH_FB_APP_SECRET
+WEBSITE_AUTH_FB_APP_SECRET_SETTING_NAME
+WEBSITE_AUTH_FB_SCOPE
+WEBSITE_AUTH_GITHUB_CLIENT_ID
+WEBSITE_AUTH_GITHUB_CLIENT_SECRET
+WEBSITE_AUTH_GITHUB_CLIENT_SECRET_SETTING_NAME
+WEBSITE_AUTH_GITHUB_APP_SCOPE
+WEBSITE_AUTH_TWITTER_CONSUMER_KEY
+WEBSITE_AUTH_TWITTER_CONSUMER_SECRET
+WEBSITE_AUTH_TWITTER_CONSUMER_SECRET_SETTING_NAME
+WEBSITE_AUTH_MSA_CLIENT_ID
+WEBSITE_AUTH_MSA_CLIENT_SECRET
+WEBSITE_AUTH_MSA_CLIENT_SECRET_SETTING_NAME
+WEBSITE_AUTH_MSA_SCOPE
+WEBSITE_AUTH_FROM_FILE
+WEBSITE_AUTH_FILE_PATH
+| `WEBSITE_AUTH_CONFIG_DIR` | (Used for the deprecated "routes" feature) |
+| `WEBSITE_AUTH_ZUMO_USE_TOKEN_STORE_CLAIMS` | (looks like only a tactical fix) ||
+ -->
+
+## Managed identity
+
+The following environment variables are related to [managed identities](overview-managed-identity.md).
+
+|Setting name | Description |
+|-|-|
+|`IDENTITY_ENDPOINT` | Read-only. The URL to retrieve the token for the app's [managed identity](overview-managed-identity.md). |
+| `MSI_ENDPOINT` | Deprecated. Use `IDENTITY_ENDPOINT`. |
+| `IDENTITY_HEADER` | Read-only. Value that must be added to the `X-IDENTITY-HEADER` header when making an HTTP GET request to `IDENTITY_ENDPOINT`. The value is rotated by the platform. |
+| `MSI_SECRET` | Deprecated. Use `IDENTITY_HEADER`. |
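+
+Inside the app, acquiring a token is a plain HTTP GET built from these two variables. A minimal sketch (the Key Vault resource URI is just an example target):
+
+```bash
+# Request a managed identity access token for Azure Key Vault from app code.
+curl -s -H "X-IDENTITY-HEADER: $IDENTITY_HEADER" \
+  "$IDENTITY_ENDPOINT?resource=https://vault.azure.net&api-version=2019-08-01"
+```
+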
+<!-- | `WEBSITE_AUTHENTICATION_ENDPOINT_ENABLED` | Disabled by default? TODO | -->
+
+## Health check
+
+The following environment variables are related to [health checks](monitor-instances-health-check.md).
+
+| Setting name | Description |
+|-|-|
+| `WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | The maximum number of failed pings before removing the instance. Set to a value between `2` and `100`. When you are scaling up or out, App Service pings the Health check path to ensure new instances are ready. For more information, see [Health check](monitor-instances-health-check.md).|
+| `WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | To avoid overwhelming healthy instances, no more than half of the instances will be excluded. For example, if an App Service Plan is scaled to four instances and three are unhealthy, at most two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. To override this behavior, set to a value between `0` and `100`. A higher value means more unhealthy instances will be removed. The default is `50` (50%). |
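+
+A hedged sketch (placeholder names) that removes an instance after five failed pings but never excludes more than a quarter of the instances:
+
+```azurecli
+# Tighten ping failures; cap exclusions at 25% of instances.
+az webapp config appsettings set \
+  --resource-group <resource-group> \
+  --name <app-name> \
+  --settings WEBSITE_HEALTHCHECK_MAXPINGFAILURES=5 WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT=25
+```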
+
+## Push notifications
+
+The following environment variables are related to the [push notifications](/previous-versions/azure/app-service-mobile/app-service-mobile-xamarin-forms-get-started-push.md#configure-hub) feature.
+
+| Setting name | Description |
+|-|-|
+| `WEBSITE_PUSH_ENABLED` | Read-only. Added when push notifications are enabled. |
+| `WEBSITE_PUSH_TAG_WHITELIST` | Read-only. Contains the tags in the notification registration. |
+| `WEBSITE_PUSH_TAGS_REQUIRING_AUTH` | Read-only. Contains a list of tags in the notification registration that require user authentication. |
+| `WEBSITE_PUSH_TAGS_DYNAMIC` | Read-only. Contains a list of tags in the notification registration that were added automatically. |
+
+<!--
+## WellKnownAppSettings
+
+WEBSITE_ALWAYS_PERFORM_PRELOAD
+| WEBSITE_DISABLE_PRIMARY_VOLUMES | Set to `true` to disable the primary storage volume for that app. The default is `false`. |
+| WEBSITE_DISABLE_STANDBY_VOLUMES | Set to `true` to disable the stand-by storage volume for that app. The default is `false`. This setting has no effect if `WEBSITE_DISABLE_PRIMARY_VOLUMES` is `true`. |
+WEBSITE_FAILOVER_ONLY_ON_SBX_NW_FAILURES
+WEBSITE_ENABLE_SYSTEM_LOG
+WEBSITE_FRAMEWORK_JIT
+WEBSITE_ADMIN_SITEID
+WEBSITE_STAMP_DEPLOYMENT_ID
+| WEBSITE_DEPLOYMENT_ID | Read-only. internal ID of deployment slot |
+| WEBSITE_DISABLE_MSI | deprecated |
+WEBSITE_VNET_BLOCK_FOR_SETUP_MAIN_SITE
+WEBSITE_VNET_BLOCK_FOR_SETUP_SCM_SITE
+
+ -->
+
+## Webjobs
+
+The following environment variables are related to [WebJobs](webjobs-create.md).
+
+| Setting name| Description |
+|-|-|
+| `WEBJOBS_RESTART_TIME`| For continuous jobs, the delay in seconds before relaunching a job's process after it goes down for any reason. |
+| `WEBJOBS_IDLE_TIMEOUT`| For triggered jobs, the timeout in seconds after which the job is aborted if it's idle (no CPU time or output). |
+| `WEBJOBS_HISTORY_SIZE`| For triggered jobs, maximum number of runs kept in the history directory per job. The default is `50`. |
+| `WEBJOBS_STOPPED`| Set to `1` to disable running any job, and stop all currently running jobs. |
+| `WEBJOBS_DISABLE_SCHEDULE`| Set to `1` to turn off all scheduled triggering. Jobs can still be manually invoked. |
+| `WEBJOBS_ROOT_PATH`| Absolute or relative path of webjob files. For a relative path, the value is combined with the default root path (`D:/home/site/wwwroot/` or `/home/site/wwwroot/`). |
+| `WEBJOBS_LOG_TRIGGERED_JOBS_TO_APP_LOGS`| Set to `true` to send output from triggered WebJobs to the application logs pipeline (which supports file system, blobs, and tables). |
+| `WEBJOBS_SHUTDOWN_FILE` | File that App Service creates when a shutdown request is detected. It's the web job process's responsibility to detect the presence of this file and initiate shutdown. When using the WebJobs SDK, this part is handled automatically. |
+| `WEBJOBS_PATH` | Read-only. Root path of currently running job (will be under some temporary directory). |
+| `WEBJOBS_NAME` | Read-only. Current job name. |
+| `WEBJOBS_TYPE` | Read-only. Current job type (`triggered` or `continuous`). |
+| `WEBJOBS_DATA_PATH` | Read-only. Current job metadata path to contain the job's logs, history, and any artifact of the job. |
+| `WEBJOBS_RUN_ID` | Read-only. For triggered jobs, current run ID of the job. |
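+
+For example (a sketch with placeholder names), keeping more run history and giving idle triggered jobs a longer grace period:
+
+```azurecli
+# Retain 100 runs per triggered job and abort idle jobs after 30 minutes.
+az webapp config appsettings set \
+  --resource-group <resource-group> \
+  --name <app-name> \
+  --settings WEBJOBS_HISTORY_SIZE=100 WEBJOBS_IDLE_TIMEOUT=1800
+```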
+
+## Functions
+
+| Setting name | Description |
+|-|-|
+| `WEBSITE_FUNCTIONS_ARMCACHE_ENABLED` | Set to `0` to disable the functions cache. |
+| `WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `FUNCTIONS_EXTENSION_VERSION` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `AzureWebJobsSecretStorageType` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `FUNCTIONS_WORKER_RUNTIME` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `AzureWebJobsStorage` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `WEBSITE_CONTENTSHARE` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `WEBSITE_CONTENTOVERVNET` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `WEBSITE_ENABLE_BROTLI_ENCODING` | [App settings reference for Azure Functions](../azure-functions/functions-app-settings.md) |
+| `WEBSITE_USE_PLACEHOLDER` | Set to `0` to disable the placeholder functions optimization on the consumption plan. The placeholder is an optimization that [improves the cold start](../azure-functions/functions-scale.md#cold-start-behavior). |
+| `WEBSITE_PLACEHOLDER_MODE` | Read-only. Shows whether the function app is running on a placeholder host (`generalized`) or its own host (`specialized`). |
+| `WEBSITE_DISABLE_ZIP_CACHE` | When your app runs from a [ZIP package](deploy-run-package.md) (`WEBSITE_RUN_FROM_PACKAGE=1`), the five most recently deployed ZIP packages are cached in the app's file system (`D:\home\data\SitePackages`). Set this variable to `1` to disable this cache. For Linux consumption apps, the ZIP package cache is disabled by default. |
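+
+A hedged sketch using the Functions flavor of the same command (placeholder names):
+
+```azurecli
+# Turn off the ZIP package cache while troubleshooting run-from-package deploys.
+az functionapp config appsettings set \
+  --resource-group <resource-group> \
+  --name <function-app-name> \
+  --settings WEBSITE_DISABLE_ZIP_CACHE=1
+```
+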
+<!--
+| `FUNCTIONS_RUNTIME_SCALE_MONITORING_ENABLED` | TODO |
+| `WEBSITE_SKIP_FUNCTION_APP_WARMUP` | Apps can use appsetting to opt out of warmup. Restricted to linux only since this is primarily for static sites that use Linux dynamic function apps. Linux dynamic sites are used as placeholder sites for static sites. Function apps don't get specialized until static sites are deployed. This allows function apps used by static sites to skip warmup and using up containers before any content is deployed. TODO |
+ WEBSITE_IDLE_TIMEOUT_IN_MINUTES | removed WEBSITE_IDLE_TIMEOUT_IN_MINUTES because they aren't used in Linux Consumption.???
+| `WEBSITE_DISABLE_FUNCTIONS_STARTUPCONTEXT_CACHE`| This env var can be set to 1 by users in order to avoid using the Functions StartupContext Cache feature. |
+| `WEBSITE_CONTAINER_READY` | The env var is set to '1' to indicate to the Functions Runtime that it can proceed with initializing/specializing
+ // itself. For placeholders, it is set once specialization is complete on DWAS side and detours are reinitialized. For
+ // non-placeholder function apps, it is simply set to 1 when the process is started, because detours are initialized
+ // as part of starting the process (when PicoHelper.dll is loaded, well before any managed code is running).
+ // NOTE: This is set on all sites, irrespective of whether it is a Functions site, because the EnvSettings module depends
+ // upon it to decide when to inject the app-settings.|
+| `WEBSITE_PLACEHOLDER_PING_PATH` | This env var can be used to set a special warmup ping path on placeholder template sites. |
+| ` WEBSITE_PLACEHOLDER_DISABLE_AUTOSPECIALIZATION` | This env var can be used to disabe specialization from being enabled automatically for a given placeholder template site. |
+| `WEBSITE_FUNCTIONS_STARTUPCONTEXT_CACHE` | This env var is set only during specialization of a placeholder, to indicate to the Functions Runtime that
+ // some function-app related data needed at startup, like secrets, are available in a file at the path specified
+ // by this env var. |
+WEBSITE_ENABLE_COLD_START_PROFILING | This env var can be set to 1 by internal SLA sites in order to trigger collection of perf profiles, if feature is enabled on the stamp. |
+WEBSITE_CURRENT_STAMPNAME | these environments contain the stamp name used for various scale decisions |
+WEBSITE_HOME_STAMPNAME | these environments contain the stamp name used for various scale decisions |
+WEBSITE_ELASTIC_SCALING_ENABLED
+WEBSITE_FILECHANGEAUDIT_ENABLED
+| `WEBSITE_HTTPSCALEV2_ENABLED` | This is the default behavior for both v1 and v2 Azure Functions apps. |
+WEBSITE_CHANGEANALYSISSCAN_ENABLED
+WEBSITE_DISABLE_CHILD_SPECIALIZATION
+ -->
+
+<!--
+## Server variables
+|HTTP_HOST| |
+|HTTP_DISGUISED_HOST|the runtime site name for inbound requests.|
+HTTP_CACHE_CONTROL
+HTTP_X_SITE_DEPLOYMENT_ID
+HTTP_WAS_DEFAULT_HOSTNAME
+HTTP_X_ORIGINAL_URL
+HTTP_X_FORWARDED_FOR
+HTTP_X_ARR_SSL
+HTTP_X_FORWARDED_PROTO
+HTTP_X_APPSERVICE_PROTO
+HTTP_X_FORWARDED_TLSVERSION
+X-WAWS-Unencoded-URL
+CLIENT-IP
+X-ARR-LOG-ID
+DISGUISED-HOST
+X-SITE-DEPLOYMENT-ID
+WAS-DEFAULT-HOSTNAME
+X-Original-URL
+X-MS-CLIENT-PRINCIPAL-NAME
+X-MS-CLIENT-DISPLAY-NAME
+X-Forwarded-For
+X-ARR-SSL
+X-Forwarded-Proto
+X-AppService-Proto
+X-Forwarded-TlsVersion
+URL
+HTTP_CLIENT_IP
+APP_WARMING_UP |Regular/external requests made while warmup is in progress will have a APP_WARMING_UP server variable set to 1|
+HTTP_COOKIE
+SERVER_NAME
+HTTP_X_FORWARDED_HOST
+| HTTP_X_AZURE_FDID | Azure Front Door ID. See [](../frontdoor/front-door-faq.md#how-do-i-lock-down-the-access-to-my-backend-to-only-azure-front-door-) |
+HTTP_X_FD_HEALTHPROBE
+|WEBSITE_LOCALCACHE_ENABLED | shows up in w3wp.exe worker process |
+HTTP_X_ARR_LOG_ID
+| SCM_BASIC_AUTH_ALLOWED | set to "false" or "0" to disable basic authentication |
+HTTP_X_MS_WAWS_JWT
+HTTP_MWH_SecurityToken
+LB_ALGO_FOR_HOSTING_SERVER_PROVIDER
+ENABLE_CLIENT_AFFINITY
+HTTP_X_MS_FROM_GEOMASTER
+HTTP_X_MS_USE_GEOMASTER_CERT
+HTTP_X_MS_STAMP_TOKEN
+HTTPSCALE_REQUEST_ID
+HTTPSCALE_FORWARD_FRONTEND_KEY
+HTTPSCALE_FORWARD_REQUEST
+IS_VALID_STAMP_TOKEN
+NEEDS_SITE_RESTRICTED_TOKEN
+HTTP_X_MS_PRIVATELINK_ID
+ -->
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-auth-aad.md
Save your settings by clicking **PUT**.
Your apps are now configured. The front end is now ready to access the back end with a proper access token.
-For information on how to configure the access token for other providers, see [Refresh identity provider tokens](app-service-authentication-how-to.md#refresh-identity-provider-tokens).
+For information on how to configure the access token for other providers, see [Refresh identity provider tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
## Call API securely from server code

In this step, you enable your previously modified server code to make authenticated calls to the back-end API.
-Your front-end app now has the required permission and also adds the back end's client ID to the login parameters. Therefore, it can obtain an access token for authentication with the back-end app. App Service supplies this token to your server code by injecting a `X-MS-TOKEN-AAD-ACCESS-TOKEN` header to each authenticated request (see [Retrieve tokens in app code](app-service-authentication-how-to.md#retrieve-tokens-in-app-code)).
+Your front-end app now has the required permission and also adds the back end's client ID to the login parameters. Therefore, it can obtain an access token for authentication with the back-end app. App Service supplies this token to your server code by injecting a `X-MS-TOKEN-AAD-ACCESS-TOKEN` header to each authenticated request (see [Retrieve tokens in app code](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code)).
> [!NOTE]
> These headers are injected for all supported languages. You access them using the standard pattern for each respective language.
Congratulations! Your server code is now accessing the back-end data on behalf o
In this step, you point the front-end Angular.js app to the back-end API. This way, you learn how to retrieve the access token and make API calls to the back-end app with it.
-While the server code has access to request headers, client code can access `GET /.auth/me` to get the same access tokens (see [Retrieve tokens in app code](app-service-authentication-how-to.md#retrieve-tokens-in-app-code)).
+While the server code has access to request headers, client code can access `GET /.auth/me` to get the same access tokens (see [Retrieve tokens in app code](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code)).
> [!TIP]
> This section uses the standard HTTP methods to demonstrate the secure HTTP calls. However, you can use [Microsoft Authentication Library for JavaScript](https://github.com/AzureAD/microsoft-authentication-library-for-js) to help simplify the Angular.js application pattern.
Congratulations! Your client code is now accessing the back-end data on behalf o
## When access tokens expire
-Your access token expires after some time. For information on how to refresh your access tokens without requiring users to reauthenticate with your app, see [Refresh identity provider tokens](app-service-authentication-how-to.md#refresh-identity-provider-tokens).
+Your access token expires after some time. For information on how to refresh your access tokens without requiring users to reauthenticate with your app, see [Refresh identity provider tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
## Clean up resources
application-gateway Application Gateway Troubleshooting 502 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-troubleshooting-502.md
Validate that the Custom Health Probe is configured correctly as the preceding t
### Cause
-When a user request is received, the application gateway applies the configured rules to the request and routes it to a back-end pool instance. It waits for a configurable interval of time for a response from the back-end instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway does not receive a response from back-end application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway does not receive a resposne from the back-end application in this interval, the request will be tried against a second back-end pool member. If the second request fails the user request gets a 502 error.
+When a user request is received, the application gateway applies the configured rules to the request and routes it to a back-end pool instance. It waits for a configurable interval of time for a response from the back-end instance. By default, this interval is **20** seconds. In Application Gateway v1, if the application gateway does not receive a response from the back-end application in this interval, the user request gets a 502 error. In Application Gateway v2, if the application gateway does not receive a response from the back-end application in this interval, the request will be tried against a second back-end pool member. If the second request fails, the user request gets a 502 error.
### Solution
application-gateway Monitor Application Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/monitor-application-gateway-reference.md
+
+ Title: Monitoring Azure Application Gateway data reference
+description: Important reference material needed when you monitor Application Gateway
+ Last updated : 06/10/2021
+<!-- VERSION 2.2
+Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. -->
+
+# Monitoring Azure Application Gateway data reference
+
+See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for details on collecting and analyzing monitoring data for Azure Application Gateway.
+
+## Metrics
+
+<!-- REQUIRED if you support Metrics. If you don't, keep the section but call that out. Some services are only onboarded to logs.
+<!-- Please keep headings in this order -->
+
+<!-- OPTION 2 - Link to the metrics as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the metrics-supported link. For highly customized example, see [CosmosDB](https://docs.microsoft.com/azure/cosmos-db/monitor-cosmos-db-reference#metrics). They even regroup the metrics into usage type vs. resource provider and type.
+-->
+
+<!-- Example format. Mimic the setup of metrics supported, but add extra information -->
+
+### Application Gateway v2 metrics
+
+Resource Provider and Type: [Microsoft.Network/applicationGateways](/azure/azure-monitor/platform/metrics-supported#microsoftnetworkapplicationgateways)
+
+#### Timing metrics
+Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds.
+
+> [!NOTE]
+>
+> If the Application Gateway has more than one listener, then always filter by the *Listener* dimension while comparing different latency metrics to get more meaningful inference.
++
+| Metric | Unit | Description|
+|:-|:--|:|
+|**Backend connect time**|milliseconds|Time spent establishing a connection with the backend application.<br><br>This includes the network latency as well as the time taken by the backend server's TCP stack to establish new connections. In case of TLS, it also includes the time spent on handshake.|
+|**Backend first byte response time**|milliseconds|Time interval between start of establishing a connection to backend server and receiving the first byte of the response header.<br><br>This approximates the sum of Backend connect time, time taken by the request to reach the backend from Application Gateway, time taken by backend application to respond (the time the server took to generate content, potentially fetch database queries), and the time taken by first byte of the response to reach the Application Gateway from the backend.|
+|**Backend last byte response time**|milliseconds|Time interval between start of establishing a connection to backend server and receiving the last byte of the response body.<br><br>This approximates the sum of backend first byte response time and data transfer time. This number may vary greatly depending on the size of objects requested and the latency of the server network.|
+|**Application gateway total time**|milliseconds|Average time that it takes for a request to be received, processed and its response to be sent.<br><br>This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. This includes the processing time taken by Application Gateway, the Backend last byte response time, the time taken by Application Gateway to send all the response, and the Client RTT.|
+|**Client RTT**|milliseconds|Average round trip time between clients and Application Gateway.|
+
+These metrics can be used to determine whether the observed slowdown is due to the client network, Application Gateway performance, the backend network and backend server TCP stack saturation, backend application performance, or large file size.
+
+For example, if there's a spike in the *Backend first byte response time* trend but the *Backend connect time* trend is stable, it can be inferred that the Application Gateway-to-backend latency and the time taken to establish the connection are stable, and the spike is caused by an increase in the response time of the backend application. On the other hand, if the spike in *Backend first byte response time* is associated with a corresponding spike in *Backend connect time*, then it can be deduced that either the network between Application Gateway and the backend server or the backend server's TCP stack has saturated.
+
+If you notice a spike in *Backend last byte response time* but the *Backend first byte response time* is stable, then it can be deduced that the spike is because of a larger file being requested.
+
+Similarly, if the *Application gateway total time* has a spike but the *Backend last byte response time* is stable, then it can either be a sign of a performance bottleneck at the Application Gateway or a bottleneck in the network between the client and Application Gateway. Additionally, if the *Client RTT* also has a corresponding spike, then it indicates that the degradation involves the network between the client and Application Gateway.
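+
+To compare these trends outside the portal, one hedged option is the Azure CLI metrics API (the resource ID is a placeholder, and the metric names are assumed to match the display names above):
+
+```azurecli
+# Pull 5-minute averages of the two timing metrics being compared.
+az monitor metrics list \
+  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<appgw-name>" \
+  --metric "BackendConnectTime" "BackendFirstByteResponseTime" \
+  --interval PT5M --aggregation Average
+```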
+
+#### Application Gateway metrics
+
+| Metric | Unit | Description|
+|:-|:--|:|
+|**Bytes received**|bytes|Count of bytes received by the Application Gateway from the clients.|
+|**Bytes sent**|bytes|Count of bytes sent by the Application Gateway to the clients.|
+|**Client TLS protocol**|count|Count of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the TLS Protocol dimension.|
+|**Current capacity units**|count|Count of capacity units consumed to load balance the traffic. There are three determinants to capacity unit - compute unit, persistent connections, and throughput. Each capacity unit is composed of at most: one compute unit, or 2500 persistent connections, or 2.22-Mbps throughput.|
+|**Current compute units**|count|Count of processor capacity consumed. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing.|
+|**Current connections**|count|The total number of concurrent connections active from clients to the Application Gateway.|
+|**Estimated Billed Capacity units**|count|With the v2 SKU, the pricing model is driven by consumption. Capacity units measure consumption-based cost that is charged in addition to the fixed cost. *Estimated Billed Capacity units* indicates the number of capacity units on which billing is estimated. This is calculated as the greater of *Current capacity units* (capacity units required to load balance the traffic) and *Fixed billable capacity units* (minimum capacity units kept provisioned).|
+|**Failed Requests**|count|Number of requests that Application Gateway has served with 5xx server error codes. This includes the 5xx codes that are generated from the Application Gateway as well as the 5xx codes that are generated from the backend. The request count can be further filtered to show count per each/specific backend pool-http setting combination.|
+|**Fixed Billable Capacity Units**|count|The minimum number of capacity units kept provisioned as per the *Minimum scale units* setting (one instance translates to 10 capacity units) in the Application Gateway configuration.|
+|**New connections per second**|count|The average number of new TCP connections per second established from clients to the Application Gateway and from the Application Gateway to the backend members.|
+|**Response Status**|status code|HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.|
+|**Throughput**|bytes/sec|Number of bytes per second the Application Gateway has served.|
+|**Total Requests**|count|Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.|
+
+#### Backend metrics
+
+| Metric | Unit | Description|
+|:-|:--|:|
+|**Backend response status**|count|Count of HTTP response status codes returned by the backends. This does not include any response codes generated by the Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.|
+|**Healthy host count**|count|The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool.|
+|**Unhealthy host count**|count|The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.|
+|**Requests per minute per Healthy Host**|count|The average number of requests received by each healthy member in a backend pool in a minute. You must specify the backend pool using the *BackendPool HttpSettings* dimension.|
++
+### Application Gateway v1 metrics
+
+#### Application Gateway metrics
+
+| Metric | Unit | Description|
+|:-|:--|:|
+|**CPU Utilization**|percent|Displays the utilization of the CPUs allocated to the Application Gateway. Under normal conditions, CPU usage should not regularly exceed 90%, as this may cause latency in the websites hosted behind the Application Gateway and disrupt the client experience. You can indirectly control or improve CPU utilization by modifying the Application Gateway configuration: increase the instance count, move to a larger SKU size, or do both.|
+|**Current connections**|count|Count of current connections established with Application Gateway.|
+|**Failed Requests**|count|Number of requests that failed due to connection issues. This count includes requests that failed due to exceeding the *Request time-out* HTTP setting and requests that failed due to connection issues between Application Gateway and the backend. This count does not include failures due to no healthy backend being available. 4xx and 5xx responses from the backend are also not considered as part of this metric.|
+|**Response Status**|status code|HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.|
+|**Throughput**|bytes/sec|Number of bytes per second the Application Gateway has served.|
+|**Total Requests**|count|Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.|
+|**Web Application Firewall Blocked Requests Count**|count|Number of requests blocked by WAF.|
+|**Web Application Firewall Blocked Requests Distribution**|count|Number of requests blocked by WAF filtered to show count per each/specific WAF rule group or WAF rule ID combination.|
+|**Web Application Firewall Total Rule Distribution**|count|Number of requests received per each specific WAF rule group or WAF rule ID combination.|
++
+<!-- Keep this text as-is -->
+For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+++
+## Metric Dimensions
+
+<!-- REQUIRED. Please keep headings in this order -->
+<!-- If you have metrics with dimensions, outline it here. If you have no dimensions, say so. Questions email azmondocs@microsoft.com -->
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
++
+<!-- See https://docs.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
+
+Azure Application Gateway supports dimensions for some of the metrics in Azure Monitor. Each metric includes a description that explains the available dimensions specifically for that metric.
++
+## Resource logs
+<!-- REQUIRED. Please keep headings in this order -->
+
+This section lists the types of resource logs you can collect for Azure Application Gateway.
+
+<!-- List all the resource log types you can have and what they are for -->
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
+
+> [!NOTE]
+> The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.
+
+For more information, see [Back-end health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md#access-log).
+
+<!-- OPTION 2 - Link to the resource logs as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU WILL NOW HAVE TO MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the resource-log-categories link. You can group these sections however you want provided you include the proper links back to resource-log-categories article.
+-->
+
+<!-- Example format. Add extra information -->
+
+### Application Gateway
+
+Resource Provider and Type: [Microsoft.Network/applicationGateways](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkapplicationgateways)
+
+| Category | Display Name | Information|
+|:|:-||
+| **Activitylog** | Activity log | Activity log entries are collected by default. You can use [Azure activity logs](../azure-resource-manager/management/view-activity-logs.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. |
+|**ApplicationGatewayAccessLog**|Access log| You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP address, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property.|
+| **ApplicationGatewayPerformanceLog**|Performance log|You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, failed request count, and healthy and unhealthy back-end instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](#metrics) for performance data.|
+|**ApplicationGatewayFirewallLog**|Firewall log|You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds.|
++
+## Azure Monitor Logs tables
+<!-- REQUIRED. Please keep heading in this order -->
+
+This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Application Gateway and available for query by Log Analytics.
++
+<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://docs.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com.
+-->
+
+<!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
+
+|Resource Type | Notes |
+|-|--|
+| [Application Gateway](/azure/azure-monitor/reference/tables/tables-resourcetype#application-gateways) |Includes AzureActivity, AzureDiagnostics, and AzureMetrics |
++
+For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+
+### Diagnostics tables
+<!-- REQUIRED. Please keep heading in this order -->
+<!-- If your service uses the AzureDiagnostics table in Azure Monitor Logs / Log Analytics, list what fields you use and what they are for. Azure Diagnostics is over 500 columns wide with all services using the fields that are consistent across Azure Monitor and then adding extra ones just for themselves. If it uses service specific diagnostic table, refers to that table. If it uses both, put both types of information in. Most services in the future will have their own specific table. If you have questions, contact azmondocs@microsoft.com -->
+
+Azure Application Gateway uses the [Azure Diagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table to store resource log information. The following columns are relevant.
+
+**Azure Diagnostics**
+
+| Property | Description |
+|:-|:-|
+| requestUri_s | The URI of the client request. |
+| Message | Informational messages such as "SQL Injection Attack". |
+| userAgent_s | User agent details of the client request. |
+| ruleName_s | Request routing rule that is used to serve this request. |
+| httpMethod_s | HTTP method of the client request. |
+| instanceId_s | The Application Gateway instance to which the client request is routed for evaluation. |
+| httpVersion_s | HTTP version of the client request. |
+| clientIP_s | IP from which the request is made. |
+| host_s | Host header of the client request. |
+| requestQuery_s | Query string as part of the client request. |
+| sslEnabled_s | Whether the client request has SSL enabled. |
++
+## See Also
+
+<!-- replace below with the proper link to your main monitoring service article -->
+- See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
application-gateway Monitor Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/monitor-application-gateway.md
+
+ Title: Monitoring Azure Application Gateway
+description: Start here to learn how to monitor Azure Application Gateway
+ Last updated : 06/10/2021
+<!-- VERSION 2.2
+Template for the main monitoring article for Azure services.
+Keep the required sections and add/modify any content for any information specific to your service.
+This article should be in your TOC with the name *monitor-[Azure Application Gateway].md* and the TOC title "Monitor Azure Application Gateway".
+Put accompanying reference information into an article in the Reference section of your TOC with the name *monitor-[service-name]-reference.md* and the TOC title "Monitoring data".
+Keep the headings in this order.
+-->
+
+# Monitoring Azure Application Gateway
+<!-- REQUIRED. Please keep headings in this order -->
+<!-- Most services can use this section unchanged. Add to it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
++
+<!-- Optional diagram showing monitoring for your service. If you need help creating one, contact robb@microsoft.com -->
+
+## Monitoring overview page in Azure portal
+<!-- OPTIONAL. Please keep headings in this order -->
+<!-- Most services can use this section unchanged. Edit it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
+
+The **Overview** page in the Azure portal for each Application Gateway includes the following metrics:
+
+- Sum Total Requests
+- Sum Failed Requests
+- Sum Response Status by HttpStatus
+- Sum Throughput
+- Sum CurrentConnections
+- Avg Healthy Host Count By BackendPool HttpSettings
+- Avg Unhealthy Host Count By BackendPool HttpSettings
+
+This is just a subset of the metrics available for Application Gateway. For more information, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md).
++
+## Azure Monitor Network Insights
+
+<!-- OPTIONAL SECTION. Only include if your service has an "insight" associated with it. Examples of insights include
+ - CosmosDB https://docs.microsoft.com/azure/azure-monitor/insights/cosmosdb-insights-overview
+ - If you still aren't sure, contact azmondocs@microsoft.com.>
+-->
+
+Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
+
+<!-- Give a quick outline of what your "insight page" provides and refer to another article that gives details -->
+
+Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources (including Application Gateway), without requiring any configuration. For more information, see [Azure Monitor Network Insights](../azure-monitor/insights/network-insights-overview.md).
+
+## Monitoring data
+
+<!-- REQUIRED. Please keep headings in this order -->
+Azure Application Gateway collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data-from-azure-resources).
+
+See [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md) for detailed information on the metrics and logs created by Azure Application Gateway.
+
+<!-- If your service has additional non-Azure Monitor monitoring data then outline and refer to that here. Also include that information in the data reference as appropriate. -->
+
+## Collection and routing
+
+<!-- REQUIRED. Please keep headings in this order -->
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Application Gateway are listed in [Azure Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs).
+
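+For example, the following Azure CLI sketch (the resource and workspace IDs are placeholders) routes all three Application Gateway log categories and all metrics to a Log Analytics workspace:
+
+```azurecli
+az monitor diagnostic-settings create \
+  --name appgw-diagnostics \
+  --resource <application-gateway-resource-id> \
+  --workspace <log-analytics-workspace-id> \
+  --logs '[{"category":"ApplicationGatewayAccessLog","enabled":true},{"category":"ApplicationGatewayPerformanceLog","enabled":true},{"category":"ApplicationGatewayFirewallLog","enabled":true}]' \
+  --metrics '[{"category":"AllMetrics","enabled":true}]'
+```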
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+
+You can analyze metrics for Azure Application Gateway, together with metrics from other Azure services, by using Metrics Explorer: open **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](/azure/azure-monitor/platform/metrics-getting-started) for details on using this tool.
+
+For a list of the platform metrics collected for Azure Application Gateway, see [Monitoring Application Gateway data reference metrics](monitor-application-gateway-reference.md#metrics).
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
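+
+You can also retrieve platform metrics from the command line. For example, the following Azure CLI sketch (the resource ID is a placeholder) pulls the `TotalRequests` metric for the last hour at 5-minute granularity:
+
+```azurecli
+az monitor metrics list \
+  --resource <application-gateway-resource-id> \
+  --metric "TotalRequests" \
+  --offset 1h \
+  --interval PT5M
+```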
+
+
+## Analyzing logs
+
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Common and service-specific schema for Azure Resource Logs](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema).
+
+The [Activity log](/azure/azure-monitor/platform/activity-log) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of the types of resource logs collected for Azure Application Gateway, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md#resource-logs).
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md#azure-monitor-logs-tables).
+
+
+### Sample Kusto queries
+
+
+> [!IMPORTANT]
+> When you select **Logs** from the Application Gateway menu, Log Analytics is opened with the query scope set to the current Application Gateway. This means that log queries will only include data from that resource. If you want to run a query that includes data from other Application Gateways or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/log-query/scope/) for details.
+
+
+You can use the following queries to help you monitor your Application Gateway resource.
+
+```kusto
+// Requests per hour
+// Count of the incoming requests on the Application Gateway.
+// To create an alert for this query, click '+ New alert rule'
+AzureDiagnostics
+| where ResourceType == "APPLICATIONGATEWAYS" and OperationName == "ApplicationGatewayAccess"
+| summarize AggregatedValue = count() by bin(TimeGenerated, 1h), _ResourceId
+| render timechart
+```
+
+```kusto
+// Failed requests per hour
+// Count of requests to which Application Gateway responded with an error.
+// To create an alert for this query, click '+ New alert rule'
+AzureDiagnostics
+| where ResourceType == "APPLICATIONGATEWAYS" and OperationName == "ApplicationGatewayAccess" and httpStatus_d > 399
+| summarize AggregatedValue = count() by bin(TimeGenerated, 1h), _ResourceId
+| render timechart
+```
+
+```kusto
+// Top 10 Client IPs
+// Count of requests per client IP.
+AzureDiagnostics
+| where ResourceType == "APPLICATIONGATEWAYS" and OperationName == "ApplicationGatewayAccess"
+| summarize AggregatedValue = count() by clientIP_s
+| top 10 by AggregatedValue
+```
+
+```kusto
+// Errors by user agent
+// Number of errors by user agent.
+// To create an alert for this query, click '+ New alert rule'
+AzureDiagnostics
+| where ResourceType == "APPLICATIONGATEWAYS" and OperationName == "ApplicationGatewayAccess" and httpStatus_d > 399
+| summarize AggregatedValue = count() by userAgent_s, _ResourceId
+| sort by AggregatedValue desc
+```
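+
+If you also want a latency view, the `timeTaken_d` field in the access log (assuming it's present in your workspace) can be charted the same way:
+
+```kusto
+// Average request latency per hour
+AzureDiagnostics
+| where ResourceType == "APPLICATIONGATEWAYS" and OperationName == "ApplicationGatewayAccess"
+| summarize AggregatedValue = avg(timeTaken_d) by bin(TimeGenerated, 1h), _ResourceId
+| render timechart
+```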
+
+## Alerts
+
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/platform/alerts-metric-overview), [logs](/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](/azure/azure-monitor/platform/activity-log-alerts). Different types of alerts have different benefits and drawbacks.
+
+
+If you are creating or running an application that uses Application Gateway, [Azure Monitor Application Insights](/azure/azure-monitor/overview#application-insights) may offer additional types of alerts.
+
+The following tables list common and recommended alert rules for Application Gateway.
+
+
+**Application Gateway v1**
+
+| Alert type | Condition | Description |
+|:|:|:|
+|Metric|CPU utilization crosses 80%|Under normal conditions, CPU usage should not regularly exceed 90%, because sustained high CPU can cause latency in the websites hosted behind the Application Gateway and disrupt the client experience. Alerting at 80% gives you time to react before that point.|
+|Metric|Unhealthy host count crosses threshold|Indicates the number of backend servers that Application Gateway is unable to probe successfully. This catches issues where the Application Gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.|
+|Metric|Response status (4xx, 5xx) crosses threshold|When the Application Gateway response status is 4xx or 5xx. Occasional 4xx or 5xx responses can occur due to transient issues. You should observe the gateway in production to determine a static threshold, or use a dynamic threshold for the alert.|
+|Metric|Failed requests crosses threshold|When the failed requests metric crosses a threshold. You should observe the gateway in production to determine a static threshold, or use a dynamic threshold for the alert.|
+
+**Application Gateway v2**
+
+| Alert type | Condition | Description |
+|:|:|:|
+|Metric|Compute Unit utilization crosses 75% of average usage|Compute units measure the compute utilization of your Application Gateway. Check your average compute unit usage over the last month and set an alert if it crosses 75% of that average.|
+|Metric|Capacity Unit utilization crosses 75% of peak usage|Capacity units represent overall gateway utilization in terms of throughput, compute, and connection count. Check your maximum capacity unit usage over the last month and set an alert if it crosses 75% of that peak.|
+|Metric|Unhealthy host count crosses threshold|Indicates the number of backend servers that Application Gateway is unable to probe successfully. This catches issues where Application Gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.|
+|Metric|Response status (4xx, 5xx) crosses threshold|When the Application Gateway response status is 4xx or 5xx. Occasional 4xx or 5xx responses can occur due to transient issues. You should observe the gateway in production to determine a static threshold, or use a dynamic threshold for the alert.|
+|Metric|Failed requests crosses threshold|When the failed requests metric crosses a threshold. You should observe the gateway in production to determine a static threshold, or use a dynamic threshold for the alert.|
+|Metric|Backend last byte response time crosses threshold|Indicates the time interval between the start of establishing a connection to a backend server and receiving the last byte of the response body. Create an alert if the backend response latency deviates from its usual value by more than a set threshold.|
+|Metric|Application Gateway total time crosses threshold|The interval from the time Application Gateway receives the first byte of the HTTP request to the time the last response byte is sent to the client. Create an alert if this total time deviates from its usual value by more than a set threshold.|
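+
+As a starting point, you can create one of these metric alert rules from the command line. The following Azure CLI sketch (the names and resource ID are placeholders) alerts when any backend host is reported unhealthy:
+
+```azurecli
+az monitor metrics alert create \
+  --name appgw-unhealthy-hosts \
+  --resource-group <resource-group> \
+  --scopes <application-gateway-resource-id> \
+  --condition "avg UnhealthyHostCount > 0" \
+  --window-size 5m \
+  --evaluation-frequency 1m \
+  --description "Application Gateway reports unhealthy backend hosts"
+```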
+
+## Next steps
+
+
+- See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway.
+
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resource) for details on monitoring Azure resources.
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-linux-hrw-install.md
Title: Deploy a Linux Hybrid Runbook Worker in Azure Automation
description: This article tells how to install an Azure Automation Hybrid Runbook Worker to run runbooks on Linux-based machines in your local datacenter or cloud environment. Previously updated : 04/06/2021 Last updated : 06/29/2021
The Hybrid Runbook Worker feature supports the following distributions. All oper
* Oracle Linux 5, 6, and 7 * Red Hat Enterprise Linux Server 5, 6, 7, and 8 * Debian GNU/Linux 6, 7, and 8
-* Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS, and 18.04 LTS
-* SUSE Linux Enterprise Server 12 and 15 (SUSE did not release versions numbered 13 or 14)
+* Ubuntu 12.04 LTS, 14.04 LTS, 16.04 LTS, 18.04 LTS, and 20.04 LTS
+* SUSE Linux Enterprise Server 12, 15, and 15.1 (SUSE did not release versions numbered 13 or 14)
> [!IMPORTANT] > Before enabling the Update Management feature, which depends on the system Hybrid Runbook Worker role, confirm the distributions it supports [here](update-management/operating-system-requirements.md).
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-role-based-access-control.md
Perform the following steps to create the Azure Automation custom role in the Az
} ```
-1. Complete the remaining steps as outlined in [Create or update Azure custom roles using the Azure portal](/azure/role-based-access-control/custom-roles-portal#start-from-json). For [Step 3:Basics](/azure/role-based-access-control/custom-roles-portal#step-3-basics), note the following:
+1. Complete the remaining steps as outlined in [Create or update Azure custom roles using the Azure portal](../role-based-access-control/custom-roles-portal.md#start-from-json). For [Step 3: Basics](../role-based-access-control/custom-roles-portal.md#step-3-basics), note the following:
- In the **Custom role name** field, enter **Automation account Contributor (custom)** or a name matching your naming standards. - For **Baseline permissions**, select **Start from JSON**. Then select the custom JSON file you saved earlier.
automation Automation Region Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/how-to/automation-region-dns-records.md
Title: Azure Datacenter DNS records used by Azure Automation | Microsoft Docs
description: This article provides the DNS records required by Azure Automation features when restricting communication to a specific Azure region hosting that Automation account. Previously updated : 11/25/2020 Last updated : 06/29/2021
The following table provides the DNS record for each region.
| North Europe |ne-jobruntimedata-prod-su1.azure-automation.net</br>ne-agentservice-prod-1.azure-automation.net | | South Central US |scus-jobruntimedata-prod-su1.azure-automation.net</br>scus-agentservice-prod-1.azure-automation.net | | South East Asia |sea-jobruntimedata-prod-su1.azure-automation.net</br>sea-agentservice-prod-1.azure-automation.net|
+| Switzerland North |stzn-jobruntimedata-prod-su1.azure-automation.net</br>stzn-agentservice-prod-su1.azure-automation.net|
| UK South | uks-jobruntimedata-prod-su1.azure-automation.net</br>uks-agentservice-prod-1.azure-automation.net | | US Gov Virginia | usge-jobruntimedata-prod-su1.azure-automation.us<br>usge-agentservice-prod-1.azure-automation.us | | West Central US | wcus-jobruntimedata-prod-su1.azure-automation.net</br>wcus-agentservice-prod-1.azure-automation.net |
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/operating-system-requirements.md
Title: Azure Automation Update Management Supported Clients
description: This article describes the supported Windows and Linux operating systems with Azure Automation Update Management. Previously updated : 06/22/2021 Last updated : 06/29/2021
The following table lists the supported operating systems for update assessments
|Windows Server 2019 (Datacenter/Standard including Server Core)<br><br>Windows Server 2016 (Datacenter/Standard excluding Server Core)<br><br>Windows Server 2012 R2(Datacenter/Standard)<br><br>Windows Server 2012 | | |Windows Server 2008 R2 (RTM and SP1 Standard)| Update Management supports assessments and patching for this operating system. The [Hybrid Runbook Worker](../automation-windows-hrw-install.md) is supported for Windows Server 2008 R2. | |CentOS 6, 7, and 8 (x64) | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). |
+|Oracle Linux 6.x and 7.x (x64) | Linux agents require access to an update repository. |
|Red Hat Enterprise 6, 7, and 8 (x64) | Linux agents require access to an update repository. | |SUSE Linux Enterprise Server 12, 15, and 15.1 (x64) | Linux agents require access to an update repository. | |Ubuntu 14.04 LTS, 16.04 LTS, 18.04 LTS, and 20.04 LTS (x64) |Linux agents require access to an update repository. |
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
Title: Quickstart for adding feature flags to Spring Boot with Azure App Configuration description: Add feature flags to Spring Boot apps and manage them using Azure App Configuration-+ Previously updated : 08/06/2020- Last updated : 06/25/2021+ #Customer intent: As a Spring Boot developer, I want to use feature flags to control feature availability quickly and confidently.
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
1. Open the *pom.xml* file in a text editor and add the following to the list of `<dependencies>`:
- **Spring Cloud 1.1.x**
-
- ```xml
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>1.1.5</version>
- </dependency>
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>1.1.5</version>
- </dependency>
- <dependency>
- <groupId>org.springframework.boot</groupId>
- <artifactId>spring-boot-starter-thymeleaf</artifactId>
- </dependency>
- ```
-
- **Spring Cloud 1.2.x**
- ```xml <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>1.2.7</version>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
+ <version>2.0.0-beta.2</version>
</dependency> <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>1.2.7</version>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-feature-management-web</artifactId>
+ <version>2.0.0</version>
</dependency> <dependency> <groupId>org.springframework.boot</groupId>
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
</dependency> ```
-> [!Note]
-> There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/microsoft/spring-cloud-azure) for differences.
+> [!NOTE]
+> * If you need to support an older version of Spring Boot, see our [old appconfiguration library](https://github.com/Azure/azure-sdk-for-java).
+> * There is a non-web Feature Management Library that doesn't have a dependency on spring-web. Refer to GitHub's [documentation](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/appconfiguration/azure-spring-cloud-feature-management) for differences.
## Connect to an App Configuration store
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
import org.springframework.stereotype.Controller; import org.springframework.ui.Model;
- import com.microsoft.azure.spring.cloud.feature.manager.FeatureManager;
+ import com.azure.spring.cloud.feature.manager.FeatureManager;
import org.springframework.web.bind.annotation.GetMapping;
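
As a quick illustration of the new package in use, here's a sketch based on the quickstart's controller pattern (the `Beta` flag name and `welcome` view are examples):

```java
import com.azure.spring.cloud.feature.manager.FeatureManager;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class HelloController {

    private final FeatureManager featureManager;

    public HelloController(FeatureManager featureManager) {
        this.featureManager = featureManager;
    }

    @GetMapping("/welcome")
    public String welcome(Model model) {
        // isEnabledAsync returns a reactive result; block() resolves it here.
        model.addAttribute("Beta", featureManager.isEnabledAsync("Beta").block());
        return "welcome";
    }
}
```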
In this quickstart, you created a new App Configuration store and used it to man
* Learn more about [feature management](./concept-feature-management.md). * [Manage feature flags](./manage-feature-flags.md).
-* [Use feature flags in a Spring Boot Core app](./use-feature-flags-spring-boot.md).
+* [Use feature flags in a Spring Boot Core app](./use-feature-flags-spring-boot.md).
azure-app-configuration Use Feature Flags Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-feature-flags-spring-boot.md
ms.devlang: java Previously updated : 09/26/2019 Last updated : 06/25/2021
We recommend that you keep feature flags outside the application and manage them
The easiest way to connect your Spring Boot application to App Configuration is through the configuration provider:
-### Spring Cloud 1.1.x
- ```xml <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>1.1.2</version>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-feature-management-web</artifactId>
+ <version>2.0.0</version>
</dependency> ```
-### Spring Cloud 1.2.x
-
-```xml
-<dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-feature-management-web</artifactId>
- <version>1.2.2</version>
-</dependency>
-```
+> [!NOTE]
+> If you need to support an older version of Spring Boot, see our [old library](https://github.com/Azure/azure-sdk-for-java).
## Feature flag declaration
public String getOldFeature() {
## Next steps
-In this tutorial, you learned how to implement feature flags in your Spring Boot application by using the `spring-cloud-azure-feature-management-web` libraries. For more information about feature management support in Spring Boot and App Configuration, see the following resources:
+In this tutorial, you learned how to implement feature flags in your Spring Boot application by using the `azure-spring-cloud-feature-management-web` libraries. For more information about feature management support in Spring Boot and App Configuration, see the following resources:
* [Spring Boot feature flag sample code](./quickstart-feature-flag-spring-boot.md) * [Manage feature flags](./manage-feature-flags.md)
azure-functions Durable Functions Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-diagnostics.md
Title: Diagnostics in Durable Functions - Azure
description: Learn how to diagnose problems with the Durable Functions extension for Azure Functions. Previously updated : 05/12/2021 Last updated : 06/29/2021
The verbosity of tracking data emitted to Application Insights can be configured
} ```
-By default, all non-replay tracking events are emitted. The volume of data can be reduced by setting `Host.Triggers.DurableTask` to `"Warning"` or `"Error"` in which case tracking events will only be emitted for exceptional situations.
-
-To enable emitting the verbose orchestration replay events, the `LogReplayEvents` can be set to `true` in the `host.json` file under `durableTask` as shown:
-
-#### Functions 1.0
-
-```json
-{
- "durableTask": {
- "logReplayEvents": true
- }
-}
-```
-
-#### Functions 2.0
-
-```json
-{
- "extensions": {
- "durableTask": {
- "logReplayEvents": true
- }
- }
-}
-```
+By default, all _non-replay_ tracking events are emitted. The volume of data can be reduced by setting `Host.Triggers.DurableTask` to `"Warning"` or `"Error"`, in which case tracking events are only emitted for exceptional situations. To enable emitting the verbose orchestration replay events, set `logReplayEvents` to `true` in the [host.json](durable-functions-bindings.md#host-json) configuration file.
> [!NOTE] > By default, Application Insights telemetry is sampled by the Azure Functions runtime to avoid emitting data too frequently. This can cause tracking information to be lost when many lifecycle events occur in a short period of time. The [Azure Functions Monitoring article](../configure-monitoring.md#configure-sampling) explains how to configure this behavior.
+Inputs and outputs of orchestrator, activity, and entity functions are not logged by default. This default behavior is recommended because logging inputs and outputs could increase Application Insights costs, and function input and output payloads may also contain sensitive information. Instead, only the number of bytes for function inputs and outputs is logged by default. If you want the Durable Functions extension to log the full input and output payloads, set the `traceInputsAndOutputs` property to `true` in the [host.json](durable-functions-bindings.md#host-json) configuration file.
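+
+For example, a minimal host.json sketch for Functions 2.x and later that enables both of these verbose options (both settings live under `extensions.durableTask`):
+
+```json
+{
+  "extensions": {
+    "durableTask": {
+      "logReplayEvents": true,
+      "traceInputsAndOutputs": true
+    }
+  }
+}
+```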
+ ### Single instance query The following query shows historical tracking data for a single instance of the [Hello Sequence](durable-functions-sequence.md) function orchestration. It's written using the [Kusto Query Language](/azure/data-explorer/kusto/query/). It filters out replay execution so that only the *logical* execution path is shown. Events can be ordered by sorting by `timestamp` and `sequenceNumber` as shown in the query below:
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-timers.md
public static async Task Run(
```js const df = require("durable-functions");
-const moment = require("moment");
+const { DateTime } = require("luxon");
module.exports = df.orchestrator(function*(context) { for (let i = 0; i < 10; i++) {
- const deadline = moment.utc(context.df.currentUtcDateTime).add(1, 'd');
- yield context.df.createTimer(deadline.toDate());
+ const deadline = DateTime.fromJSDate(context.df.currentUtcDateTime, {zone: 'utc'}).plus({ days: 1 });
+ yield context.df.createTimer(deadline.toJSDate());
yield context.df.callActivity("SendBillingEvent"); } });
public static async Task<bool> Run(
```js const df = require("durable-functions");
-const moment = require("moment");
+const { DateTime } = require("luxon");
module.exports = df.orchestrator(function*(context) {
- const deadline = moment.utc(context.df.currentUtcDateTime).add(30, "s");
+ const deadline = DateTime.fromJSDate(context.df.currentUtcDateTime, {zone: 'utc'}).plus({ seconds: 30 });
const activityTask = context.df.callActivity("GetQuote");
- const timeoutTask = context.df.createTimer(deadline.toDate());
+ const timeoutTask = context.df.createTimer(deadline.toJSDate());
const winner = yield context.df.Task.any([activityTask, timeoutTask]); if (winner === activityTask) {
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-http-webhook-trigger.md
You can programmatically access the `invoke_URL_template` by using the Azure Res
## Working with client identities
-If your function app is using [App Service Authentication / Authorization](../app-service/overview-authentication-authorization.md), you can view information about authenticated clients from your code. This information is available as [request headers injected by the platform](../app-service/app-service-authentication-how-to.md#access-user-claims).
+If your function app is using [App Service Authentication / Authorization](../app-service/overview-authentication-authorization.md), you can view information about authenticated clients from your code. This information is available as [request headers injected by the platform](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
You can also read this information from binding data. This capability is only available to the Functions runtime in 2.x and higher. It is also currently only available for .NET languages.
public static void Run(JObject input, ClaimsPrincipal principal, ILogger log)
# [Java](#tab/java)
-The authenticated user is available via [HTTP Headers](../app-service/app-service-authentication-how-to.md#access-user-claims).
+The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
# [JavaScript](#tab/javascript)
-The authenticated user is available via [HTTP Headers](../app-service/app-service-authentication-how-to.md#access-user-claims).
+The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
# [PowerShell](#tab/powershell)
-The authenticated user is available via [HTTP Headers](../app-service/app-service-authentication-how-to.md#access-user-claims).
+The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
# [Python](#tab/python)
-The authenticated user is available via [HTTP Headers](../app-service/app-service-authentication-how-to.md#access-user-claims).
+The authenticated user is available via [HTTP Headers](../app-service/configure-authentication-user-identities.md#access-user-claims-in-app-code).
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-host-json.md
An array of one or more names of files that are monitored for changes that requi
## Override host.json values
-There may be instances where you wish to configure or modify specific settings in a host.json file for a specific environment, without changing the host.json file itself. You can override specific host.json values be creating an equivalent value as an application setting. When the runtime finds an application setting in the format `AzureFunctionsJobHost__path__to__setting`, it overrides the equivalent host.json setting located at `path.to.setting` in the JSON. When expressed as an application setting, the dot (`.`) used to indicate JSON hierarchy is replaced by a double underscore (`__`).
+There may be instances where you wish to configure or modify specific settings in a host.json file for a specific environment, without changing the host.json file itself. You can override specific host.json values by creating an equivalent value as an application setting. When the runtime finds an application setting in the format `AzureFunctionsJobHost__path__to__setting`, it overrides the equivalent host.json setting located at `path.to.setting` in the JSON. When expressed as an application setting, the dot (`.`) used to indicate JSON hierarchy is replaced by a double underscore (`__`).
For example, say that you wanted to disable Application Insight sampling when running locally. If you changed the local host.json file to disable Application Insights, this change might get pushed to your production app during deployment. The safer way to do this is to instead create an application setting as `"AzureFunctionsJobHost__logging__applicationInsights__samplingSettings__isEnabled":"false"` in the `local.settings.json` file. You can see this in the following `local.settings.json` file, which doesn't get published:
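
A minimal sketch of that file might look like the following (only the override value is shown; real projects typically contain other values too):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureFunctionsJobHost__logging__applicationInsights__samplingSettings__isEnabled": "false"
  }
}
```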
azure-functions Security Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/security-concepts.md
By default, keys are stored in a Blob storage container in the account provided
|Location |Setting | Value | Description | |||||
-|Different storage account | `AzureWebJobsSecretStorageSas` | `<BLOB_SAS_URL` | Stores keys in Blob storage of a second storage account, based on the provided SAS URL. Keys are encrypted before being stored using a secret unique to your function app. |
+|Different storage account | `AzureWebJobsSecretStorageSas` | `<BLOB_SAS_URL>` | Stores keys in Blob storage of a second storage account, based on the provided SAS URL. Keys are encrypted before being stored using a secret unique to your function app. |
|File system | `AzureWebJobsSecretStorageType` | `files` | Keys are persisted on the file system, encrypted before storage using a secret unique to your function app. | |Azure Key Vault | `AzureWebJobsSecretStorageType`<br/>`AzureWebJobsSecretStorageKeyVaultName` | `keyvault`<br/>`<VAULT_NAME>` | The vault must have an access policy corresponding to the system-assigned managed identity of the hosting resource. The access policy should grant the identity the following secret permissions: `Get`,`Set`, `List`, and `Delete`. <br/>When running locally, the developer identity is used, and settings must be in the [local.settings.json file](functions-run-local.md#local-settings-file). | |Kubernetes Secrets |`AzureWebJobsSecretStorageType`<br/>`AzureWebJobsKubernetesSecretName` (optional) | `kubernetes`<br/>`<SECRETS_RESOURCE>` | Supported only when running the Functions runtime in Kubernetes. When `AzureWebJobsKubernetesSecretName` isn't set, the repository is considered read-only. In this case, the values must be generated before deployment. The Azure Functions Core Tools generates the values automatically when deploying to Kubernetes.|
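
For example, to move key storage to Key Vault, you can set the two relevant application settings with the Azure CLI (a sketch; the app, group, and vault names are placeholders), then grant the app's system-assigned managed identity the secret permissions listed above:

```azurecli
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings AzureWebJobsSecretStorageType=keyvault AzureWebJobsSecretStorageKeyVaultName=<vault-name>
```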
Restricting network access to your function app lets you control who can access
### Set access restrictions
-Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more, see [Azure App Service Access Restrictions #](../app-service/app-service-ip-restrictions.md?toc=%2fazure%2fazure-functions%2ftoc.json).
+Access restrictions allow you to define lists of allow/deny rules to control traffic to your app. Rules are evaluated in priority order. If there are no rules defined, then your app will accept traffic from any address. To learn more, see [Azure App Service Access Restrictions](../app-service/app-service-ip-restrictions.md?toc=%2fazure%2fazure-functions%2ftoc.json).
### Private site access
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
ms.devlang: na
na Previously updated : 04/20/2021 Last updated : 06/25/2021 # Compare Azure Government and global Azure
-Microsoft Azure Government uses same underlying technologies as global Azure, which includes the core components of [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/). Both Azure and Azure Government have the same comprehensive security controls in place, as well as the same Microsoft commitment on the safeguarding of customer data. Whereas both cloud environments are assessed and authorized at the FedRAMP High impact level, Azure Government provides an additional layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening). These commitments may be of interest to customers using the cloud to store or process data subject to US export control regulations.
+Microsoft Azure Government uses the same underlying technologies as global Azure, which includes the core components of [Infrastructure-as-a-Service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas/), [Platform-as-a-Service (PaaS)](https://azure.microsoft.com/overview/what-is-paas/), and [Software-as-a-Service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/). Both Azure and Azure Government have the same comprehensive security controls in place and the same Microsoft commitment on the safeguarding of customer data. Whereas both cloud environments are assessed and authorized at the FedRAMP High impact level, Azure Government provides an extra layer of protection to customers through contractual commitments regarding storage of customer data in the United States and limiting potential access to systems processing customer data to [screened US persons](./documentation-government-plan-security.md#screening). These commitments may be of interest to customers using the cloud to store or process data subject to US export control regulations.
## Export control implications
-Customers are responsible for designing and deploying their applications to meet [US export control requirements](./documentation-government-overview-itar.md) such as the requirements prescribed in the EAR, ITAR, and DoE 10 CFR Part 810. In doing so, customers should not include sensitive or restricted information in Azure resource names, as explained in [Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).
+You are responsible for designing and deploying your applications to meet [US export control requirements](./documentation-government-overview-itar.md) such as the requirements prescribed in the EAR, ITAR, and DoE 10 CFR Part 810. In doing so, you should not include sensitive or restricted information in Azure resource names, as explained in [Considerations for naming Azure resources](./documentation-government-concept-naming-resources.md).
## Guidance for developers
-Azure Government services operate the same way as the corresponding services in global Azure, which is why most of the existing online Azure documentation applies equally well to Azure Government. However, there are some key differences that developers working on applications hosted in Azure Government must be aware of. For detailed information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government and once you connect you will mostly have the same experience as in global Azure. Table below lists API endpoints in Azure vs. Azure Government for accessing and managing various services.
+Azure Government services operate the same way as the corresponding services in global Azure, which is why most of the existing online Azure documentation applies equally well to Azure Government. However, there are some key differences that developers working on applications hosted in Azure Government must be aware of. For more information, see [Guidance for developers](./documentation-government-developer-guide.md). As a developer, you must know how to connect to Azure Government, and once you connect, you will mostly have the same experience as in global Azure.
+
+You can use the Azure CLI or PowerShell to obtain Azure Government endpoints for services you provisioned:
+
+- Use **Azure CLI** to run the [az cloud show](/cli/azure/cloud#az_cloud_show) command and provide `AzureUSGovernment` as the name of the target cloud environment. For example,
+
+ ```azurecli
+ az cloud show --name AzureUSGovernment
+ ```
+
+   should list the endpoints for Azure Government, which differ from those in global Azure.
+
+- Use a **PowerShell** cmdlet such as [Get-AzureEnvironment](/powershell/module/servicemanagement/azure.service/get-azureenvironment) (or [Get-AzureRmEnvironment](/powershell/module/azurerm.profile/get-azurermenvironment)) to get endpoints and metadata for an instance of an Azure service. For example,
+
+ ```powershell
+ Get-AzureEnvironment -Name AzureUSGovernment
+ ```
+
+   should return the properties for Azure Government. These cmdlets read environments from your subscription data file.
+
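+Once you know the target cloud, you can point your tooling at it. For example, with the Azure CLI you can switch clouds before signing in:
+
+```azurecli
+az cloud set --name AzureUSGovernment
+az login
+```
+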
+The table below lists API endpoints in Azure vs. Azure Government for accessing and managing some of the more common services. If you provisioned a service that isn't listed, see the Azure CLI and PowerShell examples above for suggestions on how to obtain the corresponding Azure Government endpoint.
+
+<br/>
|Service category|Service name|Azure Public|Azure Government|Notes| |--|--|-|-|-|
Azure Government services operate the same way as the corresponding services in
||Queue|\*.queue.core.windows.net|\*.queue.core.usgovcloudapi.net|| ||Table|\*.table.core.windows.net|\*.table.core.usgovcloudapi.net|| ||File|\*.file.core.windows.net|\*.file.core.usgovcloudapi.net||
-|**Web**|API Management Gateway|\*.azure-api.net|\*.azure-api.us||
+|**Web**|API Management|management.azure.com|management.usgovcloudapi.net||
+||API Management Gateway|\*.azure-api.net|\*.azure-api.us||
||API Management Portal|\*.portal.azure-api.net|\*.portal.azure-api.us|| ||API Management management|\*.management.azure-api.net|\*.management.azure-api.us|| ||App Configuration|\*.azconfig.io|\*.azconfig.azure.us|Endpoint|
Azure Government services operate the same way as the corresponding services in
## Service availability
-Microsoft's goal is to enable 100% parity in service availability between Azure and Azure Government. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. If a service is available in Azure Government, that fact is not reiterated in the rest of this article. Instead, customers are encouraged to review [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia) for the latest, up-to-date information on service availability.
+Microsoft's goal for Azure Government is to match service availability in Azure. For service availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia). Services available in Azure Government are listed by category and whether they are Generally Available or available through Preview. If a service is available in Azure Government, that fact is not reiterated in the rest of this article. Instead, you are encouraged to review [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia) for the latest, up-to-date information on service availability.
-In general, service availability in Azure Government implies that all corresponding service features are available to customers. Variations to this approach and other applicable limitations are tracked and explained in this article based on the main service categories outlined in the [online directory of Azure services](https://azure.microsoft.com/services/). Additional considerations for service deployment and usage in Azure Government are also provided.
+In general, service availability in Azure Government implies that all corresponding service features are available to you. Variations to this approach and other applicable limitations are tracked and explained in this article based on the main service categories outlined in the [online directory of Azure services](https://azure.microsoft.com/services/). Other considerations for service deployment and usage in Azure Government are also provided.
## AI + Machine Learning
The following Speech service **features are not currently available** in Azure G
- Custom Voice
-See details of supported locales by features in [Speech service supported regions](../cognitive-services/speech-service/regions.md). For additional information including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/Speech-Service/sovereign-clouds.md).
+See details of supported locales by features in [Speech service supported regions](../cognitive-services/speech-service/regions.md). For more information including API endpoints, see [Speech service in sovereign clouds](../cognitive-services/Speech-Service/sovereign-clouds.md).
### [Translator](../cognitive-services/translator/translator-info-overview.md)
The following Functions **features are not currently available** in Azure Govern
When connecting your Functions app to Application Insights in Azure Government, make sure you use [`APPLICATIONINSIGHTS_CONNECTION_STRING`](../azure-functions/functions-app-settings.md#applicationinsights_connection_string), which lets you customize the Application Insights endpoint.
+## Containers
+
+This section outlines variations and considerations when using Container services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=openshift,app-service-linux,container-registry,container-instances,kubernetes-service&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
+
+### [Azure Kubernetes Service](../aks/intro-kubernetes.md)
+
+The following Azure Kubernetes Service **features are not currently available** in Azure Government:
+
+- [Customize node configuration](../aks/custom-node-configuration.md) for Azure Kubernetes Service node pools
+

## Databases

This section outlines variations and considerations when using Databases services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir,data-factory,sql-server-stretch-database,redis-cache,database-migration,synapse-analytics,postgresql,mariadb,mysql,sql-database,cosmos-db&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
The following Azure Database for PostgreSQL **features are not currently availab
- Hyperscale (Citus) and Flexible server deployment options - The following features of the Single server deployment option - Advanced Threat Protection
+ - Backup with long-term retention
+
+### [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md)
+
+The following Azure SQL Managed Instance **features are not currently available** in Azure Government:
+
+- Long-term retention
## Developer Tools
This section describes the supplemental configuration that is required to use Ap
**Enable Application Insights for [ASP.NET](#web) & [ASP.NET Core](#web) with Visual Studio**
-Azure Government customers can enable Application Insights with a [codeless agent](../azure-monitor/app/azure-web-apps.md) for their Azure App Services hosted applications or via the traditional **Add Applications Insights Telemetry** button in Visual Studio, which requires a small manual workaround. Customers experiencing the associated issue may see error messages like "There is no Azure subscription associated with this account" or "The selected subscription does not support Application Insights" even though the `microsoft.insights` resource provider has a status of registered for the subscription. To mitigate this issue, you must perform the following steps:
+In Azure Government, you can enable Application Insights with a [codeless agent](../azure-monitor/app/azure-web-apps.md) for your Azure App Services hosted applications or via the traditional **Add Application Insights Telemetry** button in Visual Studio, which requires a small manual workaround. If you are experiencing the associated issue, you may see error messages like "There is no Azure subscription associated with this account" or "The selected subscription does not support Application Insights" even though the `microsoft.insights` resource provider has a status of registered for the subscription. To mitigate this issue, you must perform the following steps:
1. Switch Visual Studio to [target the Azure Government cloud](./documentation-government-welcome.md). 2. Create (or if already existing, set) the User Environment variable for `AzureGraphApiVersion` as follows:
Azure Government customers can enable Application Insights with a [codeless agen
- Variable name: `AzureGraphApiVersion` - Variable value: `2014-04-01`
- To create a User Environment variable go to **Control Panel > System > Advanced system settings > Advanced > Environment Variables**.
+ To create a User Environment variable, go to **Control Panel > System > Advanced system settings > Advanced > Environment Variables**.
3. Make the appropriate Application Insights SDK endpoint modifications for either [ASP.NET](#web) or [ASP.NET Core](#web) depending on your project type.
-**Snapshot Debugger** is now available for Azure Government customers. To use Snapshot Debugger the only additional prerequisite is to ensure that you are using [Snapshot Collector version 1.3.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.5-pre-1906.403) or later. Then simply follow the standard [Snapshot Debugger documentation](../azure-monitor/app/snapshot-debugger.md).
+**Snapshot Debugger** is now available for Azure Government customers. To use Snapshot Debugger, the only other prerequisite is to ensure that you are using [Snapshot Collector version 1.3.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.3.5-pre-1906.403) or later. Then follow the standard [Snapshot Debugger documentation](../azure-monitor/app/snapshot-debugger.md).
**SDK endpoint modifications** - In order to send data from Application Insights to the Azure Government region, you will need to modify the default endpoint addresses that are used by the Application Insights SDKs. Each SDK requires slightly different modifications, as described in [Application Insights overriding default endpoints](../azure-monitor/app/custom-endpoints.md).
You need to open some **outgoing ports** in your server's firewall to allow the
|-||-|--| |Telemetry|dc.applicationinsights.us|23.97.4.113|443|
+### [Azure Advisor](../advisor/advisor-overview.md)
+
+The following Azure Advisor recommendation **features are not currently available** in Azure Government:
+
+- High Availability
+ - Configure your VPN gateway to active-active for connection resilience
+ - Create Azure Service Health alerts to be notified when Azure issues affect you
+ - Configure Traffic Manager endpoints for resiliency
+ - Use soft delete for your Azure Storage Account
+- Performance
+ - Improve App Service performance and reliability
+ - Reduce DNS time to live on your Traffic Manager profile to fail over to healthy endpoints faster
+ - Improve Azure Synapse Analytics performance
+ - Use Premium Storage
+ - Migrate your Storage Account to Azure Resource Manager
+- Cost
+  - Buy reserved virtual machine instances to save money over pay-as-you-go costs
+ - Eliminate unprovisioned ExpressRoute circuits
+ - Delete or reconfigure idle virtual network gateways
+
+The calculation for recommending that you should right-size or shut down underutilized virtual machines in Azure Government is as follows:
+
+- Advisor monitors your virtual machine usage for seven days and identifies low-utilization virtual machines.
+- Virtual machines are considered low utilization if their CPU utilization is 5% or less and their network utilization is less than 2%, or if the current workload can be accommodated by a smaller virtual machine size.
+
+If you want to be more aggressive at identifying underutilized virtual machines, you can adjust the CPU utilization rule on a per subscription basis.
+
+### [Azure Cost Management and Billing](../cost-management-billing/cost-management-billing-overview.md)
+
+The following Azure Cost Management + Billing **features are not currently available** in Azure Government:
+
+- Cost Management + Billing for cloud solution providers (CSPs)
+ ### [Azure Lighthouse](../lighthouse/overview.md) The following Azure Lighthouse **features are not currently available** in Azure Government:+ - Managed Service offers published to Azure Marketplace ### [Azure Monitor](../azure-monitor/logs/data-platform-logs.md)
The following Azure Monitor **features behave differently** in Azure Government:
- Can I switch between Azure and Azure Government workspaces from the Operations Management Suite portal? - No. The portals for Azure and Azure Government are separate and do not share information.
-### [Azure Advisor](../advisor/advisor-overview.md)
-
-The following Azure Advisor recommendation **features are not currently available** in Azure Government:
--- High Availability
- - Configure your VPN gateway to active-active for connection resilience
- - Create Azure Service Health alerts to be notified when Azure issues affect you
- - Configure Traffic Manager endpoints for resiliency
- - Use soft delete for your Azure Storage Account
-- Performance
- - Improve App Service performance and reliability
- - Reduce DNS time to live on your Traffic Manager profile to fail over to healthy endpoints faster
- - Improve Azure Synapse Analytics performance
- - Use Premium Storage
- - Migrate your Storage Account to Azure Resource Manager
-- Cost
- - Buy reserved virtual machines instances to save money over pay-as-you-go costs
- - Eliminate unprovisioned ExpressRoute circuits
- - Delete or reconfigure idle virtual network gateways
-
-The calculation for recommending that you should right-size or shut down underutilized virtual machines in Azure Government is as follows:
--- Advisor monitors your virtual machine usage for 7 days and identifies low-utilization virtual machines.-- Virtual machines are considered low utilization if their CPU utilization is 5% or less and their network utilization is less than 2%, or if the current workload can be accommodated by a smaller virtual machine size.-
-If you want to be more aggressive at identifying underutilized virtual machines, you can adjust the CPU utilization rule on a per subscription basis.
- ## Media
This section outlines variations and considerations when using Migration service
The following Azure Migrate **features are not currently available** in Azure Government: -- Dependency visualization functionality as Azure Migrate depends on Service Map for dependency visualization which is currently unavailable in Azure Government.
+- Dependency visualization functionality, because Azure Migrate depends on Service Map for dependency visualization, which is currently unavailable in Azure Government.
- You can only create assessments for Azure Government as target regions and using Azure Government offers.
This section outlines variations and considerations when using Networking servic
### [Azure ExpressRoute](../expressroute/index.yml)
-Azure ExpressRoute is used to create private connections between Azure Government datacenters and customer's on-premises infrastructure or a colocation facility. ExpressRoute connections do not go over the public Internet ΓÇö they offer optimized pathways (shortest hops, lowest latency, highest performance, etc.) and Azure Government geo-redundant regions.
+Azure ExpressRoute is used to create private connections between Azure Government datacenters and your on-premises infrastructure or a colocation facility. ExpressRoute connections do not go over the public Internet; they offer optimized pathways (shortest hops, lowest latency, highest performance, and so on) to Azure Government geo-redundant regions.
- By default, all Azure Government ExpressRoute connectivity is configured active-active redundant with support for bursting, and it delivers up to 10 G circuit capacity (smallest is 50 MB). - Microsoft owns and operates all fiber infrastructure between Azure Government regions and Azure Government ExpressRoute Meet-Me locations. - Azure Government ExpressRoute provides connectivity to Microsoft Azure, Microsoft 365, and Dynamics 365 cloud services.
-Aside from ExpressRoute, customers can also use an [IPSec protected VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) (site-to-site for a typical organization) to connect securely from their on-premises infrastructure to Azure Government. For network services to support Azure Government customer applications and solutions, it is strongly recommended that ExpressRoute (private connectivity) is implemented to connect to Azure Government. If VPN connections are used, the following should be considered:
+Aside from ExpressRoute, you can also use an [IPSec protected VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) (site-to-site for a typical organization) to connect securely from your on-premises infrastructure to Azure Government. For network services to support Azure Government customer applications and solutions, it is recommended that ExpressRoute (private connectivity) is implemented to connect to Azure Government. If you use VPN connections, you should consider the following recommendations:
-- Customers should contact their authorizing official/agency to determine whether private connectivity or other secure connection mechanism is required and to identify any additional restrictions to consider.
-- Customers should decide whether to mandate that the site-to-site VPN is routed through a private connectivity zone.
-- Customers should obtain either a Multi-Protocol Label Switching (MPLS) circuit or VPN with a licensed private connectivity access provider.
+- You should contact your authorizing official/agency to determine whether private connectivity or other secure connection mechanism is required and to identify any extra restrictions to consider.
+- You should decide whether to mandate that the site-to-site VPN is routed through a private connectivity zone.
+- You should obtain either a Multi-Protocol Label Switching (MPLS) circuit or VPN with a licensed private connectivity access provider.
-All customers who utilize a private connectivity architecture should validate that an appropriate implementation is established and maintained for the customer connection to the Gateway Network/Internet (GN/I) edge router demarcation point for Azure Government. Similarly, your organization must establish network connectivity between your on-premises environment and Gateway Network/Customer (GN/C) edge router demarcation point for Azure Government.
+If you utilize a private connectivity architecture, you should validate that an appropriate implementation is established and maintained for your connection to the Gateway Network/Internet (GN/I) edge router demarcation point for Azure Government. Similarly, your organization must establish network connectivity between your on-premises environment and Gateway Network/Customer (GN/C) edge router demarcation point for Azure Government.
-### BGP communities
+#### BGP communities
This section provides an overview of how BGP communities are used with ExpressRoute in Azure Government. Microsoft advertises routes in the public peering and Microsoft peering paths, with routes tagged with appropriate community values. The rationale for doing so and the details on community values are described below.
-If you are connecting to Microsoft through ExpressRoute at any one peering location within the Azure Government region, you will have access to all Microsoft cloud services across all regions within the government boundary. For example, if you connected to Microsoft in Washington D.C. through ExpressRoute, you would have access to all Microsoft cloud services hosted in Azure Government. [ExpressRoute overview](../expressroute/expressroute-introduction.md) provides details on locations and partners, as well as a list of peering locations for Azure Government.
+If you are connecting to Microsoft through ExpressRoute at any one peering location within the Azure Government region, you will have access to all Microsoft cloud services across all regions within the government boundary. For example, if you connected to Microsoft in Washington D.C. through ExpressRoute, you would have access to all Microsoft cloud services hosted in Azure Government. [ExpressRoute overview](../expressroute/expressroute-introduction.md) provides details on locations and partners and a list of peering locations for Azure Government.
-You can purchase more than one ExpressRoute circuit. Having multiple connections offers you significant benefits on high availability due to geo-redundancy. In cases where you have multiple ExpressRoute circuits, you will receive the same set of prefixes advertised from Microsoft on the public peering and Microsoft peering paths. This means you will have multiple paths from your network into Microsoft. This can potentially cause sub-optimal routing decisions to be made within your network. As a result, you may experience sub-optimal connectivity experiences to different services.
+You can purchase more than one ExpressRoute circuit. Having multiple connections offers you significant benefits on high availability due to geo-redundancy. In cases where you have multiple ExpressRoute circuits, you will receive the same set of prefixes advertised from Microsoft on the public peering and Microsoft peering paths. This arrangement means you will have multiple paths from your network into Microsoft, which can potentially cause suboptimal routing decisions to be made within your network. As a result, you may experience suboptimal connectivity experiences to different services.
-Microsoft tags prefixes advertised through public peering and Microsoft peering with appropriate BGP community values indicating the region the prefixes are hosted in. You can rely on the community values to make appropriate routing decisions to offer optimal routing to customers. For additional details, see [Optimize ExpressRoute routing](../expressroute/expressroute-optimize-routing.md).
+Microsoft tags prefixes advertised through public peering and Microsoft peering with appropriate BGP community values indicating the region the prefixes are hosted in. You can rely on the community values to make appropriate routing decisions to offer optimal routing to customers. For more information, see [Optimize ExpressRoute routing](../expressroute/expressroute-optimize-routing.md).
|Azure Government region|BGP community value|
|--|-|
In addition to the above, Microsoft also tags prefixes based on the service they
>[!NOTE]
>Microsoft does not honor any BGP community values that you set on the routes advertised to Microsoft.
+### [Azure Private Link](../private-link/private-link-overview.md)
+
+For Private Link services availability, see [Azure Private Link availability](../private-link/availability.md).
+
### [Traffic Manager](../traffic-manager/traffic-manager-overview.md)

Traffic Manager health checks can originate from certain IP addresses for Azure Government. Review the [IP addresses in the JSON file](https://azuretrafficmanagerdata.blob.core.windows.net/probes/azure-gov/probe-ip-ranges.json) to ensure that incoming connections from these IP addresses are allowed at the endpoints so that their health status can be checked.
This section outlines variations and considerations when using Security services in the Azure Government environment.
The following features have known limitations in Azure Government:
- Limitations with B2B Collaboration in supported Azure US Government tenants:
- - B2B Collaboration is available in most Azure US Government tenants created after June, 2019. Over time, more tenants will get access to this functionality. See [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../active-directory/external-identities/current-limitations.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant)
+ - B2B Collaboration is available in most Azure US Government tenants created after June 2019. Over time, more tenants will get access to this functionality. See [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../active-directory/external-identities/current-limitations.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant)
  - B2B collaboration is supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user will be unable to redeem the invitation.
  - B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
  - Microsoft 365 Groups are not supported for B2B users and can't be enabled.
- Some SQL tools such as SQL Server Management Studio (SSMS) require you to set the appropriate cloud parameter. In the tool's Azure service setup options, set the cloud parameter to Azure Government.
-- Limitations with multi-factor authentication:
+- Limitations with multifactor authentication:
- Hardware OATH tokens are not available in Azure Government.
- - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multi-factor authentication should and should not be required based off the user's current IP address.
+ - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based on the user's current IP address.
-- Limitations with Azure AD Join:
+- Limitations with Azure AD join:
  - Enterprise state roaming for Windows 10 devices is not available
- Limitations with Azure AD self-service password reset (SSPR):
The following Azure Security Center **features are not currently available** in
**Azure Security Center FAQ**
-For Azure Security Center FAQ, see [Azure Security Center frequently asked questions public documentation](../security-center/faq-general.yml). Additional FAQ for Azure Security Center in Azure Government are listed below.
+For Azure Security Center FAQ, see [Azure Security Center frequently asked questions public documentation](../security-center/faq-general.yml). More FAQs for Azure Security Center in Azure Government are listed below.
**What will customers be charged for Azure Security Center in Azure Government?**</br>
Azure Security Center's integrated cloud workload protection platform (CWPP), Azure Defender, brings advanced, intelligent protection of your Azure and hybrid resources and workloads. Azure Defender is free for the first 30 days. Should you choose to continue to use public preview or generally available features of Azure Defender beyond 30 days, we automatically start to charge for the service.

**Is Azure Security Center available for DoD customers?**</br>
-Azure Security Center is deployed in Azure Government regions but not in Azure Government for DoD regions. Azure resources created in DoD regions can still utilize Security Center capabilities. However, using it will result in Security Center collected data being moved out from DoD regions and stored in Azure Government regions. By default, all Security Center features which collect and store data are disabled for resources hosted in DoD regions. The type of data collected and stored varies depending on the selected feature. Customers who want to enable Azure Security Center features for DoD resources are advised to consider data separation and protection requirements before doing so.
+Azure Security Center is deployed in Azure Government regions but not in Azure Government for DoD regions. Azure resources created in DoD regions can still utilize Security Center capabilities. However, using it will result in Security Center collected data being moved out from DoD regions and stored in Azure Government regions. By default, all Security Center features that collect and store data are disabled for resources hosted in DoD regions. The type of data collected and stored varies depending on the selected feature. If you want to enable Azure Security Center features for DoD resources, you are advised to consider data separation and protection requirements before doing so.
### [Azure Sentinel](../sentinel/overview.md)
For feature variations and limitations, see [Cloud feature availability for US G
This section outlines variations and considerations when using Storage services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,backup,storage&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-iowa,usgov-texas,usgov-virginia).
+### [Azure Backup](../backup/backup-overview.md)
+
+The following Azure Backup **features are not currently available** in Azure Government:
+
+- Azure Disk Backup, as documented in [Azure Disk Backup support matrix](../backup/disk-backup-support-matrix.md).
+
+### [Azure managed disks](../virtual-machines/managed-disks-overview.md)
+
+The following Azure managed disks **features are not currently available** in Azure Government:
+
+- Zone-redundant storage (ZRS)
+
### [Azure Storage](../storage/index.yml)

For a Quickstart that will help you get started with Storage in Azure Government, see [Develop with Storage API on Azure Government](./documentation-government-get-started-connect-to-storage.md).
Azure relies on [paired regions](../best-practices-availability-paired-regions.m
|US Government|US Gov Arizona|US Gov Texas|
|US Government|US Gov Virginia|US Gov Texas|
-Table in Guidance for developers section shows URL endpoints for main Azure Storage services.
+The table in the [Guidance for developers](#guidance-for-developers) section shows URL endpoints for the main Azure Storage services.
> [!NOTE]
> All your scripts and code need to account for the appropriate endpoints. See [**Configure Azure Storage Connection Strings**](../storage/common/storage-configure-connection-string.md).
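
For example, a storage connection string for Azure Government uses the US Government endpoint suffix. The account name and key below are placeholders, not real values:

```
DefaultEndpointsProtocol=https;AccountName=<your-account-name>;AccountKey=<your-account-key>;EndpointSuffix=core.usgovcloudapi.net
```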
When you're deploying the **StorSimple** Manager service, use the [https://porta
With Import/Export jobs for US Gov Arizona or US Gov Texas, the mailing address is for US Gov Virginia. The data is loaded into selected storage accounts from the US Gov Virginia region.
-For DoD IL5 data, use a DoD region storage account to ensure that data is loaded directly into the DoD regions. For more information, see [Azure Import/Export IL5 isolation guidance](./documentation-government-impact-level-5.md#azure-importexport-service).
-
For all jobs, we recommend that you rotate your storage account keys after the job is complete to remove any access granted during the process. For more information, see [Manage storage account access keys](../storage/common/storage-account-keys-manage.md).
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-android-sdk.md
+
+ Title: Clustering point data in the Android SDK | Microsoft Azure Maps
+description: Learn how to cluster point data on maps. See how to use the Azure Maps Android SDK to cluster data, react to cluster mouse events, and display cluster aggregates.
++ Last updated : 03/23/2021++++
+zone_pivot_groups: azure-maps-android
++
+# Clustering point data in the Android SDK
+
+When visualizing many data points on the map, the points may overlap each other. The overlap may cause the map to become unreadable and difficult to use. Clustering point data is the process of combining point data that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. When you work with a large number of data points, use the clustering process to improve your user experience.
+
+</br>
+
+>[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Clustering-point-data-in-Azure-Maps/player?format=ny]
+
+## Prerequisites
+
+Be sure to complete the steps in the [Quickstart: Create an Android app](quick-android-map.md) document. Code blocks in this article can be inserted into the map's `onReady` event handler.
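+
+For reference, a minimal sketch of that handler, assuming the `mapControl` instance from the quickstart:
+
+``` java
+mapControl.onReady(map -> {
+    //Insert the code blocks from this article here.
+});
+```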
+
+## Enabling clustering on a data source
+
+Enable clustering in the `DataSource` class by setting the `cluster` option to `true`. Set `clusterRadius` to select nearby points and combine them into a cluster. The value of `clusterRadius` is in pixels. Use `clusterMaxZoom` to specify a zoom level at which to disable the clustering logic. Here is an example of how to enable clustering in a data source.
++
+``` java
+//Create a data source and enable clustering.
+DataSource source = new DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(45),
+
+ //The maximum zoom level in which clustering occurs.
+ //If you zoom in more than this, all points are rendered as symbols.
+ clusterMaxZoom(15)
+);
+```
+++
+```kotlin
+ //Create a data source and enable clustering.
+val source = DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(45),
+
+ //The maximum zoom level in which clustering occurs.
+ //If you zoom in more than this, all points are rendered as symbols.
+ clusterMaxZoom(15)
+)
+```
++
+> [!CAUTION]
+> Clustering only works with `Point` features. If your data source contains features of other geometry types, such as `LineString` or `Polygon`, an error will occur.
+
+> [!TIP]
+> If two data points are close together on the ground, it's possible the cluster will never break apart, no matter how close the user zooms in. To address this, you can set the `clusterMaxZoom` option to disable the clustering logic and simply display everything.
+
+The `DataSource` class also provides the following methods related to clustering.
+
+| Method | Return type | Description |
+|--|-|-|
+| `getClusterChildren(Feature clusterFeature)` | `FeatureCollection` | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters will be features with properties matching ClusteredProperties. |
+| `getClusterExpansionZoom(Feature clusterFeature)` | `int` | Calculates a zoom level at which the cluster will start expanding or break apart. |
+| `getClusterLeaves(Feature clusterFeature, long limit, long offset)` | `FeatureCollection` | Retrieves all points in a cluster. Set the `limit` to return a subset of the points, and use the `offset` to page through the points. |
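+
+As an example, the following sketch uses `getClusterChildren` in a click handler; it assumes a `source` data source and a `clusterBubbleLayer` like the ones created in the samples below:
+
+``` java
+map.events.add((OnFeatureClick) (features) -> {
+    if (features.size() > 0) {
+        //Get the clicked cluster from the event.
+        Feature cluster = features.get(0);
+
+        //Retrieve the shapes and subclusters the cluster breaks into at the next zoom level.
+        FeatureCollection children = source.getClusterChildren(cluster);
+
+        //Log the number of children (android.util.Log).
+        Log.d("Clustering", "Child count: " + children.features().size());
+    }
+    return true;
+}, clusterBubbleLayer);
+```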
+
+## Display clusters using a bubble layer
+
+A bubble layer is a great way to render clustered points. Use expressions to scale the radius and change the color based on the number of points in the cluster. If you display clusters using a bubble layer, then you should use a separate layer to render unclustered data points.
+
+To display the size of the cluster on top of the bubble, use a symbol layer with text, and don't use an icon.
+
+The following code displays clustered points using a bubble layer, and the number of points in each cluster using a symbol layer. A second symbol layer is used to display individual points that are not within a cluster.
++
+``` java
+//Create a data source and add it to the map.
+DataSource source = new DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(45),
+
+ //The maximum zoom level in which clustering occurs.
+ //If you zoom in more than this, all points are rendered as symbols.
+ clusterMaxZoom(15)
+);
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson");
+
+//Add data source to the map.
+map.sources.add(source);
+
+//Create a bubble layer for rendering clustered data points.
+map.layers.add(new BubbleLayer(source,
+ //Scale the size of the clustered bubble based on the number of points in the cluster.
+ bubbleRadius(
+ step(
+ get("point_count"),
+ 20, //Default of 20 pixel radius.
+ stop(100, 30), //If point_count >= 100, radius is 30 pixels.
+ stop(750, 40) //If point_count >= 750, radius is 40 pixels.
+ )
+ ),
+
+    //Change the color of the cluster based on the value of the point_count property of the cluster.
+ bubbleColor(
+ step(
+ toNumber(get("point_count")),
+ color(Color.GREEN), //Default to lime green.
+ stop(100, color(Color.YELLOW)), //If the point_count >= 100, color is yellow.
+            stop(750, color(Color.RED)) //If the point_count >= 750, color is red.
+ )
+ ),
+
+ bubbleStrokeWidth(0f),
+
+    //Only render data points that have a point_count property, which clusters do.
+ BubbleLayerOptions.filter(has("point_count"))
+));
+
+//Create a symbol layer to render the count of locations in a cluster.
+map.layers.add(new SymbolLayer(source,
+ iconImage("none"), //Hide the icon image.
+ textField(get("point_count")), //Display the point count as text.
+ textOffset(new Float[]{ 0f, 0.4f }),
+
+ //Allow clustered points in this layer.
+ SymbolLayerOptions.filter(has("point_count"))
+));
+
+//Create a layer to render the individual locations.
+map.layers.add(new SymbolLayer(source,
+ //Filter out clustered points from this layer.
+ SymbolLayerOptions.filter(not(has("point_count")))
+));
+```
+++
+```kotlin
+//Create a data source and add it to the map.
+val source = DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(45),
+
+ //The maximum zoom level in which clustering occurs.
+ //If you zoom in more than this, all points are rendered as symbols.
+ clusterMaxZoom(15)
+)
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson")
+
+//Add data source to the map.
+map.sources.add(source)
+
+//Create a bubble layer for rendering clustered data points.
+map.layers.add(
+ BubbleLayer(
+ source,
+
+ //Scale the size of the clustered bubble based on the number of points in the cluster.
+ bubbleRadius(
+ step(
+ get("point_count"),
+ 20, //Default of 20 pixel radius.
+ stop(100, 30), //If point_count >= 100, radius is 30 pixels.
+ stop(750, 40) //If point_count >= 750, radius is 40 pixels.
+ )
+ ),
+
+        //Change the color of the cluster based on the value of the point_count property of the cluster.
+ bubbleColor(
+ step(
+ toNumber(get("point_count")),
+ color(Color.GREEN), //Default to lime green.
+ stop(100, color(Color.YELLOW)), //If the point_count >= 100, color is yellow.
+                stop(750, color(Color.RED)) //If the point_count >= 750, color is red.
+ )
+ ),
+ bubbleStrokeWidth(0f),
+
+        //Only render data points that have a point_count property, which clusters do.
+ BubbleLayerOptions.filter(has("point_count"))
+ )
+)
+
+//Create a symbol layer to render the count of locations in a cluster.
+map.layers.add(
+ SymbolLayer(
+ source,
+ iconImage("none"), //Hide the icon image.
+ textField(get("point_count")), //Display the point count as text.
+ textOffset(arrayOf(0f, 0.4f)),
+
+ //Allow clustered points in this layer.
+ SymbolLayerOptions.filter(has("point_count"))
+ )
+)
+
+//Create a layer to render the individual locations.
+map.layers.add(
+ SymbolLayer(
+ source,
+
+ //Filter out clustered points from this layer.
+ SymbolLayerOptions.filter(not(has("point_count")))
+ )
+)
+```
++
+The following image shows the above code displaying clustered point features in a bubble layer, scaled and colored based on the number of points in each cluster. Unclustered points are rendered using a symbol layer.
+
+![Map clustered locations breaking apart while zooming the map in](media/clustering-point-data-android-sdk/android-cluster-bubble-layer.gif)
+
+## Display clusters using a symbol layer
+
+When visualizing data points, the symbol layer automatically hides symbols that overlap each other to ensure a cleaner user interface. This default behavior might be undesirable if you want to show the data point density on the map. However, these settings can be changed. To display all symbols, set the `iconAllowOverlap` option of the symbol layer to `true`.
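+
+For example, a minimal sketch of a symbol layer that renders every symbol, assuming a `source` data source as in the other samples in this article:
+
+``` java
+map.layers.add(new SymbolLayer(source,
+    //Render all symbols, even when they overlap each other.
+    iconAllowOverlap(true)
+));
+```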
+
+Use clustering to show the data point density while keeping a clean user interface. The sample below shows you how to add custom symbols and represent clusters and individual data points using the symbol layer.
++
+``` java
+//Load all the custom image icons into the map resources.
+map.images.add("earthquake_icon", R.drawable.earthquake_icon);
+map.images.add("warning_triangle_icon", R.drawable.warning_triangle_icon);
+
+//Create a data source and add it to the map.
+DataSource source = new DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true)
+);
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson");
+
+//Add data source to the map.
+map.sources.add(source);
+
+//Create a symbol layer to render the clusters.
+map.layers.add(new SymbolLayer(source,
+ iconImage("warning_triangle_icon"),
+ textField(get("point_count")),
+ textOffset(new Float[]{ 0f, -0.4f }),
+
+ //Allow clustered points in this layer.
+ filter(has("point_count"))
+));
+
+//Create a layer to render the individual locations.
+map.layers.add(new SymbolLayer(source,
+ iconImage("earthquake_icon"),
+
+ //Filter out clustered points from this layer.
+ filter(not(has("point_count")))
+));
+```
+++
+```kotlin
+//Load all the custom image icons into the map resources.
+map.images.add("earthquake_icon", R.drawable.earthquake_icon)
+map.images.add("warning_triangle_icon", R.drawable.warning_triangle_icon)
+
+//Create a data source and add it to the map.
+val source = DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true)
+)
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson")
+
+//Add data source to the map.
+map.sources.add(source)
+
+//Create a symbol layer to render the clusters.
+map.layers.add(
+ SymbolLayer(
+ source,
+ iconImage("warning_triangle_icon"),
+ textField(get("point_count")),
+ textOffset(arrayOf(0f, -0.4f)),
+
+ //Allow clustered points in this layer.
+ filter(has("point_count"))
+ )
+)
+
+//Create a layer to render the individual locations.
+map.layers.add(
+ SymbolLayer(
+ source,
+ iconImage("earthquake_icon"),
+
+ //Filter out clustered points from this layer.
+ filter(not(has("point_count")))
+ )
+)
+```
++
+For this sample, the following images are loaded into the drawable folder of the app.
+
+| ![Earthquake icon image](media/clustering-point-data-android-sdk/earthquake-icon.png) | ![Warning triangle icon image](media/clustering-point-data-android-sdk/warning-triangle-icon.png) |
+|:--:|:--:|
+| earthquake_icon.png | warning_triangle_icon.png |
+
+The following image shows the above code rendering clustered and unclustered point features using custom icons.
+
+![Map of clustered points rendered using a symbol layer](media/clustering-point-data-android-sdk/android-cluster-symbol-layer.gif)
+
+## Clustering and the heat maps layer
+
+Heat maps are a great way to display the density of data on the map. This visualization method can handle a large number of data points on its own. If the data points are clustered and the cluster size is used as the weight of the heat map, then the heat map can handle even more data. To do this, set the `heatmapWeight` option of the heat map layer to `get("point_count")`. When the cluster radius is small, the heat map will look nearly identical to a heat map using the unclustered data points, but it will perform much better. However, the smaller the cluster radius, the more accurate the heat map will be, but with fewer performance benefits.
++
+``` java
+//Create a data source and add it to the map.
+DataSource source = new DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(10)
+);
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson");
+
+//Add data source to the map.
+map.sources.add(source);
+
+//Create a heat map and add it to the map.
+map.layers.add(new HeatMapLayer(source,
+ //Set the weight to the point_count property of the data points.
+ heatmapWeight(get("point_count")),
+
+ //Optionally adjust the radius of each heat point.
+ heatmapRadius(20f)
+), "labels");
+```
+++
+```kotlin
+//Create a data source and add it to the map.
+val source = DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(10)
+)
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson")
+
+//Add data source to the map.
+map.sources.add(source)
+
+//Create a heat map and add it to the map.
+map.layers.add(
+ HeatMapLayer(
+ source,
+
+ //Set the weight to the point_count property of the data points.
+ heatmapWeight(get("point_count")),
+
+ //Optionally adjust the radius of each heat point.
+ heatmapRadius(20f)
+ ), "labels"
+)
+```
++
+The following image shows the above code displaying a heat map that is optimized by using clustered point features and the cluster count as the weight of the heat map.
+
+![Map of a heatmap optimized using clustered points as a weight](media/clustering-point-data-android-sdk/android-cluster-heat-map.gif)
+
+## Mouse events on clustered data points
+
+When mouse events occur on a layer that contains clustered data points, the clustered data point is returned to the event as a GeoJSON point feature object. This point feature will have the following properties:
+
+| Property name | Type | Description |
+||||
+| `cluster` | boolean | Indicates if feature represents a cluster. |
+| `point_count` | number | The number of points the cluster contains. |
+| `point_count_abbreviated` | string | A string that abbreviates the `point_count` value if it's long (for example, 4,000 becomes 4K). |
+
+This example takes a bubble layer that renders cluster points and adds a click event. When the click event triggers, the code calculates and zooms the map to the next zoom level, at which the cluster breaks apart. This functionality is implemented using the `getClusterExpansionZoom` method of the `DataSource` class and the `cluster_id` property of the clicked clustered data point.
++
+``` java
+//Create a data source and add it to the map.
+DataSource source = new DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(45),
+
+ //The maximum zoom level in which clustering occurs.
+ //If you zoom in more than this, all points are rendered as symbols.
+ clusterMaxZoom(15)
+);
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson");
+
+//Add data source to the map.
+map.sources.add(source);
+
+//Create a bubble layer for rendering clustered data points.
+BubbleLayer clusterBubbleLayer = new BubbleLayer(source,
+ //Scale the size of the clustered bubble based on the number of points in the cluster.
+ bubbleRadius(
+ step(
+ get("point_count"),
+ 20f, //Default of 20 pixel radius.
+ stop(100, 30), //If point_count >= 100, radius is 30 pixels.
+ stop(750, 40) //If point_count >= 750, radius is 40 pixels.
+ )
+ ),
+
+    //Change the color of the cluster based on the value of the point_count property of the cluster.
+ bubbleColor(
+ step(
+ get("point_count"),
+ color(Color.GREEN), //Default to green.
+ stop(100, color(Color.YELLOW)), //If the point_count >= 100, color is yellow.
+            stop(750, color(Color.RED)) //If the point_count >= 750, color is red.
+ )
+ ),
+
+ bubbleStrokeWidth(0f),
+
+    //Only render data points that have a point_count property, which clusters do.
+ BubbleLayerOptions.filter(has("point_count"))
+);
+
+//Add the clusterBubbleLayer and two additional layers to the map.
+map.layers.add(clusterBubbleLayer);
+
+//Create a symbol layer to render the count of locations in a cluster.
+map.layers.add(new SymbolLayer(source,
+ //Hide the icon image.
+ iconImage("none"),
+
+ //Display the 'point_count_abbreviated' property value.
+ textField(get("point_count_abbreviated")),
+
+ //Offset the text position so that it's centered nicely.
+ textOffset(new Float[] { 0f, 0.4f }),
+
+    //Only render data points that have a point_count property, which clusters do.
+ SymbolLayerOptions.filter(has("point_count"))
+));
+
+//Create a layer to render the individual locations.
+map.layers.add(new SymbolLayer(source,
+ //Filter out clustered points from this layer.
+ SymbolLayerOptions.filter(not(has("point_count")))
+));
+
+//Add a click event to the cluster layer so we can zoom in when a user clicks a cluster.
+map.events.add((OnFeatureClick) (features) -> {
+ if(features.size() > 0) {
+ //Get the clustered point from the event.
+ Feature cluster = features.get(0);
+
+ //Get the cluster expansion zoom level. This is the zoom level at which the cluster starts to break apart.
+ int expansionZoom = source.getClusterExpansionZoom(cluster);
+
+ //Update the map camera to be centered over the cluster.
+ map.setCamera(
+ //Center the map over the cluster points location.
+ center((Point)cluster.geometry()),
+
+ //Zoom to the clusters expansion zoom level.
+ zoom(expansionZoom),
+
+ //Animate the movement of the camera to the new position.
+ animationType(AnimationType.EASE),
+ animationDuration(200)
+ );
+ }
+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ return true;
+}, clusterBubbleLayer);
+```
+++
+```kotlin
+//Create a data source and add it to the map.
+val source = DataSource(
+    //Tell the data source to cluster point data.
+    cluster(true),
+
+    //The radius in pixels to cluster points together.
+    clusterRadius(45),
+
+    //The maximum zoom level in which clustering occurs.
+    //If you zoom in more than this, all points are rendered as symbols.
+    clusterMaxZoom(15)
+)
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson")
+
+//Add data source to the map.
+map.sources.add(source)
+
+//Create a bubble layer for rendering clustered data points.
+val clusterBubbleLayer = BubbleLayer(
+ source,
+
+ //Scale the size of the clustered bubble based on the number of points in the cluster.
+ bubbleRadius(
+ step(
+ get("point_count"),
+ 20f, //Default of 20 pixel radius.
+ stop(100, 30), //If point_count >= 100, radius is 30 pixels.
+ stop(750, 40) //If point_count >= 750, radius is 40 pixels.
+ )
+ ),
+
+    //Change the color of the cluster based on the value of the point_count property of the cluster.
+ bubbleColor(
+ step(
+ get("point_count"),
+ color(Color.GREEN), //Default to green.
+ stop(
+ 100,
+ color(Color.YELLOW)
+ ), //If the point_count >= 100, color is yellow.
+            stop(750, color(Color.RED)) //If the point_count >= 750, color is red.
+ )
+ ),
+
+ bubbleStrokeWidth(0f),
+
+    //Only render data points that have a point_count property, which clusters do.
+ BubbleLayerOptions.filter(has("point_count"))
+)
+
+//Add the clusterBubbleLayer and two additional layers to the map.
+map.layers.add(clusterBubbleLayer)
+
+//Create a symbol layer to render the count of locations in a cluster.
+map.layers.add(
+ SymbolLayer(
+ source,
+
+ //Hide the icon image.
+ iconImage("none"),
+
+ //Display the 'point_count_abbreviated' property value.
+ textField(get("point_count_abbreviated")),
+
+ //Offset the text position so that it's centered nicely.
+ textOffset(
+ arrayOf(
+ 0f,
+ 0.4f
+ )
+ ),
+
+        //Only render data points that have a point_count property, which clusters do.
+ SymbolLayerOptions.filter(has("point_count"))
+ )
+)
+
+//Create a layer to render the individual locations.
+map.layers.add(
+ SymbolLayer(
+ source,
+
+ //Filter out clustered points from this layer.
+ SymbolLayerOptions.filter(not(has("point_count")))
+ )
+)
+
+//Add a click event to the cluster layer so we can zoom in when a user clicks a cluster.
+map.events.add(OnFeatureClick { features: List<Feature> ->
+    if (features.size > 0) {
+        //Get the clustered point from the event.
+        val cluster: Feature = features[0]
+
+ //Get the cluster expansion zoom level. This is the zoom level at which the cluster starts to break apart.
+ val expansionZoom: Int = source.getClusterExpansionZoom(cluster)
+
+ //Update the map camera to be centered over the cluster.
+ map.setCamera(
+
+ //Center the map over the cluster points location.
+ center(cluster.geometry() as Point?),
+
+ //Zoom to the clusters expansion zoom level.
+ zoom(expansionZoom),
+
+ //Animate the movement of the camera to the new position.
+ animationType(AnimationType.EASE),
+ animationDuration(200)
+ )
+ }
+ true
+}, clusterBubbleLayer)
+```
++
+The following image shows the above code displaying clustered points on a map. When a cluster is clicked, the map zooms in to the next zoom level, at which the cluster starts to break apart and expand.
+
+![Map of clustered features zooming in and breaking apart when clicked](media/clustering-point-data-android-sdk/android-cluster-expansion.gif)
+
+## Display cluster area
+
+The point data that a cluster represents is spread over an area. In this sample, when a cluster is clicked, two main behaviors occur. First, the individual data points contained in the cluster are used to calculate a convex hull. Then, the convex hull is displayed on the map to show an area. A convex hull is a polygon that wraps a set of points like an elastic band and can be calculated using the `MapMath.getConvexHull` method. All points contained in a cluster can be retrieved from the data source using the `getClusterLeaves` method.
++
+``` java
+//Create a data source and add it to the map.
+DataSource source = new DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true)
+);
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson");
+
+//Add data source to the map.
+map.sources.add(source);
+
+//Create a data source for the convex hull polygon. Since this will be updated frequently it is more efficient to separate this into its own data source.
+DataSource polygonDataSource = new DataSource();
+
+//Add data source to the map.
+map.sources.add(polygonDataSource);
+
+//Add a polygon layer and a line layer to display the convex hull.
+map.layers.add(new PolygonLayer(polygonDataSource));
+map.layers.add(new LineLayer(polygonDataSource));
+
+//Create a symbol layer to render the clusters.
+SymbolLayer clusterLayer = new SymbolLayer(source,
+ iconImage("marker-red"),
+ textField(get("point_count_abbreviated")),
+ textOffset(new Float[] { 0f, -1.2f }),
+ textColor(Color.WHITE),
+ textSize(14f),
+
+    //Only render data points that have a point_count property, which clusters do.
+ SymbolLayerOptions.filter(has("point_count"))
+);
+map.layers.add(clusterLayer);
+
+//Create a layer to render the individual locations.
+map.layers.add(new SymbolLayer(source,
+ //Filter out clustered points from this layer.
+ SymbolLayerOptions.filter(not(has("point_count")))
+));
+
+//Add a click event to the layer so we can calculate the convex hull of all the points within a cluster.
+map.events.add((OnFeatureClick) (features) -> {
+ if(features.size() > 0) {
+ //Get the clustered point from the event.
+ Feature cluster = features.get(0);
+
+ //Get all points in the cluster. Set the offset to 0 and the max long value to return all points.
+ FeatureCollection leaves = source.getClusterLeaves(cluster, Long.MAX_VALUE, 0);
+
+ //Get the point features from the feature collection.
+ List<Feature> childFeatures = leaves.features();
+
+        //When only two points are in a cluster, render a line.
+        if(childFeatures.size() == 2){
+            //Extract the geometry points from the child features.
+            List<Point> points = new ArrayList<>();
+
+ childFeatures.forEach(f -> {
+ points.add((Point)f.geometry());
+ });
+
+ //Create a line from the points.
+ polygonDataSource.setShapes(LineString.fromLngLats(points));
+ } else {
+ Polygon hullPolygon = MapMath.getConvexHull(leaves);
+
+ //Overwrite all data in the polygon data source with the newly calculated convex hull polygon.
+ polygonDataSource.setShapes(hullPolygon);
+ }
+ }
+
+ //Return true indicating if event should be consumed and not passed further to other listeners registered afterwards, false otherwise.
+ return true;
+}, clusterLayer);
+```
+++
+```kotlin
+//Create a data source and add it to the map.
+val source = DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true)
+)
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson")
+
+//Add data source to the map.
+map.sources.add(source)
+
+//Create a data source for the convex hull polygon. Since this will be updated frequently it is more efficient to separate this into its own data source.
+val polygonDataSource = DataSource()
+
+//Add data source to the map.
+map.sources.add(polygonDataSource)
+
+//Add a polygon layer and a line layer to display the convex hull.
+map.layers.add(PolygonLayer(polygonDataSource))
+map.layers.add(LineLayer(polygonDataSource))
+
+//Create a symbol layer to render the clusters.
+val clusterLayer = SymbolLayer(
+ source,
+ iconImage("marker-red"),
+ textField(get("point_count_abbreviated")),
+ textOffset(arrayOf(0f, -1.2f)),
+ textColor(Color.WHITE),
+ textSize(14f),
+
+    //Only render data points that have a point_count property, which clusters do.
+ SymbolLayerOptions.filter(has("point_count"))
+)
+map.layers.add(clusterLayer)
+
+//Create a layer to render the individual locations.
+map.layers.add(
+ SymbolLayer(
+ source,
+
+ //Filter out clustered points from this layer.
+ SymbolLayerOptions.filter(not(has("point_count")))
+ )
+)
+
+//Add a click event to the layer so we can calculate the convex hull of all the points within a cluster.
+map.events.add(OnFeatureClick { features: List<Feature> ->
+    if (features.size > 0) {
+        //Get the clustered point from the event.
+        val cluster: Feature = features[0]
+
+ //Get all points in the cluster. Set the offset to 0 and the max long value to return all points.
+ val leaves: FeatureCollection = source.getClusterLeaves(cluster, Long.MAX_VALUE, 0)
+
+ //Get the point features from the feature collection.
+ val childFeatures = leaves.features()
+
+        //When only two points are in a cluster, render a line.
+ if (childFeatures!!.size == 2) {
+ //Extract the geometry points from the child features.
+ val points: MutableList<Point?> = ArrayList()
+ childFeatures!!.forEach(Consumer { f: Feature ->
+ points.add(
+ f.geometry() as Point?
+ )
+ })
+
+ //Create a line from the points.
+ polygonDataSource.setShapes(LineString.fromLngLats(points))
+ } else {
+ val hullPolygon: Polygon = MapMath.getConvexHull(leaves)
+
+ //Overwrite all data in the polygon data source with the newly calculated convex hull polygon.
+ polygonDataSource.setShapes(hullPolygon)
+ }
+ }
+ true
+}, clusterLayer)
+```
++
+The following image shows the above code displaying the area of all points within a clicked cluster.
+
+![Map showing convex hull polygon of all points within a clicked cluster](media/clustering-point-data-android-sdk/android-cluster-leaves-convex-hull.gif)
+
+## Aggregating data in clusters
+
+Often clusters are represented using a symbol with the number of points that are within the cluster. But sometimes it's desirable to customize the style of clusters with additional metrics. With cluster properties, custom properties can be created that are equal to a calculation based on the properties of each point in a cluster. Cluster properties can be defined in the `clusterProperties` option of the `DataSource`.
+
+The following code calculates a count based on the entity type property of each data point in a cluster. When a user clicks a cluster, a popup appears with additional information about the cluster.
++
+``` java
+//An array of all entity type property names in features of the data set.
+String[] entityTypes = new String[] { "Gas Station", "Grocery Store", "Restaurant", "School" };
+
+//Create a popup and add it to the map.
+Popup popup = new Popup();
+map.popups.add(popup);
+
+//Close the popup initially.
+popup.close();
+
+//Create a data source and add it to the map.
+DataSource source = new DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(50),
+
+ //Calculate counts for each entity type in a cluster as custom aggregate properties.
+ clusterProperties(new ClusterProperty[]{
+ new ClusterProperty("Gas Station", sum(accumulated(), get("Gas Station")), switchCase(eq(get("EntityType"), literal("Gas Station")), literal(1), literal(0))),
+ new ClusterProperty("Grocery Store", sum(accumulated(), get("Grocery Store")), switchCase(eq(get("EntityType"), literal("Grocery Store")), literal(1), literal(0))),
+ new ClusterProperty("Restaurant", sum(accumulated(), get("Restaurant")), switchCase(eq(get("EntityType"), literal("Restaurant")), literal(1), literal(0))),
+ new ClusterProperty("School", sum(accumulated(), get("School")), switchCase(eq(get("EntityType"), literal("School")), literal(1), literal(0)))
+ })
+);
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/SamplePoiDataSet.json");
+
+//Add data source to the map.
+map.sources.add(source);
+
+//Create a bubble layer for rendering clustered data points.
+BubbleLayer clusterBubbleLayer = new BubbleLayer(source,
+ bubbleRadius(20f),
+ bubbleColor("purple"),
+ bubbleStrokeWidth(0f),
+
+    //Only render data points that have a point_count property, which clusters do.
+ BubbleLayerOptions.filter(has("point_count"))
+);
+
+//Add the clusterBubbleLayer and two additional layers to the map.
+map.layers.add(clusterBubbleLayer);
+
+//Create a symbol layer to render the count of locations in a cluster.
+map.layers.add(new SymbolLayer(source,
+ //Hide the icon image.
+ iconImage("none"),
+
+ //Display the 'point_count_abbreviated' property value.
+ textField(get("point_count_abbreviated")),
+
+ textColor(Color.WHITE),
+ textOffset(new Float[] { 0f, 0.4f }),
+
+    //Only render data points that have a point_count property, which clusters do.
+ SymbolLayerOptions.filter(has("point_count"))
+));
+
+//Create a layer to render the individual locations.
+map.layers.add(new SymbolLayer(source,
+ //Filter out clustered points from this layer.
+ SymbolLayerOptions.filter(not(has("point_count")))
+));
+
+//Add a click event to the cluster layer and display the aggregate details of the cluster.
+map.events.add((OnFeatureClick) (features) -> {
+ if(features.size() > 0) {
+ //Get the clustered point from the event.
+ Feature cluster = features.get(0);
+
+ //Create a number formatter that removes decimal places.
+ NumberFormat nf = DecimalFormat.getInstance();
+ nf.setMaximumFractionDigits(0);
+
+ //Create the popup's content.
+ StringBuilder sb = new StringBuilder();
+
+ sb.append("Cluster size: ");
+ sb.append(nf.format(cluster.getNumberProperty("point_count")));
+ sb.append(" entities\n");
+
+ for(int i = 0; i < entityTypes.length; i++) {
+ sb.append("\n");
+
+ //Get the entity type name.
+ sb.append(entityTypes[i]);
+ sb.append(": ");
+
+ //Get the aggregated entity type count from the properties of the cluster by name.
+ sb.append(nf.format(cluster.getNumberProperty(entityTypes[i])));
+ }
+
+ //Retrieve the custom layout for the popup.
+ View customView = LayoutInflater.from(this).inflate(R.layout.popup_text, null);
+
+ //Access the text view within the custom view and set the text to the title property of the feature.
+ TextView tv = customView.findViewById(R.id.message);
+ tv.setText(sb.toString());
+
+ //Get the position of the cluster.
+ Position pos = MapMath.getPosition((Point)cluster.geometry());
+
+ //Set the options on the popup.
+ popup.setOptions(
+ //Set the popups position.
+ position(pos),
+
+ //Set the anchor point of the popup content.
+ anchor(AnchorType.BOTTOM),
+
+ //Set the content of the popup.
+ content(customView)
+ );
+
+ //Open the popup.
+ popup.open();
+ }
+
+ //Return a boolean indicating if event should be consumed or continue bubble up.
+ return true;
+}, clusterBubbleLayer);
+```
+++
+```kotlin
+//An array of all entity type property names in features of the data set.
+val entityTypes = arrayOf("Gas Station", "Grocery Store", "Restaurant", "School")
+
+//Create a popup and add it to the map.
+val popup = Popup()
+map.popups.add(popup)
+
+//Close the popup initially.
+popup.close()
+
+//Create a data source and add it to the map.
+val source = DataSource(
+ //Tell the data source to cluster point data.
+ cluster(true),
+
+ //The radius in pixels to cluster points together.
+ clusterRadius(50),
+
+ //Calculate counts for each entity type in a cluster as custom aggregate properties.
+ clusterProperties(
+ arrayOf<ClusterProperty>(
+ ClusterProperty("Gas Station", sum(accumulated(), get("Gas Station")), switchCase(eq(get("EntityType"), literal("Gas Station")), literal(1), literal(0))),
+ ClusterProperty("Grocery Store", sum(accumulated(), get("Grocery Store")), switchCase(eq(get("EntityType"), literal("Grocery Store")), literal(1), literal(0))),
+ ClusterProperty("Restaurant", sum(accumulated(), get("Restaurant")), switchCase(eq(get("EntityType"), literal("Restaurant")), literal(1), literal(0))),
+ ClusterProperty("School", sum(accumulated(), get("School")), switchCase(eq(get("EntityType"), literal("School")), literal(1), literal(0)))
+ )
+ )
+)
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/SamplePoiDataSet.json")
+
+//Add data source to the map.
+map.sources.add(source)
+
+//Create a bubble layer for rendering clustered data points.
+val clusterBubbleLayer = BubbleLayer(
+ source,
+ bubbleRadius(20f),
+ bubbleColor("purple"),
+ bubbleStrokeWidth(0f),
+
+    //Only render data points that have a point_count property, which clusters do.
+ BubbleLayerOptions.filter(has("point_count"))
+)
+
+//Add the clusterBubbleLayer and two additional layers to the map.
+map.layers.add(clusterBubbleLayer)
+
+//Create a symbol layer to render the count of locations in a cluster.
+map.layers.add(
+ SymbolLayer(
+ source,
+
+ //Hide the icon image.
+ iconImage("none"),
+
+ //Display the 'point_count_abbreviated' property value.
+ textField(get("point_count_abbreviated")),
+
+ textColor(Color.WHITE),
+ textOffset(arrayOf(0f, 0.4f)),
+
+        //Only render data points that have a point_count property, which clusters do.
+ SymbolLayerOptions.filter(has("point_count"))
+ )
+)
+
+//Create a layer to render the individual locations.
+map.layers.add(
+ SymbolLayer(
+ source,
+
+ //Filter out clustered points from this layer.
+ SymbolLayerOptions.filter(not(has("point_count")))
+ )
+)
+
+//Add a click event to the cluster layer and display the aggregate details of the cluster.
+map.events.add(OnFeatureClick { features: List<Feature> ->
+ if (features.size > 0) {
+ //Get the clustered point from the event.
+ val cluster = features[0]
+
+ //Create a number formatter that removes decimal places.
+ val nf: NumberFormat = DecimalFormat.getInstance()
+ nf.setMaximumFractionDigits(0)
+
+ //Create the popup's content.
+ val sb = StringBuilder()
+
+ sb.append("Cluster size: ")
+ sb.append(nf.format(cluster.getNumberProperty("point_count")))
+ sb.append(" entities\n")
+
+ for (i in entityTypes.indices) {
+ sb.append("\n")
+
+ //Get the entity type name.
+ sb.append(entityTypes[i])
+ sb.append(": ")
+
+ //Get the aggregated entity type count from the properties of the cluster by name.
+ sb.append(nf.format(cluster.getNumberProperty(entityTypes[i])))
+ }
+
+ //Retrieve the custom layout for the popup.
+ val customView: View = LayoutInflater.from(this).inflate(R.layout.popup_text, null)
+
+ //Access the text view within the custom view and set the text to the title property of the feature.
+ val tv: TextView = customView.findViewById(R.id.message)
+ tv.text = sb.toString()
+
+ //Get the position of the cluster.
+ val pos: Position = MapMath.getPosition(cluster.geometry() as Point?)
+
+ //Set the options on the popup.
+ popup.setOptions(
+ //Set the popups position.
+ position(pos),
+
+ //Set the anchor point of the popup content.
+ anchor(AnchorType.BOTTOM),
+
+ //Set the content of the popup.
+ content(customView)
+ )
+
+ //Open the popup.
+ popup.open()
+ }
+
+ //Return a boolean indicating if event should be consumed or continue bubble up.
+ true
+}, clusterBubbleLayer)
+```
++
+The popup follows the steps outlined in the [display a popup](display-feature-information-android.md?#display-a-popup) document.
+
+The following image shows the above code displaying a popup with aggregated counts of each entity value type for all points in the clicked cluster.
+
+![Map showing popup of aggregated counts of entity types of all points in a cluster](media/clustering-point-data-android-sdk/android-cluster-aggregates.gif)
+
+## Next steps
+
+To add more data to your map:
+
+> [!div class="nextstepaction"]
+> [Create a data source](create-data-source-android-sdk.md)
+
+> [!div class="nextstepaction"]
+> [Add a symbol layer](how-to-add-symbol-to-android-map.md)
+
+> [!div class="nextstepaction"]
+> [Add a bubble layer](map-add-bubble-layer-android.md)
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/clustering-point-data-web-sdk.md
Title: Clustering point data on a map | Microsoft Azure Maps
+ Title: Clustering point data in the Web SDK | Microsoft Azure Maps
description: Learn how to cluster point data on maps. See how to use the Azure Maps Web SDK to cluster data, react to cluster mouse events, and display cluster aggregates.
-# Clustering point data
+# Clustering point data in the Web SDK
When visualizing many data points on the map, the points may overlap each other. The overlap may cause the map to become unreadable and difficult to use. Clustering point data is the process of combining point data that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points. When you work with a large number of data points, use the clustering process to improve your user experience.
azure-maps Create Data Source Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/create-data-source-android-sdk.md
feature.addStringProperty("custom-property", "value")
source.add(feature)
```
+> [!TIP]
+> GeoJSON data can be added to a `DataSource` instance using one of three methods: `add`, `importDataFromUrl`, and `setShapes`. The `setShapes` method provides an efficient way to overwrite all data in a data source. If you call the `clear` then `add` methods to replace all data in a data source, two render calls will be made to the map. The `setShapes` method clears and adds the data to the data source with a single render call to the map.
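+
+For example, a minimal Java sketch of the difference, assuming a `source` data source and a hypothetical `updatedFeature` replacement feature:
+
+``` java
+//Preferred: overwrite all data in the data source with a single render call.
+source.setShapes(updatedFeature);
+
+//Avoid: replacing data this way triggers two render calls.
+//source.clear();
+//source.add(updatedFeature);
+```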
+
::: zone-end

Alternatively the properties can be loaded into a JsonObject first then passed into the feature when creating it, as shown below.
val featureString = feature.toJson()
Most GeoJSON files contain a FeatureCollection. Read GeoJSON files as strings and use the `FeatureCollection.fromJson` method to deserialize them.
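
For example, a short sketch, assuming `json` holds the contents of a GeoJSON file read as a string and `source` is a `DataSource`:

``` java
//Deserialize the GeoJSON string into a feature collection.
FeatureCollection fc = FeatureCollection.fromJson(json);

//Add the feature collection to the data source.
source.add(fc);
```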
-The following code is a reusable class for importing data from the web or local assets folder as a string and returning it to the UI thread via a callback function.
+The `DataSource` class has a built-in method called `importDataFromUrl` that can load GeoJSON files using a URL to a file on the web or in the asset folder. This method **must** be called before the data source is added to the map.
+
+zone_pivot_groups: azure-maps-android
++
+``` java
+//Create a data source and add it to the map.
+DataSource source = new DataSource();
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("URL_or_FilePath_to_GeoJSON_data");
+
+//Examples:
+//source.importDataFromUrl("asset://sample_file.json");
+//source.importDataFromUrl("https://example.com/sample_file.json");
+
+//Add data source to the map.
+map.sources.add(source);
+```
+++
+```kotlin
+//Create a data source and add it to the map.
+val source = DataSource()
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("URL_or_FilePath_to_GeoJSON_data")
+
+//Examples:
+//source.importDataFromUrl("asset://sample_file.json")
+//source.importDataFromUrl("https://example.com/sample_file.json")
+
+//Add data source to the map.
+map.sources.add(source)
+```
++
+The `importDataFromUrl` method provides an easy way to load a GeoJSON feed into a data source but provides limited control over how the data is loaded and what happens after it's been loaded. The following code is a reusable class for importing data from the web or assets folder and returning it to the UI thread via a callback function. In the callback, you can add post-load logic to process the data, add it to the map, calculate its bounding box, and update the map's camera.
::: zone pivot="programming-language-java-android"
The code below shows how to use this utility to import GeoJSON data as a string
```java //Create a data source and add it to the map.
-DataSource dataSource = new DataSource();
-map.sources.add(dataSource);
+DataSource source = new DataSource();
+map.sources.add(source);
//Import the geojson data and add it to the data source. Utils.importData("URL_or_FilePath_to_GeoJSON_data",
Utils.importData("URL_or_FilePath_to_GeoJSON_data",
FeatureCollection fc = FeatureCollection.fromJson(result); //Add the feature collection to the data source.
- dataSource.add(fc);
+ source.add(fc);
//Optionally, update the maps camera to focus in on the data.
Utils.importData("SamplePoiDataSet.json", this) {
::: zone-end
+### Update a feature
+
+The `DataSource` class makes it easy to add and remove features. Updating the geometry or properties of a feature requires replacing the feature in the data source. There are two methods that can be used to update features:
+
+1. Create the new feature(s) with the desired updates and replace all features in the data source using the `setShapes` method. This method works well when you want to update all features in a data source.
++
+``` java
+DataSource source;
+
+private void onReady(AzureMap map) {
+ //Create a data source and add it to the map.
+ source = new DataSource();
+ map.sources.add(source);
+
+ //Create a feature and add it to the data source.
+ Feature myFeature = Feature.fromGeometry(Point.fromLngLat(0,0));
+ myFeature.addStringProperty("Name", "Original value");
+
+ source.add(myFeature);
+}
+
+private void updateFeature(){
+ //Create a new replacement feature with an updated geometry and property value.
+ Feature myNewFeature = Feature.fromGeometry(Point.fromLngLat(-10, 10));
+ myNewFeature.addStringProperty("Name", "New value");
+
+ //Replace all features to the data source with the new one.
+ source.setShapes(myNewFeature);
+}
+```
+++
+```kotlin
+var source: DataSource? = null
+
+private fun onReady(map: AzureMap) {
+ //Create a data source and add it to the map.
+ source = DataSource()
+ map.sources.add(source)
+
+ //Create a feature and add it to the data source.
+ val myFeature = Feature.fromGeometry(Point.fromLngLat(0.0, 0.0))
+ myFeature.addStringProperty("Name", "Original value")
+ source!!.add(myFeature)
+}
+
+private fun updateFeature() {
+ //Create a new replacement feature with an updated geometry and property value.
+ val myNewFeature = Feature.fromGeometry(Point.fromLngLat(-10.0, 10.0))
+ myNewFeature.addStringProperty("Name", "New value")
+
+ //Replace all features to the data source with the new one.
+ source!!.setShapes(myNewFeature)
+}
+```
++
+2. Keep track of the feature instance in a variable, and pass it into the data source's `remove` method to remove it. Create the new feature(s) with the desired updates, update the variable reference, and add it to the data source using the `add` method.
++
+``` java
+DataSource source;
+Feature myFeature;
+
+private void onReady(AzureMap map) {
+ //Create a data source and add it to the map.
+ source = new DataSource();
+ map.sources.add(source);
+
+ //Create a feature and add it to the data source.
+ myFeature = Feature.fromGeometry(Point.fromLngLat(0,0));
+ myFeature.addStringProperty("Name", "Original value");
+
+ source.add(myFeature);
+}
+
+private void updateFeature(){
+ //Remove the feature instance from the data source.
+ source.remove(myFeature);
+
+ //Get properties from original feature.
+ JsonObject props = myFeature.properties();
+
+ //Update a property.
+ props.addProperty("Name", "New value");
+
+ //Create a new replacement feature with an updated geometry.
+ myFeature = Feature.fromGeometry(Point.fromLngLat(-10, 10), props);
+
+ //Re-add the feature to the data source.
+ source.add(myFeature);
+}
+```
+++
+```kotlin
+var source: DataSource? = null
+var myFeature: Feature? = null
+
+private fun onReady(map: AzureMap) {
+ //Create a data source and add it to the map.
+ source = DataSource()
+ map.sources.add(source)
+
+ //Create a feature and add it to the data source.
+ myFeature = Feature.fromGeometry(Point.fromLngLat(0.0, 0.0))
+ myFeature.addStringProperty("Name", "Original value")
+ source!!.add(myFeature)
+}
+
+private fun updateFeature() {
+ //Remove the feature instance from the data source.
+ source!!.remove(myFeature)
+
+ //Get properties from original feature.
+ val props = myFeature!!.properties()
+
+ //Update a property.
+ props!!.addProperty("Name", "New value")
+
+ //Create a new replacement feature with an updated geometry.
+ myFeature = Feature.fromGeometry(Point.fromLngLat(-10.0, 10.0), props)
+
+ //Re-add the feature to the data source.
+ source!!.add(myFeature)
+}
+```
++
+> [!TIP]
+> If you have some data that is going to be regularly updated, and other data that will rarely change, it is best to split these into separate data source instances. When an update occurs in a data source, it forces the map to repaint all features in that data source. By splitting this data up, only the features that are regularly updated are repainted when an update occurs in that data source, while the features in the other data source aren't repainted. This helps with performance.
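+
+A minimal sketch of this separation (the source names are hypothetical):
+
+``` java
+//Source for data that rarely changes, such as region boundaries.
+DataSource staticSource = new DataSource();
+map.sources.add(staticSource);
+
+//Source for data that is updated frequently, such as vehicle positions.
+DataSource dynamicSource = new DataSource();
+map.sources.add(dynamicSource);
+
+//Updates to dynamicSource only repaint the layers attached to it;
+//features in staticSource aren't repainted.
+```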
+ ## Vector tile source A vector tile source describes how to access a vector tile layer. Use the `VectorTileSource` class to instantiate a vector tile source. Vector tile layers are similar to tile layers, but they aren't the same. A tile layer is a raster image. A vector tile layer is a compressed file, in **PBF** format. This compressed file contains vector map data, and one or more layers. The file can be rendered and styled on the client, based on the style of each layer. The data in a vector tile contains geographic features in the form of points, lines, and polygons. There are several advantages of using vector tile layers instead of raster tile layers:
The following code shows how to create a data source, add it to the map, and con
```java //Create a data source and add it to the map. DataSource source = new DataSource();+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("URL_or_FilePath_to_GeoJSON_data");
+
+//Add data source to the map.
map.sources.add(source); //Create a layer that defines how to render points in the data source and add it to the map. BubbleLayer layer = new BubbleLayer(source); map.layers.add(layer);-
-//Import the geojson data and add it to the data source.
-Utils.importData("URL_or_FilePath_to_GeoJSON_data",
- this,
- (String result) -> {
- //Parse the data as a GeoJSON Feature Collection.
- FeatureCollection fc = FeatureCollection.fromJson(result);
-
- //Add the feature collection to the data source.
- dataSource.add(fc);
-
- //Optionally, update the maps camera to focus in on the data.
-
- //Calculate the bounding box of all the data in the Feature Collection.
- BoundingBox bbox = MapMath.fromData(fc);
-
- //Update the maps camera so it is focused on the data.
- map.setCamera(
- bounds(bbox),
- padding(20));
- });
``` ::: zone-end
Utils.importData("URL_or_FilePath_to_GeoJSON_data",
```kotlin //Create a data source and add it to the map. val source = DataSource()
-map.sources.add(source)
-
-//Create a layer that defines how to render points in the data source and add it to the map.
-val layer = BubbleLayer(source)
-map.layers.add(layer)
//Import the geojson data and add it to the data source.
-Utils.importData("URL_or_FilePath_to_GeoJSON_data", this) {
- result: String? ->
- //Parse the data as a GeoJSON Feature Collection.
- val fc = FeatureCollection.fromJson(result!!)
-
- //Add the feature collection to the data source.
- dataSource.add(fc)
-
- //Optionally, update the maps camera to focus in on the data.
- //Calculate the bounding box of all the data in the Feature Collection.
- val bbox = MapMath.fromData(fc)
-
- //Update the maps camera so it is focused on the data.
- map.setCamera(
- bounds(bbox),
- padding(20)
- )
- }
+source.importDataFromUrl("URL_or_FilePath_to_GeoJSON_data")
+
+//Add data source to the map.
+map.sources.add(source)
``` ::: zone-end
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Use data-driven style expressions](create-data-source-android-sdk.md)
+> [!div class="nextstepaction"]
+> [Cluster point data](clustering-point-data-android-sdk.md)
+ > [!div class="nextstepaction"] > [Add a symbol layer](how-to-add-symbol-to-android-map.md)
See the following articles for more code samples to add to your maps:
> [Add a heat map](map-add-heat-map-layer-android.md) > [!div class="nextstepaction"]
-> [Web SDK Code samples](/samples/browse/?products=azure-maps)
+> [Web SDK Code samples](/samples/browse/?products=azure-maps)
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-android-sdk.md
Data expressions provide access to the property data in a feature.
| Expression | Return type | Description | ||-|-|
-| `accumulated()` | number | Gets the value of a cluster property accumulated so far. |
+| `accumulated()` | number | Gets the value of a cluster property accumulated so far. This can only be used in the `clusterProperties` option of a clustered `DataSource`. |
| `at(number | Expression, Expression)` | value | Retrieves an item from an array. | | `geometryType()` | string | Gets the feature's geometry type: Point, MultiPoint, LineString, MultiLineString, Polygon, MultiPolygon. | | `get(string | Expression)` \| `get(string | Expression, Expression)` | value | Gets the property value from the properties of the provided object. Returns null if the requested property is missing. |
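+
+As a minimal sketch, a `get` expression can data-drive a bubble layer's color; this assumes each feature has a `zoneColor` string property containing a valid color value, and uses the SDK's expression and option builders.
+
+``` java
+//Use the "zoneColor" property of each feature as the bubble color.
+BubbleLayer layer = new BubbleLayer(source,
+    bubbleColor(get("zoneColor"))
+);
+map.layers.add(layer);
+```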
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/data-driven-style-expressions-web-sdk.md
This video provides an overview of data-driven styling in the Azure Maps Web SDK
>[!VIDEO https://channel9.msdn.com/Shows/Internet-of-Things-Show/Data-Driven-Styling-with-Azure-Maps/player?format=ny]
-Expressions are represented as JSON arrays. The first element of an expression in the array is a string that specifies the name of the expression operator. For example, "+" or "case". The next elements (if any) are the arguments to the expression. Each argument is either a literal value (a string, number, boolean, or `null`), or another expression array. The following pseudocode defines the basic structure of an expression.
+Expressions are represented as JSON arrays. The first element of an expression in the array is a string that specifies the name of the expression operator. For example, "+" or "case". The next elements (if any) are the arguments to the expression. Each argument is either a literal value (a string, number, boolean, or `null`), or another expression array. The following pseudocode defines the basic structure of an expression.
```javascript [
The Azure Maps Web SDK supports many types of expressions. Expressions can be us
| [Variable binding expressions](#variable-binding-expressions) | Variable binding expressions store the results of a calculation in a variable and referenced elsewhere in an expression multiple times without having to recalculate the stored value. | | [Zoom expression](#zoom-expression) | Retrieves the current zoom level of the map at render time. |
-All examples in this document use the following feature to demonstrate different ways in which the different types of expressions can be used.
+All examples in this document use the following feature to demonstrate different ways in which the different types of expressions can be used.
```json {
- "type": "Feature",
- "geometry": {
- "type": "Point",
- "coordinates": [-122.13284, 47.63699]
- },
- "properties": {
+ "type": "Feature",
+ "geometry": {
+ "type": "Point",
+ "coordinates": [-122.13284, 47.63699]
+ },
+ "properties": {
"id": 123, "entityType": "restaurant", "revenue": 12345,
All examples in this document use the following feature to demonstrate different
"_style": { "fillColor": "red" }
- }
+ }
} ``` ## Data expressions
-Data expressions provide access to the property data in a feature.
+Data expressions provide access to the property data in a feature.
| Expression | Return type | Description | ||-|-|
Data expressions provide access to the property data in a feature.
**Examples**
-Properties of a feature can be accessed directly in an expression by using a `get` expression. This example uses the `zoneColor` value of the feature to specify the color property of a bubble layer.
+Properties of a feature can be accessed directly in an expression by using a `get` expression. This example uses the `zoneColor` value of the feature to specify the color property of a bubble layer.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
var layer = new atlas.layer.BubbleLayer(datasource, null, {
}); ```
-The following example allows both `Point` and `MultiPoint` features to be rendered.
+The following example allows both `Point` and `MultiPoint` features to be rendered.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
Math expressions provide mathematical operators to perform data-driven calculati
## Aggregate expression
-An aggregate expression defines a calculation that's processed over a set of data and can be used with the `clusterProperties` option of a `DataSource`. The output of these expressions must be a number or a boolean.
+An aggregate expression defines a calculation that's processed over a set of data and can be used with the `clusterProperties` option of a `DataSource`. The output of these expressions must be a number or a boolean.
An aggregate expression takes in three values: an operator value, an initial value, and an expression to retrieve a property from each feature in a data set to apply the aggregate operation on. This expression has the following format:
An aggregate expression takes in three values: an operator value, and initial va
[operator: string, initialValue: boolean | number, mapExpression: Expression] ``` -- operator: An expression function that's then applied to against all values calculated by the `mapExpression` for each point in the cluster. Supported operators:
- - For numbers: `+`, `*`, `max`, `min`
- - For Booleans: `all`, `any`
+- operator: An expression function that's applied against all values calculated by the `mapExpression` for each point in the cluster. Supported operators:
+ - For numbers: `+`, `*`, `max`, `min`
+ - For Booleans: `all`, `any`
- initialValue: An initial value against which the first calculated value is aggregated. - mapExpression: An expression that's applied against each point in the data set.
The `accumulated` expression gets the value of a cluster property accumulated s
Boolean expressions provide a set of boolean operators expressions for evaluating boolean comparisons.
-When comparing values, the comparison is strictly typed. Values of different types are always considered unequal. Cases where the types are known to be different at parse time are considered invalid and will produce a parse error.
+When comparing values, the comparison is strictly typed. Values of different types are always considered unequal. Cases where the types are known to be different at parse time are considered invalid and will produce a parse error.
| Expression | Return type | Description | ||-|-|
When comparing values, the comparison is strictly typed. Values of different typ
Conditional expressions provide logic operations that are like if-statements.
-The following expressions perform conditional logic operations on the input data. For example, the `case` expression provides "if/then/else" logic while the `match` expression is like a "switch-statement".
+The following expressions perform conditional logic operations on the input data. For example, the `case` expression provides "if/then/else" logic while the `match` expression is like a "switch-statement".
### Case expression A `case` expression is a type of conditional expression that provides "if/then/else" logic. This type of expression steps through a list of boolean conditions. It returns the output value of the first boolean condition to evaluate to true.
-The following pseudocode defines the structure of the `case` expression.
+The following pseudocode defines the structure of the `case` expression.
```javascript [ 'case',
- condition1: boolean,
- output1: value,
- condition2: boolean,
- output2: value,
- ...,
- fallback: value
+ condition1: boolean,
+ output1: value,
+ condition2: boolean,
+ output2: value,
+ ...,
+ fallback: value
] ``` **Example**
-The following example steps through different boolean conditions until it finds one that evaluates to `true`, and then returns that associated value. If no boolean condition evaluates to `true`, a fallback value will be returned.
+The following example steps through different boolean conditions until it finds one that evaluates to `true`, and then returns that associated value. If no boolean condition evaluates to `true`, a fallback value will be returned.
```javascript var layer = new atlas.layer.BubbleLayer(datasource, null, {
var layer = new atlas.layer.BubbleLayer(datasource, null, {
A `match` expression is a type of conditional expression that provides switch-statement-like logic. The input can be any expression such as `['get', 'entityType']` that returns a string or a number. Each label must be either a single literal value or an array of literal values, whose values must be all strings or all numbers. The input matches if any of the values in the array match. Each label must be unique. If the input type doesn't match the type of the labels, the result will be the fallback value.
-The following pseudocode defines the structure of the `match` expression.
+The following pseudocode defines the structure of the `match` expression.
```javascript [
var layer = new atlas.layer.BubbleLayer(datasource, null, {
### Coalesce expression
-A `coalesce` expression steps through a set of expressions until the first non-null value is obtained and returns that value.
+A `coalesce` expression steps through a set of expressions until the first non-null value is obtained and returns that value.
-The following pseudocode defines the structure of the `coalesce` expression.
+The following pseudocode defines the structure of the `coalesce` expression.
```javascript [
The following pseudocode defines the structure of the `coalesce` expression.
**Example**
-The following example uses a `coalesce` expression to set the `textField` option of a symbol layer. If the `title` property is missing from the feature or set to `null`, the expression will then try looking for the `subTitle` property, if its missing or `null`, it will then fall back to an empty string.
+The following example uses a `coalesce` expression to set the `textField` option of a symbol layer. If the `title` property is missing from the feature or set to `null`, the expression will then try looking for the `subTitle` property; if it's missing or `null`, it will fall back to an empty string.
```javascript var layer = new atlas.layer.SymbolLayer(datasource, null, {
var layer = new atlas.layer.SymbolLayer(datasource, null, {
The above expression renders a pin on the map with the text "64°F" overlaid on top of it as shown in the image below.
-<center>
-
-![String operator expression example](media/how-to-expressions/string-operator-expression.png) </center>
+![String operator expression example](media/how-to-expressions/string-operator-expression.png)
## Interpolate and Step expressions
Interpolate and step expressions can be used to calculate values along an interp
An `interpolate` expression can be used to calculate a continuous, smooth set of values by interpolating between stop values. An `interpolate` expression that returns color values produces a color gradient from which result values are selected. There are three types of interpolation methods that can be used in an `interpolate` expression:
-
-* `['linear']` - Interpolates linearly between the pair of stops.
-* `['exponential', base]` - Interpolates exponentially between the stops. The `base` value controls the rate at which the output increases. Higher values make the output increase more towards the high end of the range. A `base` value close to 1 produces an output that increases more linearly.
-* `['cubic-bezier', x1, y1, x2, y2]` - Interpolates using a [cubic Bezier curve](https://developer.mozilla.org/docs/Web/CSS/timing-function) defined by the given control points.
-Here is an example of what these different types of interpolations look like.
+- `['linear']` - Interpolates linearly between the pair of stops.
+- `['exponential', base]` - Interpolates exponentially between the stops. The `base` value controls the rate at which the output increases. Higher values make the output increase more towards the high end of the range. A `base` value close to 1 produces an output that increases more linearly.
+- `['cubic-bezier', x1, y1, x2, y2]` - Interpolates using a [cubic Bezier curve](https://developer.mozilla.org/docs/Web/CSS/timing-function) defined by the given control points.
+
+Here is an example of what these different types of interpolations look like.
| Linear | Exponential | Cubic Bezier | ||-|--| | ![Linear interpolation graph](media/how-to-expressions/linear-interpolation.png) | ![Exponential interpolation graph](media/how-to-expressions/exponential-interpolation.png) | ![Cubic Bezier interpolation graph](media/how-to-expressions/bezier-curve-interpolation.png) |
-The following pseudocode defines the structure of the `interpolate` expression.
+The following pseudocode defines the structure of the `interpolate` expression.
```javascript [
- 'interpolate',
- interpolation: ['linear'] | ['exponential', base] | ['cubic-bezier', x1, y1, x2, y2],
- input: number,
- stopInput1: number,
- stopOutput1: value1,
- stopInput2: number,
- stopOutput2: value2,
- ...
+ 'interpolate',
+ interpolation: ['linear'] | ['exponential', base] | ['cubic-bezier', x1, y1, x2, y2],
+ input: number,
+ stopInput1: number,
+ stopOutput1: value1,
+ stopInput2: number,
+ stopOutput2: value2,
+ ...
] ```
var layer = new atlas.layer.BubbleLayer(datasource, null, {
The following image demonstrates how the colors are chosen for the above expression.
-<center>
-
-![Interpolate expression example](media/how-to-expressions/interpolate-expression-example.png) </center>
+![Interpolate expression example](media/how-to-expressions/interpolate-expression-example.png)
### Step expression
-A `step` expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](http://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
+A `step` expression can be used to calculate discrete, stepped result values by evaluating a [piecewise-constant function](http://mathworld.wolfram.com/PiecewiseConstantFunction.html) defined by stops.
-The following pseudocode defines the structure of the `step` expression.
+The following pseudocode defines the structure of the `step` expression.
```javascript [
- 'step',
- input: number,
- output0: value0,
- stop1: number,
- output1: value1,
- stop2: number,
- output2: value2,
- ...
+ 'step',
+ input: number,
+ output0: value0,
+ stop1: number,
+ output1: value1,
+ stop2: number,
+ output2: value2,
+ ...
] ```
-Step expressions return the output value of the stop just before the input value, or the first input value if the input is less than the first stop.
+Step expressions return the output value of the stop just before the input value, or the first output value if the input is less than the first stop.
**Example**
var layer = new atlas.layer.BubbleLayer(datasource, null, {
``` The following image demonstrates how the colors are chosen for the above expression.
-
-<center>
![Step expression example](media/how-to-expressions/step-expression-example.png)
-</center>
## Layer-specific expressions
A heat map density expression retrieves the heat map density value for each pixe
**Example**
-This example uses a liner interpolation expression to create a smooth color gradient for rendering the heat map.
+This example uses a linear interpolation expression to create a smooth color gradient for rendering the heat map.
-```javascript
+```javascript
var layer = new atlas.layer.HeatMapLayer(datasource, null, { color: [ 'interpolate',
var layer = new atlas.layer.HeatMapLayer(datasource, null, {
In addition to using a smooth gradient to colorize a heat map, colors can be specified within a set of ranges by using a `step` expression. Using a `step` expression for colorizing the heat map visually breaks up the density into ranges that resemble a contour or radar-style map.
-```javascript
+```javascript
var layer = new atlas.layer.HeatMapLayer(datasource, null, { color: [ 'step',
var layer = new atlas.layer.LineLayer(datasource, null, {
The text field format expression can be used with the `textField` option of the symbol layer's `textOptions` property to provide mixed text formatting. This expression allows a set of input strings and formatting options to be specified. The following options can be specified for each input string in this expression.
- * `'font-scale'` - Specifies the scaling factor for the font size. If specified, this value will override the `size` property of the `textOptions` for the individual string.
- * `'text-font'` - Specifies one or more font families that should be used for this string. If specified, this value will override the `font` property of the `textOptions` for the individual string.
+- `'font-scale'` - Specifies the scaling factor for the font size. If specified, this value will override the `size` property of the `textOptions` for the individual string.
+- `'text-font'` - Specifies one or more font families that should be used for this string. If specified, this value will override the `font` property of the `textOptions` for the individual string.
-The following pseudocode defines the structure of the text field format expression.
+The following pseudocode defines the structure of the text field format expression.
```javascript [
var layer = new atlas.layer.SymbolLayer(datasource, null, {
``` This layer will render the point feature as shown in the image below:
-
-<center>
-![Image of Point feature with formatted text field](media/how-to-expressions/text-field-format-expression.png) </center>
+![Image of Point feature with formatted text field](media/how-to-expressions/text-field-format-expression.png)
### Number format expression The `number-format` expression can only be used with the `textField` option of a symbol layer. This expression converts the provided number into a formatted string. This expression wraps JavaScript's [Number.toLocaleString](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString) function and supports the following set of options.
- * `locale` - Specify this option for converting numbers to strings in a way that aligns with the specified language. Pass a [BCP 47 language tag](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Intl#Locale_identification_and_negotiation) into this option.
- * `currency` - To convert the number into a string representing a currency. Possible values are the [ISO 4217 currency codes](https://en.wikipedia.org/wiki/ISO_4217), such as "USD" for the US dollar, "EUR" for the euro, or "CNY" for the Chinese RMB.
- * `'min-fraction-digits'` - Specifies the minimum number of decimal places to include in the string version of the number.
- * `'max-fraction-digits'` - Specifies the maximum number of decimal places to include in the string version of the number.
+- `locale` - Specify this option for converting numbers to strings in a way that aligns with the specified language. Pass a [BCP 47 language tag](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Intl#Locale_identification_and_negotiation) into this option.
+- `currency` - To convert the number into a string representing a currency. Possible values are the [ISO 4217 currency codes](https://en.wikipedia.org/wiki/ISO_4217), such as "USD" for the US dollar, "EUR" for the euro, or "CNY" for the Chinese RMB.
+- `'min-fraction-digits'` - Specifies the minimum number of decimal places to include in the string version of the number.
+- `'max-fraction-digits'` - Specifies the maximum number of decimal places to include in the string version of the number.
-The following pseudocode defines the structure of the text field format expression.
+The following pseudocode defines the structure of the text field format expression.
```javascript [
- 'number-format',
- input: number,
- options: {
- locale: string,
- currency: string,
- 'min-fraction-digits': number,
- 'max-fraction-digits': number
- }
+ 'number-format',
+ input: number,
+ options: {
+ locale: string,
+ currency: string,
+ 'min-fraction-digits': number,
+ 'max-fraction-digits': number
+ }
] ```
var layer = new atlas.layer.SymbolLayer(datasource, null, {
This layer will render the point feature as shown in the image below:
-<center>
-
-![Number format expression example](media/how-to-expressions/number-format-expression.png) </center>
+![Number format expression example](media/how-to-expressions/number-format-expression.png)
### Image expression
An image expression can be used with the `image` and `textField` options of a sy
**Example**
-The following example uses an `image` expression to add an icon inline with text in a symbol layer.
+The following example uses an `image` expression to add an icon inline with text in a symbol layer.
```javascript //Load the custom image icon into the map resources. map.imageSprite.add('wifi-icon', 'wifi.png').then(function () {
- //Create a data source and add it to the map.
- datasource = new atlas.source.DataSource();
- map.sources.add(datasource);
-
- //Create a point feature and add it to the data source.
- datasource.add(new atlas.data.Point(map.getCamera().center));
-
- //Add a layer for rendering point data as symbols.
- map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
- iconOptions: {
- image: 'none'
- },
- textOptions: {
- //Create a formatted text string that has an icon in it.
- textField: ["format", 'Ricky\'s ', ["image", "wifi-icon"], ' Palace']
- }
- }));
+ //Create a data source and add it to the map.
+ datasource = new atlas.source.DataSource();
+ map.sources.add(datasource);
+
+ //Create a point feature and add it to the data source.
+ datasource.add(new atlas.data.Point(map.getCamera().center));
+
+ //Add a layer for rendering point data as symbols.
+ map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
+ iconOptions: {
+ image: 'none'
+ },
+ textOptions: {
+ //Create a formatted text string that has an icon in it.
+ textField: ["format", 'Ricky\'s ', ["image", "wifi-icon"], ' Palace']
+ }
+ }));
}); ``` This layer will render the text field in the symbol layer as shown in the image below:
-<center>
-
-![Image expression example](media/how-to-expressions/image-expression.png) </center>
+![Image expression example](media/how-to-expressions/image-expression.png)
## Zoom expression
var layer = new atlas.layer.BubbleLayer(datasource, null, {
See the following articles for more code samples that implement expressions:
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Add a symbol layer](map-add-pin.md)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Add a bubble layer](map-add-bubble-layer.md) > [!div class="nextstepaction"]
See the following articles for more code samples that implement expressions:
> [!div class="nextstepaction"] > [Add a polygon layer](map-add-shape.md)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Add a heat map layer](map-add-heat-map-layer.md) Learn more about the layer options that support expressions:
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [BubbleLayerOptions](/javascript/api/azure-maps-control/atlas.bubblelayeroptions)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [HeatMapLayerOptions](/javascript/api/azure-maps-control/atlas.heatmaplayeroptions)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [LineLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [PolygonLayerOptions](/javascript/api/azure-maps-control/atlas.polygonlayeroptions)
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [SymbolLayerOptions](/javascript/api/azure-maps-control/atlas.symbollayeroptions)
azure-maps Display Feature Information Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/display-feature-information-android.md
map.events.add((OnFeatureClick)(feature) -> {
TextView tv = customView.findViewById(R.id.message); tv.setText(props.get("title").getAsString());
- //Get the coordinates from the clicked feature and create a position object.
- List<Double> c = ((Point)(f.geometry())).coordinates();
- Position pos = new Position(c.get(0), c.get(1));
+ //Get the position of the clicked feature.
+    Position pos = MapMath.getPosition((Point)f.geometry());
//Set the options on the popup. popup.setOptions(
map.events.add((OnFeatureClick)(feature) -> {
//Optionally, hide the close button of the popup. //, closeButton(false)
+
+ //Optionally offset the popup by a specified number of pixels.
+ //pixelOffset(new Pixel(10, 10))
); //Open the popup.
map.events.add(OnFeatureClick { feature: List<Feature> ->
val tv: TextView = customView.findViewById(R.id.message) tv.text = props!!["title"].asString
- //Get the coordinates from the clicked feature and create a position object.
- val c: List<Double> = (f.geometry() as Point?).coordinates()
- val pos = Position(c[0], c[1])
+ //Get the position of the clicked feature.
+    val pos = MapMath.getPosition(f.geometry() as Point?)
//Set the options on the popup. popup.setOptions(
map.events.add(OnFeatureClick { feature: List<Feature> ->
//Optionally, hide the close button of the popup. //, closeButton(false)
+
+ //Optionally offset the popup by a specified number of pixels.
+ //pixelOffset(Pixel(10, 10))
) //Open the popup. popup.open() //Return a boolean indicating if event should be consumed or continue bubble up.
- return false
+ false
}) ```
azure-maps How To Add Symbol To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-add-symbol-to-android-map.md
map.layers.add(layer)
::: zone-end
-The following screenshot shows the above code rending a point feature using an icon and text label with a symbol layer.
+The following screenshot shows the above code rendering a point feature using an icon and text label with a symbol layer.
![Map with point rendered using a symbol layer displaying an icon and text label for a point feature](media/how-to-add-symbol-to-android-map/android-map-pin.png)
val layer = SymbolLayer(
::: zone-end
-For this sample, the following image was loaded into the drawable folder of the app.
+For this sample, the following image is loaded into the drawable folder of the app.
| ![Weather icon image of rain showers](media/how-to-add-symbol-to-android-map/showers.png)| |:--:| | showers.png |
-The following screenshot shows the above code rending a point feature using a custom icon and formatted text label with a symbol layer.
+The following screenshot shows the above code rendering a point feature using a custom icon and formatted text label with a symbol layer.
![Map with point rendered using a symbol layer displaying a custom icon and formatted text label for a point feature](media/how-to-add-symbol-to-android-map/android-custom-symbol-layer.png)
val layer = SymbolLayer(source,
::: zone-end
-The table below lists all of the built-in icon image names available. All of these markers pull its colors from color resources that you can override. In addition to overriding the main fill color of this marker. However, note that overriding the color of one of these markers would be apply to all layers that use that icon image.
+The table below lists all of the built-in icon image names available. These markers pull their colors from color resources that you can override, including the main fill color. However, overriding the color of one of these markers applies to all layers that use that icon image.
| Icon image name | Color resource name | |--||
-| `marker-default` | `mapcontrol_marker_default` |
-| `marker-black` | `mapcontrol_marker_black` |
-| `marker-blue` | `mapcontrol_marker_blue` |
-| `marker-darkblue` | `mapcontrol_marker_darkblue` |
-| `marker-red` | `mapcontrol_marker_red` |
-| `marker-yellow` | `mapcontrol_marker_yellow` |
+| `marker-default` | `azure_maps_marker_default` |
+| `marker-black` | `azure_maps_marker_black` |
+| `marker-blue` | `azure_maps_marker_blue` |
+| `marker-darkblue` | `azure_maps_marker_darkblue` |
+| `marker-red` | `azure_maps_marker_red` |
+| `marker-yellow` | `azure_maps_marker_yellow` |
-You can also override the border color of all markers using the `mapcontrol_marker_border` color resource name. The colors of these markers can be overridden by adding a color with the same name in the `colors.xml` file of your app. For example, the following `colors.xml` file would make the default marker color bright green.
+You can also override the border color of all markers using the `azure_maps_marker_border` color resource name. The colors of these markers can be overridden by adding a color with the same name in the `colors.xml` file of your app. For example, the following `colors.xml` file would make the default marker color bright green.
```xml <?xml version="1.0" encoding="utf-8"?> <resources>
- <color name="mapcontrol_marker_default">#00FF00</color>
+ <color name="azure_maps_marker_default">#00FF00</color>
</resources> ```
-The following is a modified version of the default marker vector XML that you can modify to create additional custom versions of the default marker. The modified version can be added to the `drawable` folder of your app and added to the maps image sprite using `map.images.add`, then used with a symbol layer.
+The following code is a modified version of the default marker vector XML that you can modify to create custom versions of the default marker. The modified version can be added to the `drawable` folder of your app and added to the map's image sprite using `map.images.add`, then used with a symbol layer, as sketched after the XML.
```xml <vector xmlns:android="http://schemas.android.com/apk/res/android"
The following is a modified version of the default marker vector XML that you ca
<path android:pathData="M12.25,0.25a12.2543,12.2543 0,0 0,-12 12.4937c0,6.4436 6.4879,12.1093 11.059,22.5641 0.5493,1.2563 1.3327,1.2563 1.882,0C17.7621,24.8529 24.25,19.1857 24.25,12.7437A12.2543,12.2543 0,0 0,12.25 0.25Z" android:strokeWidth="0.5"
- android:fillColor="@color/mapcontrol_marker_default"
- android:strokeColor="@color/mapcontrol_marker_border"/>
+ android:fillColor="@color/azure_maps_marker_default"
+ android:strokeColor="@color/azure_maps_marker_border"/>
</vector> ```
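A rough usage sketch follows; the file name `my_custom_marker.xml` and image ID `my-custom-marker` are hypothetical.

``` java
//Load the custom marker drawable into the map's image sprite.
map.images.add("my-custom-marker", R.drawable.my_custom_marker);

//Create a symbol layer that references the custom image and add it to the map.
SymbolLayer layer = new SymbolLayer(source,
    iconImage("my-custom-marker")
);
map.layers.add(layer);
```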
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Create a data source](create-data-source-android-sdk.md)
+> [!div class="nextstepaction"]
+> [Cluster point data](clustering-point-data-android-sdk.md)
+ > [!div class="nextstepaction"] > [Add a bubble layer](map-add-bubble-layer-android.md)
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-render-custom-data.md
To check the status of the data upload and retrieve its unique ID (`udid`):
https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0&subscription-key={subscription-key} ```
-6. Select **Send**.
+6. Using Postman, make a GET request with the above URL. In the response header, retrieve the operations metadata URL from the `Resource-Location` property. This URI will be in the following format:
+
+ ```HTTP
+    https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0
+ ```
+
+7. Copy the operations metadata URI and append the subscription-key parameter to it with the value of your Azure Maps account subscription key. Use the same account subscription key that you used to upload the data. The status URI format should look like the one below:
+
+ ```HTTP
+    https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Subscription-key}
+ ```
+
+8. To get the `udid`, open a new tab in the Postman app. Select the GET HTTP method on the builder tab and make a GET request at the status URI. If your data upload was successful, you'll receive a `udid` in the response body. Copy the `udid`.
-7. In the response window, select the **Headers** tab.
+9. Select **Send**.
+
+10. In the response window, select the **Headers** tab.
-8. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the drawing package resource.
+11. Copy the value of the **Resource-Location** key, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the drawing package resource.
:::image type="content" source="./media/how-to-render-custom-data/resource-location-url.png" alt-text="Copy the resource location URL.":::
azure-maps How To Show Traffic Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-show-traffic-android.md
There are two types of traffic data available in Azure Maps:
- Incident data - consists of point and line-based data for things such as construction, road closures, and accidents. - Flow data - provides metrics on the flow of traffic on the roads. Often, traffic flow data is used to color the roads. The colors are based on how much traffic is slowing down the flow, relative to the speed limit, or another metric. There are four values that can be passed into the traffic `flow` option of the map.
- |Flow Value | Description|
+ |Flow enum | Description|
| :-- | :-- |
- | TrafficFlow.NONE | Doesn't display traffic data on the map |
- | TrafficFlow.RELATIVE | Shows traffic data that's relative to the free-flow speed of the road |
- | TrafficFlow.RELATIVE_DELAY | Displays areas that are slower than the average expected delay |
- | TrafficFlow.ABSOLUTE | Shows the absolute speed of all vehicles on the road |
+ | `TrafficFlow.NONE` | Doesn't display traffic data on the map |
+ | `TrafficFlow.RELATIVE` | Shows traffic data that's relative to the free-flow speed of the road |
+ | `TrafficFlow.RELATIVE_DELAY` | Displays areas that are slower than the average expected delay |
+ | `TrafficFlow.ABSOLUTE` | Shows the absolute speed of all vehicles on the road |
The following code shows how to display traffic data on the map.
The following screenshot shows the above code rendering real-time traffic inform
![Map showing real-time traffic information with a toast message displaying incident details](media/how-to-show-traffic-android/android-traffic-details.png)
+## Filter traffic incidents
+
+On a typical day in most major cities, there can be an overwhelming number of traffic incidents. However, depending on your scenario, it may be desirable to filter and display a subset of these incidents. When setting traffic options, the `incidentCategoryFilter` and `incidentMagnitudeFilter` options take in an array of incident category or magnitude enumerations, or their string values.
+
+The following table shows all the traffic incident categories that can be used within the `incidentCategoryFilter` option.
+
+| Category enum | String value | Description |
+|--|--|-|
+| `IncidentCategory.UNKNOWN` | `"unknown"` | An incident that either doesn't fit any of the defined categories or hasn't yet been classified. |
+| `IncidentCategory.ACCIDENT` | `"accident"` | Traffic accident. |
+| `IncidentCategory.FOG` | `"fog"` | Fog that reduces visibility, likely reducing traffic flow, and possibly increasing the risk of an accident. |
+| `IncidentCategory.DANGEROUS_CONDITIONS` | `"dangerousConditions"` | Dangerous situation on the road, such as an object on the road. |
+| `IncidentCategory.RAIN` | `"rain"` | Heavy rain that may be reducing visibility, making driving conditions difficult, and possibly increasing the risk of an accident. |
+| `IncidentCategory.ICE` | `"ice"` | Icy road conditions that may make driving difficult or dangerous. |
+| `IncidentCategory.JAM` | `"jam"` | Traffic jam resulting in slower moving traffic. |
+| `IncidentCategory.LANE_CLOSED` | `"laneClosed"` | A road lane is closed. |
+| `IncidentCategory.ROAD_CLOSED` | `"roadClosed"` | A road is closed. |
+| `IncidentCategory.ROAD_WORKS` | `"roadWorks"` | Road works/construction in this area. |
+| `IncidentCategory.WIND` | `"wind"` | High winds that may make driving difficult for vehicles with a large side profile or high center of gravity. |
+| `IncidentCategory.FLOODING` | `"flooding"` | Flooding occurring on road. |
+| `IncidentCategory.DETOUR` | `"detour"` | Traffic being directed to take a detour. |
+| `IncidentCategory.CLUSTER` | `"cluster"` | A cluster of traffic incidents of different categories. Zooming in the map will result in the cluster breaking apart into its individual incidents. |
+| `IncidentCategory.BROKEN_DOWN_VEHICLE` | `"brokenDownVehicle"` | Broken down vehicle on or beside road. |
+
+The following table shows all the traffic incident magnitudes that can be used within the `incidentMagnitudeFilter` option.
+
+| Magnitude enum | String value | Description |
+|--|--|-|
+| `IncidentMagnitude.UNKNOWN` | `"unknown"` | An incident whose magnitude hasn't yet been classified. |
+| `IncidentMagnitude.MINOR` | `"minor"` | A minor traffic issue that is often just for information and has minimal impact on traffic flow. |
+| `IncidentMagnitude.MODERATE` | `"moderate"` | A moderate traffic issue that has some impact on traffic flow. |
+| `IncidentMagnitude.MAJOR` | `"major"` | A major traffic issue that has a significant impact on traffic flow. |
+
+The following code filters traffic incidents so that only moderate traffic jams and incidents with dangerous conditions are displayed on the map.
++
+``` java
+map.setTraffic(
+ incidents(true),
+ incidentMagnitudeFilter(new String[] { IncidentMagnitude.MODERATE }),
+ incidentCategoryFilter(new String[] { IncidentCategory.DANGEROUS_CONDITIONS, IncidentCategory.JAM })
+);
+```
+++
+```kotlin
+map.setTraffic(
+ incidents(true),
+ incidentMagnitudeFilter(*arrayOf(IncidentMagnitude.MODERATE)),
+ incidentCategoryFilter(
+ *arrayOf(
+ IncidentCategory.DANGEROUS_CONDITIONS,
+ IncidentCategory.JAM
+ )
+ )
+)
+```
++
+The following screenshot shows a map of moderate traffic jams and incidents with dangerous conditions.
+
+![Map of moderate traffic jams and incidents with dangerous conditions.](media/how-to-show-traffic-android/android-traffic-incident-filters.jpg)
+
+> [!NOTE]
+> Some traffic incidents may have multiple categories assigned to them. If an incident has any category that matches any option passed into `incidentCategoryFilter`, it will be displayed. The primary incident category may be different from the categories specified in the filter and thus display a different icon.
+ ## Next steps View the following guides to learn how to add more data to your map:
azure-maps How To Use Android Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-use-android-map-control-library.md
Be sure to complete the steps in the [Quickstart: Create an Android app](quick-a
The Azure Maps Android SDK provides three different ways of setting the language and regional view of the map. The following code shows how to set the language to French ("fr-FR") and the regional view to "Auto".
-The first option is to pass the language and view regional information into the `AzureMaps` class using the static `setLanguage` and `setView` methods globally. This will set the default language and regional view across all Azure Maps controls loaded in your app.
+The first option is to pass the language and view regional information into the `AzureMaps` class using the static `setLanguage` and `setView` methods globally. This code will set the default language and regional view across all Azure Maps controls loaded in your app.
::: zone pivot="programming-language-java-android"
companion object {
The second option is to pass the language and view information into the map control XML. ```XML
-<com.microsoft.azure.maps.mapcontrol.MapControl
+<com.azure.android.maps.control.MapControl
android:id="@+id/myMap" android:layout_width="match_parent" android:layout_height="match_parent"
- app:mapcontrol_language="fr-FR"
- app:mapcontrol_view="Auto"
+ app:azure_maps_language="fr-FR"
+ app:azure_maps_view="Auto"
/> ```
-The third option is to programmatically set the language and regional view of the map using the maps `setStyle` method. This can be done at any time to change the language and regional view of the map.
+The third option is to programmatically set the language and regional view of the map using the map's `setStyle` method. This can be done at any time to change the language and regional view of the map.
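+
+For example, a minimal sketch (assuming the SDK's `language` and `view` style option builders):
+
+``` java
+//Set the language and regional view of the map at runtime.
+map.setStyle(
+    language("fr-FR"),
+    view("Auto")
+);
+```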
::: zone pivot="programming-language-java-android"
AzureMaps.setDomain("atlas.azure.us")
Be sure to use Azure Maps authentication details from the Azure Government cloud platform when authenticating the map and services.
+## Migrating from a preview version
+
+With the move from preview to general availability, some breaking changes were introduced into the Azure Maps Android SDK. The following are the key details:
+
+* The maven identifier changed from `"com.microsoft.azure.maps:mapcontrol:0.7"` to `"com.azure.android:azure-maps-control:1.0.0"`. The namespace and the major version number have changed.
+* The import namespace has changed from `com.microsoft.azure.maps.mapcontrol` to `com.azure.android.maps.control`.
+* Resource names for XML options, color resources, and image resources have had the text `mapcontrol_` replaced with `azure_maps_`.
+
+ **Before:**
+
+ ```xml
+ <com.microsoft.azure.maps.mapcontrol.MapControl
+ android:id="@+id/myMap"
+ android:layout_width="match_parent"
+ android:layout_height="match_parent"
+ app:mapcontrol_language="fr-FR"
+ app:mapcontrol_view="Auto"
+ app:mapcontrol_centerLat="47.602806"
+ app:mapcontrol_centerLng="-122.329330"
+ app:mapcontrol_zoom="12"
+ />
+ ```
+
+ **After:**
+
+ ```xml
+ <com.azure.android.maps.control.MapControl
+ android:id="@+id/myMap"
+ android:layout_width="match_parent"
+ android:layout_height="match_parent"
+ app:azure_maps_language="fr-FR"
+ app:azure_maps_view="Auto"
+ app:azure_maps_centerLat="47.602806"
+ app:azure_maps_centerLng="-122.329330"
+ app:azure_maps_zoom="12"
+ />
+ ```
+ ## Next steps Learn how to add overlay data on the map:
azure-maps Map Add Bubble Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-bubble-layer-android.md
map.layers.add(
::: zone-end
-The following screenshot shows the above code rending a point in a bubble layer and a text label for the point with a symbol layer.
+The following screenshot shows the above code rendering a point in a bubble layer and a text label for the point with a symbol layer.
![Map with point rendered using a bubble layer and a text label with symbol layer](media/map-add-bubble-layer-android/android-bubble-symbol-layer.png)
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Create a data source](create-data-source-android-sdk.md)
+> [!div class="nextstepaction"]
+> [Cluster point data](clustering-point-data-android-sdk.md)
+ > [!div class="nextstepaction"] > [Add a symbol layer](how-to-add-symbol-to-android-map.md)
azure-maps Map Add Controls Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-controls-android.md
The screenshot below is of a traffic control loaded on a map.
![Traffic control added to map](media/map-add-controls-android/android-traffic-control.jpg)
-## A Map with all controls
+## A map with all controls
-Multiple controls can be put into an array and added to the map all at once and positioned in the same area of the map to simplify development. The following adds the standard navigation controls to the map using this approach.
+Multiple controls can be put into an array and added to the map all at once and positioned in the same area of the map to simplify development. The following code adds the standard navigation controls to the map using this approach.
::: zone pivot="programming-language-java-android"
map.controls.add(
::: zone-end
-The screenshot below shows all controls loaded on a map. Note that the order they are added to the map, is the order they will appear.
+The screenshot below shows all controls loaded on a map. The order in which they are added to the map is the order in which they will appear.
![All controls added to map](media/map-add-controls-android/android-all-controls.jpg)
azure-maps Map Add Heat Map Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-heat-map-layer-android.md
zone_pivot_groups: azure-maps-android
# Add a heat map layer (Android SDK)
-Heat maps, also known as point density maps, are a type of data visualization. They're used to represent the density of data using a range of colors and show the data "hot spots" on a map. Heat maps are a great way to render datasets with large number of points.
+Heat maps, also known as point density maps, are a type of data visualization. They're used to represent the density of data using a range of colors and show the data "hot spots" on a map. Heat maps are a great way to render datasets with a large number of points.
Rendering tens of thousands of points as symbols can cover most of the map area. This case likely results in many symbols overlapping, making it difficult to gain a better understanding of the data. However, visualizing this same dataset as a heat map makes it easy to see the density and the relative density of each data point.
The following code sample loads a GeoJSON feed of earthquakes from the past week
```java //Create a data source and add it to the map. DataSource source = new DataSource();+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson");
+
+//Add data source to the map.
map.sources.add(source); //Create a heat map layer.
HeatMapLayer layer = new HeatMapLayer(source,
//Add the layer to the map, below the labels. map.layers.add(layer, "labels");-
-//Import the geojson data and add it to the data source.
-Utils.importData("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson",
- this,
- (String result) -> {
- //Parse the data as a GeoJSON Feature Collection.
- FeatureCollection fc = FeatureCollection.fromJson(result);
-
- //Add the feature collection to the data source.
- source.add(fc);
-
- //Optionally, update the maps camera to focus in on the data.
-
- //Calculate the bounding box of all the data in the Feature Collection.
- BoundingBox bbox = MapMath.fromData(fc);
-
- //Update the maps camera so it is focused on the data.
- map.setCamera(
- bounds(bbox),
- padding(20));
- });
``` ::: zone-end
Utils.importData("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_
```kotlin //Create a data source and add it to the map. val source = DataSource()+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson")
+
+//Add data source to the map.
map.sources.add(source) //Create a heat map layer.
val layer = HeatMapLayer(
//Add the layer to the map, below the labels.
map.layers.add(layer, "labels")
-
-//Import the geojson data and add it to the data source.
-Utils.importData("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson",
- this
-) { result: String? ->
- //Parse the data as a GeoJSON Feature Collection.
- val fc = FeatureCollection.fromJson(result!!)
-
- //Add the feature collection to the data source.
- source.add(fc)
-
- //Optionally, update the maps camera to focus in on the data.
- //Calculate the bounding box of all the data in the Feature Collection.
- val bbox = MapMath.fromData(fc)
-
- //Update the maps camera so it is focused on the data.
- map.setCamera(
- bounds(bbox),
- padding(20)
- )
-}
```
::: zone-end
The previous example customized the heat map by setting the radius and opacity options.
- `heatmapRadius`: Defines a pixel radius in which to render each data point. You can set the radius as a fixed number or as an expression. By using an expression, you can scale the radius based on the zoom level, and represent a consistent spatial area on the map (for example, a 5-mile radius).
- `heatmapColor`: Specifies how the heat map is colorized. A color gradient is a common feature of heat maps. You can achieve the effect with an `interpolate` expression. You can also use a `step` expression for colorizing the heat map, breaking up the density visually into ranges that resemble a contour or radar style map. These color palettes define the colors from the minimum to the maximum density value.
- You specify color values for heat maps as an expression on the `heatmapDensity` value. The color of area where there's no data is defined at index 0 of the "Interpolation" expression, or the default color of a "Stepped" expression. You can use this value to define a background color. Often, this value is set to transparent, or a semi-transparent black.
+   You specify color values for heat maps as an expression on the `heatmapDensity` value. The color of the area where there's no data is defined at index 0 of the "Interpolation" expression, or the default color of a "Stepped" expression. You can use this value to define a background color. Often, this value is set to transparent, or a semi-transparent black.
Here are examples of color expressions:
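As an illustration, here's a minimal Java sketch of an interpolation color expression applied to a heat map layer. It assumes the SDK's expression helpers (`interpolate`, `linear`, `heatmapDensity`, `stop`, and `color`) are statically imported, as in the other samples in this article.

```java
//Colorize the heat map with a linear gradient: transparent where there's no data,
//then purple, pink, and blue as the density increases.
HeatMapLayer layer = new HeatMapLayer(source,
    heatmapColor(
        interpolate(
            linear(),
            heatmapDensity(),
            stop(0, color(Color.TRANSPARENT)),
            stop(0.01, color(Color.parseColor("#800080"))),
            stop(0.5, color(Color.parseColor("#fb00fb"))),
            stop(1, color(Color.parseColor("#00c3ff")))
        )
    )
);
```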
The following video shows a map running the above code, which scales the radius for each zoom level.
![Animation showing a map zooming with a heat map layer showing a consistent geospatial size](media/map-add-heat-map-layer-android/android-consistent-zoomable-heat-map-layer.gif)
+
+> [!TIP]
+> When you enable clustering on the data source, points that are close to one another are grouped together as a clustered point. You can use the point count of each cluster as the weight expression for the heat map. This can significantly reduce the number of points to be rendered. The point count of a cluster is stored in a `point_count` property of the point feature:
+>
+> ```java
+> HeatMapLayer layer = new HeatMapLayer(dataSource,
+> heatmapWeight(get("point_count"))
+> );
+> ```
+>
+> If the clustering radius is only a few pixels, there would be a small visual difference in the rendering. A larger radius groups more points into each cluster, and improves the performance of the heat map.
+++
+> [!TIP]
+> When you enable clustering on the data source, points that are close to one another are grouped together as a clustered point. You can use the point count of each cluster as the weight expression for the heat map. This can significantly reduce the number of points to be rendered. The point count of a cluster is stored in a `point_count` property of the point feature:
+>
+> ```kotlin
+> val layer = HeatMapLayer(dataSource,
+> heatmapWeight(get("point_count"))
+> )
+> ```
+>
+> If the clustering radius is only a few pixels, there would be a small visual difference in the rendering. A larger radius groups more points into each cluster, and improves the performance of the heat map. A sketch of enabling clustering on the data source follows below.
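Both tips above assume clustering has already been enabled on the data source. As a rough sketch of that setup, assuming the Android SDK's `cluster` and `clusterRadius` data source option helpers:

```java
//Create a data source that clusters nearby points before they're rendered.
DataSource dataSource = new DataSource(
    //Enable clustering of point data.
    cluster(true),

    //The pixel radius used to group points into a cluster.
    clusterRadius(15)
);
map.sources.add(dataSource);
```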
++
## Next steps

For more code examples to add to your maps, see the following articles:
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-heat-map-layer.md
# Add a heat map layer
-Heat maps, also known as point density maps, are a type of data visualization. They're used to represent the density of data using a range of colors and show the data "hot spots" on a map. Heat maps are a great way to render datasets with large number of points.
+Heat maps, also known as point density maps, are a type of data visualization. They're used to represent the density of data using a range of colors and show the data "hot spots" on a map. Heat maps are a great way to render datasets with a large number of points.
Rendering tens of thousands of points as symbols can cover most of the map area. This case likely results in many symbols overlapping, making it difficult to understand the data. However, visualizing this same dataset as a heat map makes it easy to see the density and the relative density of each data point.
Here's the complete running code sample of the preceding code.
The previous example customized the heat map by setting the radius and opacity options. The heat map layer provides several options for customization, including:

* `radius`: Defines a pixel radius in which to render each data point. You can set the radius as a fixed number or as an expression. By using an expression, you can scale the radius based on the zoom level, and represent a consistent spatial area on the map (for example, a 5-mile radius).
-* `color`: Specifies how the heat map is colorized. A color gradient is a common feature of heat maps. You can achieve the effect with an `interpolate` expression. You can also use a `step` expression for colorizing the heat map, breaking up the density visually into ranges that resemble a contour or radar style map. These color palettes define the colors from the minimum to the maximum density value.
+* `color`: Specifies how the heat map is colorized. A color gradient is a common feature of heat maps. You can achieve the effect with an `interpolate` expression. You can also use a `step` expression for colorizing the heat map, breaking up the density visually into ranges that resemble a contour or radar style map. These color palettes define the colors from the minimum to the maximum density value.
+
+   You specify color values for heat maps as an expression on the `heatmap-density` value. The color of the area where there's no data is defined at index 0 of the "Interpolation" expression, or the default color of a "Stepped" expression. You can use this value to define a background color. Often, this value is set to transparent, or a semi-transparent black.
- You specify color values for heat maps as an expression on the `heatmap-density` value. The color of area where there's no data is defined at index 0 of the "Interpolation" expression, or the default color of a "Stepped" expression. You can use this value to define a background color. Often, this value is set to transparent, or a semi-transparent black.
-
Here are examples of color expressions:

| Interpolation color expression | Stepped color expression |
|--|--|
- | \[<br/>&nbsp;&nbsp;&nbsp;&nbsp;'interpolate',<br/>&nbsp;&nbsp;&nbsp;&nbsp;\['linear'\],<br/>&nbsp;&nbsp;&nbsp;&nbsp;\['heatmap-density'\],<br/>&nbsp;&nbsp;&nbsp;&nbsp;0, 'transparent',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.01, 'purple',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.5, '#fb00fb',<br/>&nbsp;&nbsp;&nbsp;&nbsp;1, '#00c3ff'<br/>\] | \[<br/>&nbsp;&nbsp;&nbsp;&nbsp;'step',<br/>&nbsp;&nbsp;&nbsp;&nbsp;\['heatmap-density'\],<br/>&nbsp;&nbsp;&nbsp;&nbsp;'transparent',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.01, 'navy',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.25, 'green',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.50, 'yellow',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.75, 'red'<br/>\] |
+ | \[<br/>&nbsp;&nbsp;&nbsp;&nbsp;'interpolate',<br/>&nbsp;&nbsp;&nbsp;&nbsp;\['linear'\],<br/>&nbsp;&nbsp;&nbsp;&nbsp;\['heatmap-density'\],<br/>&nbsp;&nbsp;&nbsp;&nbsp;0, 'transparent',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.01, 'purple',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.5, '#fb00fb',<br/>&nbsp;&nbsp;&nbsp;&nbsp;1, '#00c3ff'<br/>\] | \[<br/>&nbsp;&nbsp;&nbsp;&nbsp;'step',<br/>&nbsp;&nbsp;&nbsp;&nbsp;\['heatmap-density'\],<br/>&nbsp;&nbsp;&nbsp;&nbsp;'transparent',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.01, 'navy',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.25, 'green',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.50, 'yellow',<br/>&nbsp;&nbsp;&nbsp;&nbsp;0.75, 'red'<br/>\] |
- `opacity`: Specifies how opaque or transparent the heat map layer is.
- `intensity`: Applies a multiplier to the weight of each data point to increase the overall intensity of the heatmap. It causes a difference in the weight of data points, making it easier to visualize.
-- `weight`: By default, all data points have a weight of 1, and are weighted equally. The weight option acts as a multiplier, and you can set it as a number or an expression. If a number is set as the weight, it's the equivalence of placing each data point on the map twice. For instance, if the weight is 2, then the density doubles. Setting the weight option to a number renders the heat map in a similar way to using the intensity option.
+- `weight`: By default, all data points have a weight of 1, and are weighted equally. The weight option acts as a multiplier, and you can set it as a number or an expression. If a number is set as the weight, it's the equivalent of placing each data point on the map twice. For instance, if the weight is 2, then the density doubles. Setting the weight option to a number renders the heat map in a similar way to using the intensity option.
However, if you use an expression, the weight of each data point can be based on the properties of each data point. For example, suppose each data point represents an earthquake. The magnitude value is an important metric for each earthquake data point. Earthquakes happen all the time, but most have a low magnitude, and aren't noticed. Use the magnitude value in an expression to assign the weight to each data point. By using the magnitude value to assign the weight, you get a better representation of the significance of earthquakes within the heat map, as shown in the sketch after this list.

- `source` and `source-layer`: Enable you to update the data source.
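As a sketch of that idea in the Android SDK's expression style used elsewhere in this digest, and assuming the USGS earthquake feed, where magnitude is stored in each feature's `mag` property (the Web SDK equivalent is `weight: ['get', 'mag']`):

```java
//Weight each earthquake by its magnitude so that significant quakes
//contribute more to the rendered density than the many small ones.
HeatMapLayer layer = new HeatMapLayer(source,
    heatmapWeight(get("mag"))
);
```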
Here's a tool to test out the different heat map layer options.
## Consistent zoomable heat map
-By default, the radii of data points rendered in the heat map layer have a fixed pixel radius for all zoom levels. As you zoom the map, the data aggregates together and the heat map layer looks different.
+By default, the radii of data points rendered in the heat map layer have a fixed pixel radius for all zoom levels. As you zoom the map, the data aggregates together and the heat map layer looks different.
-Use a `zoom` expression to scale the radius for each zoom level, such that each data point covers the same physical area of the map. This expression makes the heat map layer look more static and consistent. Each zoom level of the map has twice as many pixels vertically and horizontally as the previous zoom level.
+Use a `zoom` expression to scale the radius for each zoom level, such that each data point covers the same physical area of the map. This expression makes the heat map layer look more static and consistent. Each zoom level of the map has twice as many pixels vertically and horizontally as the previous zoom level.
Scaling the radius so that it doubles with each zoom level creates a heat map that looks consistent on all zoom levels. To apply this scaling, use `zoom` with a base 2 `exponential interpolation` expression, with the pixel radius set for the minimum zoom level and a scaled radius for the maximum zoom level calculated as `2 * Math.pow(2, maxZoom - minZoom)` as shown in the following sample. Zoom the map to see how the heat map scales with the zoom level.
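A minimal sketch of such a radius expression, written with the Android SDK's expression builders used elsewhere in this digest, and assuming a 2 pixel radius at a minimum zoom of 10 and a maximum zoom of 24:

```java
//Scale the radius so that it doubles with each zoom level between zoom 10 and 24,
//keeping each data point's footprint geospatially consistent.
HeatMapLayer layer = new HeatMapLayer(source,
    heatmapRadius(
        interpolate(
            exponential(2),
            zoom(),
            //For all zoom levels 10 or lower, use a 2 pixel radius.
            stop(10, 2f),
            //At zoom 24, use a radius of 2 * 2^(24 - 10) pixels.
            stop(24, 2 * Math.pow(2, 24 - 10))
        )
    )
);
```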
Scaling the radius so that it doubles with each zoom level creates a heat map th
</iframe>

> [!TIP]
-> When you enable clustering on the data source, points that are close to one another are grouped together as a clustered point. You can use the point count of each cluster as the weight expression for the heat map. This can significantly reduce the number of points to be rendered. The point count of a cluster is stored in a `point_count` property of the point feature:
+> When you enable clustering on the data source, points that are close to one another are grouped together as a clustered point. You can use the point count of each cluster as the weight expression for the heat map. This can significantly reduce the number of points to be rendered. The point count of a cluster is stored in a `point_count` property of the point feature:
> ```JavaScript
> var layer = new atlas.layer.HeatMapLayer(datasource, null, {
>     weight: ['get', 'point_count']
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-extruded-polygon-android.md
A choropleth map can be rendered using the polygon extrusion layer. Set the `hei
```java
//Create a data source and add it to the map.
DataSource source = new DataSource();
-map.sources.add(source);
//Import the geojson data and add it to the data source.
-Utils.importData("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/US_States_Population_Density.json", this, (String result) -> {
- //Parse the data as a GeoJSON Feature Collection.
- FeatureCollection fc = FeatureCollection.fromJson(result);
+source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/US_States_Population_Density.json");
- //Add the feature collection to the data source.
- source.add(fc);
-});
+//Add data source to the map.
+map.sources.add(source);
//Create and add a polygon extrusion layer to the map below the labels so that they are still readable.
PolygonExtrusionLayer layer = new PolygonExtrusionLayer(source,
map.layers.add(layer, "labels");
```kotlin
//Create a data source and add it to the map.
val source = DataSource()
-map.sources.add(source)
//Import the geojson data and add it to the data source.
-Utils.importData("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/US_States_Population_Density.json",
- this
-) { result: String? ->
- //Parse the data as a GeoJSON Feature Collection.
- val fc = FeatureCollection.fromJson(result!!)
-
- //Add the feature collection to the data source.
- source.add(fc)
-}
+source.importDataFromUrl("https://azuremapscodesamples.azurewebsites.net/Common/data/geojson/US_States_Population_Density.json")
+
+//Add data source to the map.
+map.sources.add(source)
//Create and add a polygon extrusion layer to the map below the labels so that they are still readable.
val layer = PolygonExtrusionLayer(
azure-maps Migrate From Google Maps Android App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps-android-app.md
To display a map using the Azure Maps SDK for Android, the following steps need
3. Update your dependencies block. Add a new implementation dependency line for the latest Azure Maps Android SDK:

    ```gradle
- implementation "com.microsoft.azure.maps:mapcontrol:0.7"
+ implementation "com.azure.android:azure-maps-control:1.0.0"
    ```

    > [!Note]
To display a map using the Azure Maps SDK for Android, the following steps need
android:layout_height="match_parent" >
- <com.microsoft.azure.maps.mapcontrol.MapControl
+ <com.azure.android.maps.control.MapControl
android:id="@+id/mapcontrol" android:layout_width="match_parent" android:layout_height="match_parent"
To display a map using the Azure Maps SDK for Android, the following steps need
package com.example.myapplication;

import androidx.appcompat.app.AppCompatActivity;
- import com.microsoft.azure.maps.mapcontrol.AzureMaps;
- import com.microsoft.azure.maps.mapcontrol.MapControl;
- import com.microsoft.azure.maps.mapcontrol.layer.SymbolLayer;
- import com.microsoft.azure.maps.mapcontrol.options.MapStyle;
- import com.microsoft.azure.maps.mapcontrol.source.DataSource;
+ import com.azure.android.maps.control.AzureMaps;
+ import com.azure.android.maps.control.MapControl;
+ import com.azure.android.maps.control.layer.SymbolLayer;
+ import com.azure.android.maps.control.options.MapStyle;
+ import com.azure.android.maps.control.source.DataSource;
public class MainActivity extends AppCompatActivity {
To display a map using the Azure Maps SDK for Android, the following steps need
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
- import com.microsoft.azure.maps.mapcontrol.AzureMap
- import com.microsoft.azure.maps.mapcontrol.AzureMaps
- import com.microsoft.azure.maps.mapcontrol.MapControl
- import com.microsoft.azure.maps.mapcontrol.events.OnReady
+ import com.azure.android.maps.control.AzureMap
+ import com.azure.android.maps.control.AzureMaps
+ import com.azure.android.maps.control.MapControl
+ import com.azure.android.maps.control.events.OnReady
class MainActivity : AppCompatActivity() {
companion object {
The second option is to pass the language and view information to the map control XML code.

```xml
-<com.microsoft.azure.maps.mapcontrol.MapControl
+<com.azure.android.maps.control.MapControl
android:id="@+id/myMap" android:layout_width="match_parent" android:layout_height="match_parent"
- app:mapcontrol_language="fr-FR"
- app:mapcontrol_view="Auto"
+ app:azure_maps_language="fr-FR"
+ app:azure_maps_view="Auto"
/>
```
As noted previously, to achieve the same viewable area in Azure Maps subtract th
The initial map view can be set in XML attributes on the map control.

```xml
-<com.microsoft.azure.maps.mapcontrol.MapControl
+<com.azure.android.maps.control.MapControl
android:id="@+id/myMap" android:layout_width="match_parent" android:layout_height="match_parent"
- app:mapcontrol_cameraLat="35.0272"
- app:mapcontrol_cameraLng="-111.0225"
- app:mapcontrol_zoom="14"
- app:mapcontrol_style="satellite"
+ app:azure_maps_cameraLat="35.0272"
+ app:azure_maps_cameraLng="-111.0225"
+ app:azure_maps_zoom="14"
+ app:azure_maps_style="satellite"
/>
```
No resources to be cleaned up.
## Next steps
-Learn more about migrating Azure Maps:
+Learn more about the Azure Maps Android SDK:
> [!div class="nextstepaction"]
-> [Migrate an Android app](migrate-from-google-maps-android-app.md)
+> [Getting started with Azure Maps Android SDK](how-to-use-android-map-control-library.md)
azure-maps Power Bi Visual Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/power-bi-visual-getting-started.md
The Azure Maps visual for Power BI provides a rich set of data visualizations fo
## What is sent to Azure?
-The Azure Maps visual connects to cloud service hosted in Azure to retrieve location data such as map images and coordinates that are used to create the map visualization.
+The Azure Maps visual connects to a cloud service hosted in Azure to retrieve location data such as map images and coordinates that are used to create the map visualization.
-- Details about the area the map is focused on are sent to Azure to retrieve images needed to render the map canvas (also known as map tiles).
-- Data in the Location, Latitude, and Longitude buckets may be sent to Azure to retrieve map coordinates (a process called geocoding).
-- Telemetry data may be collected on the health of the visual (i.e. crash reports), if the telemetry option in Power BI is enabled.
+- Details about the area the map is focused on are sent to Azure to retrieve images needed to render the map canvas (also known as map tiles).
+- Data in the Location, Latitude, and Longitude buckets may be sent to Azure to retrieve map coordinates (a process called geocoding).
+- Telemetry data may be collected on the health of the visual (for example, crash reports), if the telemetry option in Power BI is enabled.
Other than the scenarios described above, no other data overlaid on the map is sent to the Azure Maps servers. All rendering of data happens locally within the client.
-You, or your administrator, may need to update your firewall to allow access to the Azure Maps platform which uses the following URL.
+You, or your administrator, may need to update your firewall to allow access to the Azure Maps platform, which uses the following URL.
> `https://atlas.microsoft.com`
To learn more, about privacy and terms of use related to the Azure Maps visual s
There are a few considerations and requirements for the **Azure Maps** visual:
-- The **Azure Maps** visual (Preview) must be enabled in Power BI Desktop. To enable **Azure Maps** visual, select **File** &gt; **Options and Settings** &gt; **Options** &gt; **Preview features**, then select the **Azure Maps Visual** checkbox. If the Azure Maps visual is not available after doing this, it's likely that a tenant admin switch in the Admin Portal needs to be enabled.
-- The data set must have fields that contain **latitude** and **longitude** information. Geocoding of location fields will be added in a future update.
-- The built-in legend control for Power BI does not currently appear in this preview. It will be added in a future update.
+- The **Azure Maps** visual (Preview) must be enabled in Power BI Desktop. To enable **Azure Maps** visual, select **File** &gt; **Options and Settings** &gt; **Options** &gt; **Preview features**, then select the **Azure Maps Visual** checkbox. If the Azure Maps visual is not available after enabling this setting, it's likely that a tenant admin switch in the Admin Portal needs to be enabled.
+- The data set must have fields that contain **latitude** and **longitude** information.
## Use the Azure Maps visual (Preview)
Once the **Azure Maps** visual is enabled, select the **Azure Maps** icon from
![Azure Maps visual button on the Visualizations pane](media/power-bi-visual/azure-maps-in-visualizations-pane.png)
-Power BI creates an empty Azure Maps visual design canvas. While in preview, an additional disclaimer is displayed.
+Power BI creates an empty Azure Maps visual design canvas. While in preview, another disclaimer is displayed.
![Power BI desktop with the Azure Maps visual loaded in its initial state](media/power-bi-visual/visual-initial-load.png)

Take the following steps to load the Azure Maps visual:
-1. In the **Fields** pane, drag data fields that contain latitude and longitude coordinate information into the **Latitude** and/or **Longitude** buckets. This is the minimal data needed to load the Azure Maps visual.
-
+1. In the **Fields** pane, drag data fields that contain latitude and longitude coordinate information into the **Latitude** and/or **Longitude** buckets. This is the minimal data needed to load the Azure Maps visual.
+
   > [!div class="mx-imgBorder"]
   > ![Azure Maps visual displaying points as bubbles on the map after latitude and longitude fields provided](media/power-bi-visual/bubble-layer.png)
-2. To color the data based on categorization, drag a categorical field into the **Legend** bucket of the **Fields** pane. In this example, we're using the **AdminDistrict** column (also known as state or province).
-
+2. To color the data based on categorization, drag a categorical field into the **Legend** bucket of the **Fields** pane. In this example, we're using the **AdminDistrict** column (also known as state or province).
+
   > [!div class="mx-imgBorder"]
   > ![Azure Maps visual displaying points as colored bubbles on the map after legend field provided](media/power-bi-visual/bubble-layer-with-legend-color.png)

   > [!NOTE]
- > The built-in legend control for Power BI does not currently appear in this preview. It will be added in a future update.
+ > The built-in legend control for Power BI does not currently appear in this preview.
+
+3. To scale the data relatively, drag a measure into the **Size** bucket of the **Fields** pane. In this example, we're using the **Sales** column.
-3. To scale the data relatively, drag a measure into the **Size** bucket of the **Fields** pane. In this example, we're using **Sales** column.
-
   > [!div class="mx-imgBorder"]
   > ![Azure Maps visual displaying points as colored and scaled bubbles on the map after size field provided.](media/power-bi-visual/bubble-layer-with-legend-color-and-size.png)
-4. Use the options in the **Format** pane to customize how data is rendered. The following image is the same map as above, but with the bubble layers fill transparency option set to 50% and the high-contrast outline option enabled.
-
+4. Use the options in the **Format** pane to customize how data is rendered. The following image is the same map as above, but with the bubble layer's fill transparency option set to 50% and the high-contrast outline option enabled.
+
   > [!div class="mx-imgBorder"]
   > ![Azure Maps visual displaying points as bubbles on the map with a custom style](media/power-bi-visual/bubble-layer-styled.png)
The following data buckets are available in the **Fields** pane of the Azure Map
|--|--|
| Latitude | The field used to specify the latitude value of the data points. Latitude values should be between -90 and 90 in decimal degrees format. |
| Longitude | The field used to specify the longitude value of the data points. Longitude values should be between -180 and 180 in decimal degrees format. |
-| Legend | The field used to categorize the data and assign a unique color for data points in each category. When this bucket is filled, a **Data colors** section will appear in the **Format** pane which allows adjustments to the colors. |
+| Legend | The field used to categorize the data and assign a unique color for data points in each category. When this bucket is filled, a **Data colors** section will appear in the **Format** pane that allows adjustments to the colors. |
| Size | The measure used for relative sizing of data points on the map. |
-| Tooltips | Additional data fields that are displayed in tooltips when shapes are hovered. |
+| Tooltips | Other data fields to display in tooltips when shapes are hovered. |
## Map settings
The **Map settings** section of the Format pane provide options for customizing
| Setting | Description |
|--|--|
-| Auto zoom | Automatically zooms the map into the data loaded through the **Fields** pane of the visual. As the data changes, the map will update its position accordingly. When the slider is in the **Off** position, additional map view settings are displayed for the default map view. |
+| Auto zoom | Automatically zooms the map into the data loaded through the **Fields** pane of the visual. As the data changes, the map will update its position accordingly. When the slider is in the **Off** position, more map view settings are displayed for the default map view. |
| World wrap | Allows the user to pan the map horizontally infinitely. |
| Style picker | Adds a button to the map that allows the report readers to change the style of the map. |
-| Navigation controls | Adds buttons to the map as another method to allow the report readers to zoom, rotate, and change the pitch of the map. For more information, see this document on [Navigating the map](map-accessibility.md#navigating-the-map) for details on all the different ways users can navigate the map. |
-| Map style | The style of the map. For more information, see this document for more information on [supported map styles](supported-map-styles.md). |
+| Navigation controls | Adds buttons to the map as another method to allow the report readers to zoom, rotate, and change the pitch of the map. See this document on [Navigating the map](map-accessibility.md#navigating-the-map) for details on all the different ways users can navigate the map. |
+| Map style | The style of the map. See the [supported map styles](supported-map-styles.md) document for more information. |
+| Selection control | Adds a button that allows the user to choose between different modes to select data on the map: circle, rectangle, polygon (lasso), or travel time or distance. To complete a polygon drawing, click on the first point, double-click the map on the last point, or press the `c` key. |
### Map view settings
The Azure Maps visual is available in the following services and applications:
| Power BI Embedded | No | | Power BI service embedding (PowerBI.com) | Yes |
-Support for additional Power BI services/apps will be added in future updates.
-
**Where is Azure Maps available?**

Azure Maps is currently available in all countries and regions except the following:
Customize the visual:
> [Tips and tricks for color formatting in Power BI](/power-bi/visuals/service-tips-and-tricks-for-color-formatting)

> [!div class="nextstepaction"]
-> [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)
+> [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/quick-android-map.md
The next step in building your application is to install the Azure Maps Android
3. Update your dependencies block and add a new implementation dependency line for the latest Azure Maps Android SDK:

    ```gradle
- implementation "com.microsoft.azure.maps:mapcontrol:0.7"
+ implementation "com.azure.android:azure-maps-control:1.0.0"
    ```

    > [!Note]
The next step in building your application is to install the Azure Maps Android
3. Add a map fragment to the main activity (res \> layout \> activity\_main.xml):

    ```xml
- <com.microsoft.azure.maps.mapcontrol.MapControl
+ <com.azure.android.maps.control.MapControl
android:id="@+id/mapcontrol" android:layout_width="match_parent" android:layout_height="match_parent"
The next step in building your application is to install the Azure Maps Android
package com.example.myapplication;

import androidx.appcompat.app.AppCompatActivity;
- import com.microsoft.azure.maps.mapcontrol.AzureMaps;
- import com.microsoft.azure.maps.mapcontrol.MapControl;
- import com.microsoft.azure.maps.mapcontrol.layer.SymbolLayer;
- import com.microsoft.azure.maps.mapcontrol.options.MapStyle;
- import com.microsoft.azure.maps.mapcontrol.source.DataSource;
+ import com.azure.android.maps.control.AzureMaps;
+ import com.azure.android.maps.control.MapControl;
+ import com.azure.android.maps.control.layer.SymbolLayer;
+ import com.azure.android.maps.control.options.MapStyle;
+ import com.azure.android.maps.control.source.DataSource;
public class MainActivity extends AppCompatActivity {
The next step in building your application is to install the Azure Maps Android
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
- import com.microsoft.azure.maps.mapcontrol.AzureMap
- import com.microsoft.azure.maps.mapcontrol.AzureMaps
- import com.microsoft.azure.maps.mapcontrol.MapControl
- import com.microsoft.azure.maps.mapcontrol.events.OnReady
+ import com.azure.android.maps.control.AzureMap
+ import com.azure.android.maps.control.AzureMaps
+ import com.azure.android.maps.control.MapControl
+ import com.azure.android.maps.control.events.OnReady
class MainActivity : AppCompatActivity() {
azure-maps Set Android Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/set-android-map-styles.md
Be sure to complete the steps in the [Quickstart: Create an Android app](quick-a
You can set a map style in the layout file for your activity class when adding the map control. The following code sets the center location, zoom level, and map style.

```XML
-<com.microsoft.azure.maps.mapcontrol.MapControl
+<com.azure.android.maps.control.MapControl
android:id="@+id/mapcontrol" android:layout_width="match_parent" android:layout_height="match_parent"
- app:mapcontrol_centerLat="47.602806"
- app:mapcontrol_centerLng="-122.329330"
- app:mapcontrol_zoom="12"
- app:mapcontrol_style="grayscale_dark"
+ app:azure_maps_centerLat="47.602806"
+ app:azure_maps_centerLng="-122.329330"
+ app:azure_maps_zoom="12"
+ app:azure_maps_style="grayscale_dark"
/>
```
map.setCamera(
The aspect ratio of a bounding box may not be the same as the aspect ratio of the map. As such, the map will often show the full bounding box area, but will often be tight only vertically or horizontally.
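As a sketch, the camera can be focused on a bounding box with some padding; `BoundingBox.fromLngLats` takes west, south, east, and north values, and the coordinates below are arbitrary example values around Seattle:

```java
//Focus the camera on a bounding box, with 20 pixels of padding
//between the bounded area and the edges of the map.
map.setCamera(
    bounds(BoundingBox.fromLngLats(-122.4594, 47.4333, -122.21866, 47.75758)),
    padding(20)
);
```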
+### Animate map view
+
+When setting the camera options of the map, animation options can also be used to create a transition between the current map view and the next. These options specify the type of animation and the duration it should take to move the camera.
+
+| Option | Description |
+|--|-|
+| `animationDuration(Integer durationMs)` | Specifies how long the camera will animate between the views in milliseconds (ms). |
+| `animationType(AnimationType animationType)` | Specifies the type of animation transition to perform.<br/><br/> - `JUMP` - an immediate change.<br/> - `EASE` - gradual change of the camera's settings.<br/> - `FLY` - gradual change of the camera's settings following an arc resembling flight. |
+
+The following code shows how to animate the map view using a `FLY` animation over a duration of three seconds.
++
+```java
+map.setCamera(
+ center(Point.fromLngLat(-122.33, 47.6)),
+ zoom(12),
+ animationType(AnimationType.FLY),
+ animationDuration(3000)
+);
+```
+++
+```kotlin
+map.setCamera(
+ center(Point.fromLngLat(-122.33, 47.6)),
+ zoom(12.0),
+ AnimationOptions.animationType(AnimationType.FLY),
+ AnimationOptions.animationDuration(3000)
+)
+```
++
+The following animation demonstrates the above code moving the map view from New York to Seattle.
+
+![Map animating the camera from New York to Seattle](media/set-android-map-styles/android-animate-camera.gif)
+
## Next steps

See the following articles for more code samples to add to your maps:
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/supported-map-styles.md
This map style is a hybrid of roads and labels overlaid on top of satellite and
**grayscale light** is a light version of the road map style.
-![grayscale light map style](./media/supported-map-styles/grayscale-light.png)
+![grayscale light map style](./media/supported-map-styles/grayscale-light.jpg)
**Applicable APIs:**

* Web SDK map control
This map style is a hybrid of roads and labels overlaid on top of satellite and
**Applicable APIs:**

* Web SDK map control
+* Android map control
* Power BI visual
+## high_contrast_light
+
+**high_contrast_light** is a light map style with a higher contrast than the other styles.
+
+![high contrast light map style](./media/supported-map-styles/high-contrast-light.jpg)
+
+**Applicable APIs:**
+
+* Web SDK map control
+* Android map control
+* Power BI visual
+
+## Map style accessibility
+
+The interactive Azure Maps map controls use the vector tiles in the map styles to power the screen reader's description of the area the map is displaying. Several map styles are also designed to be fully accessible when it comes to color contrast. The following table provides details on the accessibility features supported by each map style.
+
+| Map style | Color contrast | Screen reader | Notes |
+||-||-|
+| `blank` | N/A | No | A blank canvas useful for developers who want to use their own tiles as the base map, or want to view their data without any background. The screen reader will not rely on the vector tiles for descriptions. |
+| `blank_accessible` | N/A | Yes | Under the hood this map style continues to load the vector tiles used to render the map, but makes that data transparent. This way the data will still be loaded, and can be used to power the screen reader. |
+| `grayscale_dark` | Partial | Yes | This map style is primarily designed for business intelligence scenarios, but is also useful for overlaying colorful layers such as weather radar imagery. |
+| `grayscale_light` | Partial | Yes | This map style is primarily designed for business intelligence scenarios. |
+| `high_contrast_dark` | Yes | Yes | Fully accessible map style for users in high contrast mode with a dark setting. When the map loads, high contrast settings will automatically be detected. |
+| `high_contrast_light` | Yes | Yes | Fully accessible map style for users in high contrast mode with a light setting. When the map loads, high contrast settings will automatically be detected. |
+| `night` | Partial | Yes | This style is designed for when the user is in low light conditions and you don't want to overwhelm their senses with a bright map. |
+| `road` | Partial | Yes | This is the main colorful road map style in Azure Maps. Due to the number of different colors and possible overlapping color combinations, it's nearly impossible to make it 100% accessible. That said, this map style goes through regular accessibility testing and is improved as needed to make labels clearer to read. |
+| `road_shaded_relief` | Partial | Yes | This is nearly the same as the main road map style, but has an added tile layer in the background that adds shaded relief of mountains and land cover coloring when zoomed out at higher levels. |
+| `satellite` | N/A | Yes | Purely satellite and aerial imagery, with no labels or road lines. The vector tiles are loaded behind the scenes to power the screen reader and to make for a smoother transition when switching to `satellite_with_roads`. |
+| `satellite_with_roads` | No | Yes | Satellite and aerial imagery, with labels and road lines overlaid. On a global scale, there is an unlimited number of color combinations that may occur between the overlaid data and the imagery. The focus is on making labels readable in the most common scenarios; however, in some places the color contrast with the background imagery may make labels difficult to read. |
+
## Next steps

Learn about how to set a map style in Azure Maps:
-[Choose a map style](./choose-map-style.md)
+> [!div class="nextstepaction"]
+> [Choose a map style](./choose-map-style.md)
azure-maps Tutorial Load Geojson File Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/tutorial-load-geojson-file-android.md
This tutorial guides you through the process of importing a GeoJSON file of loca
> * Add Azure Maps to an Android application.
> * Create a data source and load in a GeoJSON file from a local file or the web.
> * Display the data on the map.
+> * Interact with the data on the map to view its details.
## Prerequisites
This tutorial guides you through the process of importing a GeoJSON file of loca
### Import GeoJSON data from web or assets folder
-Most GeoJSON files wrap all data within a `FeatureCollection`. With this in mind, if the GeoJSON files are loaded into the application as a string, they can be passed into the feature collection's static `fromJson` method, which will deserialize the string into a GeoJSON `FeatureCollection` object that can be added to the map.
+Most GeoJSON files wrap all data within a `FeatureCollection`. With this scenario in mind, if the GeoJSON files are loaded into the application as a string, they can be passed into the feature collection's static `fromJson` method, which will deserialize the string into a GeoJSON `FeatureCollection` object that can be added to the map.
The following steps show you how to import a GeoJSON file into the application and deserialize it as a GeoJSON `FeatureCollection` object.
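As a minimal sketch of that deserialization path, assuming `geojsonString` already holds the contents of a GeoJSON file read in as a string:

```java
//Parse the raw GeoJSON string as a Feature Collection.
FeatureCollection fc = FeatureCollection.fromJson(geojsonString);

//Add the features to a data source so that layers can render them.
DataSource source = new DataSource();
source.add(fc);
map.sources.add(source);
```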
The following steps show you how to import a GeoJSON file into the application a
::: zone pivot="programming-language-java-android"
-4. Create a new file called **Utils.java** and add the following code to that file. This code provides a static method called `importData` that asynchronously imports a file from the `assets` folder of the application or from the web using a URL as a string and returns it back to the UI thread using a simple callback method.
+4. Go into the **MainActivity.java** file and add the following code inside the callback for the `mapControl.onReady` event, inside the `onCreate` method. This code loads the **SamplePoiDataSet.json** file from the assets folder into a data source using the `importDataFromUrl` method and then adds it to the map.
- ```java
- //Modify the package name as needed to align with your application.
- package com.example.myapplication;
-
- import android.content.Context;
- import android.os.Handler;
- import android.os.Looper;
- import android.webkit.URLUtil;
-
- import java.io.BufferedReader;
- import java.io.IOException;
- import java.io.InputStream;
- import java.io.InputStreamReader;
- import java.net.URL;
- import java.util.concurrent.ExecutorService;
- import java.util.concurrent.Executors;
-
- import javax.net.ssl.HttpsURLConnection;
-
- public class Utils {
-
- interface SimpleCallback {
- void notify(String result);
- }
-
- /**
- * Imports data from a web url or asset file name and returns it to a callback.
- * @param urlOrFileName A web url or asset file name that points to data to load.
- * @param context The context of the app.
- * @param callback The callback function to return the data to.
- */
- public static void importData(String urlOrFileName, Context context, SimpleCallback callback){
- importData(urlOrFileName, context, callback, null);
- }
-
- /**
- * Imports data from a web url or asset file name and returns it to a callback.
- * @param urlOrFileName A web url or asset file name that points to data to load.
- * @param context The context of the app.
- * @param callback The callback function to return the data to.
- * @param error A callback function to return errors to.
- */
- public static void importData(String urlOrFileName, Context context, SimpleCallback callback, SimpleCallback error){
- if(urlOrFileName != null && callback != null) {
- ExecutorService executor = Executors.newSingleThreadExecutor();
- Handler handler = new Handler(Looper.getMainLooper());
-
- executor.execute(() -> {
- String data = null;
-
- try {
-
- if(URLUtil.isNetworkUrl(urlOrFileName)){
- data = importFromWeb(urlOrFileName);
- } else {
- //Assume file is in assets folder.
- data = importFromAssets(context, urlOrFileName);
- }
-
- final String result = data;
-
- handler.post(() -> {
- //Ensure the resulting data string is not null or empty.
- if (result != null && !result.isEmpty()) {
- callback.notify(result);
- } else {
- error.notify("No data imported.");
- }
- });
- } catch(Exception e) {
- if(error != null){
- error.notify(e.getMessage());
- }
- }
- });
- }
- }
-
- /**
- * Imports data from an assets file as a string.
- * @param context The context of the app.
- * @param fileName The asset file name.
- * @return
- * @throws IOException
- */
- private static String importFromAssets(Context context, String fileName) throws IOException {
- InputStream stream = null;
-
- try {
- stream = context.getAssets().open(fileName);
-
- if(stream != null) {
- return readStreamAsString(stream);
- }
- } catch (Exception e) {
- e.printStackTrace();
- } finally {
- // Close Stream and disconnect HTTPS connection.
- if (stream != null) {
- stream.close();
- }
- }
-
- return null;
- }
-
- /**
- * Imports data from the web as a string.
- * @param url URL to the data.
- * @return
- * @throws IOException
- */
- private static String importFromWeb(String url) throws IOException {
- InputStream stream = null;
- HttpsURLConnection connection = null;
- String result = null;
-
- try {
- connection = (HttpsURLConnection) new URL(url).openConnection();
-
- //For this use case, set HTTP method to GET.
- connection.setRequestMethod("GET");
-
- //Open communications link (network traffic occurs here).
- connection.connect();
-
- int responseCode = connection.getResponseCode();
- if (responseCode != HttpsURLConnection.HTTP_OK) {
- throw new IOException("HTTP error code: " + responseCode);
- }
-
- //Retrieve the response body as an InputStream.
- stream = connection.getInputStream();
-
- if (stream != null) {
- return readStreamAsString(stream);
- }
- } catch (Exception e) {
- e.printStackTrace();
- } finally {
- // Close Stream and disconnect HTTPS connection.
- if (stream != null) {
- stream.close();
- }
- if (connection != null) {
- connection.disconnect();
- }
- }
-
- return result;
- }
-
- /**
- * Reads an input stream as a string.
- * @param stream Stream to convert.
- * @return
- * @throws IOException
- */
- private static String readStreamAsString(InputStream stream) throws IOException {
- //Convert the contents of an InputStream to a String.
- BufferedReader in = new BufferedReader(new InputStreamReader(stream, "UTF-8"));
-
- String inputLine;
- StringBuffer response = new StringBuffer();
-
- while ((inputLine = in.readLine()) != null) {
- response.append(inputLine);
- }
-
- in.close();
-
- return response.toString();
- }
- }
- ```
+```java
+//Create a data source and add it to the map.
+DataSource source = new DataSource();
-5. Go into the **MainActivity.java** file and add the following code inside the callback for the `mapControl.onReady` event, this is located inside the `onCreate` method. This code uses the import utility to read in the **SamplePoiDataSet.json** file as a string, then deserializes it as a feature collection using the static `fromJson` method of the `FeatureCollection` class. This code also calculates the bounding box area for all the data in the feature collection and uses this to set the camera of the map to focus in on the data.
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("asset://SamplePoiDataSet.json");
- ```java
- //Create a data source and add it to the map.
- DataSource source = new DataSource();
- map.sources.add(source);
-
- //Import the GeoJSON data and add it to the data source.
- Utils.importData("SamplePoiDataSet.json",
- this,
- (String result) -> {
- //Parse the data as a GeoJSON Feature Collection.
- FeatureCollection fc = FeatureCollection.fromJson(result);
-
- //Add the feature collection to the data source.
- source.add(fc);
-
- //Optionally, update the maps camera to focus in on the data.
-
- //Calculate the bounding box of all the data in the Feature Collection.
- BoundingBox bbox = MapMath.fromData(fc);
-
- //Update the maps camera so it is focused on the data.
- map.setCamera(
- bounds(bbox),
+//Add data source to the map.
+map.sources.add(source);
+```
+++
+4. Go into the **MainActivity.kt** file and add the following code inside the callback for the `mapControl.onReady` event, inside the `onCreate` method. This code loads the **SamplePoiDataSet.json** file from the assets folder into a data source using the `importDataFromUrl` method and then adds it to the map.
+
+```kotlin
+//Create a data source and add it to the map.
+val source = DataSource()
+
+//Import the geojson data and add it to the data source.
+source.importDataFromUrl("asset://SamplePoiDataSet.json")
+
+//Add data source to the map.
+map.sources.add(source)
+```
- //Padding added to account for pixel size of rendered points.
- padding(20)
- );
- });
- ```
+
+5. With the code to load the GeoJSON data into a data source in place, we now need to specify how that data should be displayed on the map. There are several different rendering layers for point data; [Bubble layer](map-add-bubble-layer-android.md), [Symbol layer](how-to-add-symbol-to-android-map.md), and [Heat map layer](map-add-heat-map-layer-android.md) are the most commonly used layers. Add the following code to render the data in a bubble layer in the callback for the `mapControl.onReady` event after the code for importing the data.
-6. Using the code to load the GeoJSON data a data source, we now need to specify how that data should be displayed on the map. There are several different rendering layers for point data; [Bubble layer](map-add-bubble-layer-android.md), [Symbol layer](how-to-add-symbol-to-android-map.md), and [Heat map layer](map-add-heat-map-layer-android.md) are the most commonly used layers. Add the following code to render the data in a bubble layer in the callback for the `mapControl.onReady` event after the code for importing the data.
- ```java
- //Create a layer and add it to the map.
- BubbleLayer layer = new BubbleLayer(source);
- map.layers.add(layer);
- ```
+```java
+//Create a layer and add it to the map.
+BubbleLayer layer = new BubbleLayer(source);
+map.layers.add(layer);
+```
::: zone-end

::: zone pivot="programming-language-kotlin"
-4. Create a new file called **Utils.kt** and add the following code to that file. This code provides a static method called `importData` that asynchronously imports a file from the `assets` folder of the application or from the web using a URL as a string and returns it back to the UI thread using a simple callback method.
+```kotlin
+//Create a layer and add it to the map.
+val layer = BubbleLayer(source)
+map.layers.add(layer)
+```
- ```kotlin
- //Modify the package name as needed to align with your application.
- package com.example.myapplication;
- import android.content.Context
- import android.os.Handler
- import android.os.Looper
- import android.webkit.URLUtil
- import java.net.URL
- import java.util.concurrent.ExecutorService
- import java.util.concurrent.Executors
-
- class Utils {
- companion object {
-
- /**
- * Imports data from a web url or asset file name and returns it to a callback.
- * @param urlOrFileName A web url or asset file name that points to data to load.
- * @param context The context of the app.
- * @param callback The callback function to return the data to.
- */
- fun importData(urlOrFileName: String?, context: Context, callback: (String?) -> Unit) {
- importData(urlOrFileName, context, callback, null)
- }
-
- /**
- * Imports data from a web url or asset file name and returns it to a callback.
- * @param urlOrFileName A web url or asset file name that points to data to load.
- * @param context The context of the app.
- * @param callback The callback function to return the data to.
- * @param error A callback function to return errors to.
- */
- public fun importData(urlOrFileName: String?, context: Context, callback: (String?) -> Unit, error: ((String?) -> Unit)?) {
- if (urlOrFileName != null && callback != null) {
- val executor: ExecutorService = Executors.newSingleThreadExecutor()
- val handler = Handler(Looper.getMainLooper())
- executor.execute {
- var data: String? = null
-
- try {
- data = if (URLUtil.isNetworkUrl(urlOrFileName)) {
- URL(urlOrFileName).readText()
- } else { //Assume file is in assets folder.
- context.assets.open(urlOrFileName).bufferedReader().use{
- it.readText()
- }
- }
-
- handler.post {
- //Ensure the resulting data string is not null or empty.
- if (data != null && !data.isEmpty()) {
- callback(data)
- } else {
- error!!("No data imported.")
- }
- }
- } catch (e: Exception) {
- error!!(e.message)
- }
- }
- }
- }
- }
- }
- ```
-
-5. Go into the **MainActivity.kt** file and add the following code inside the callback for the `mapControl.onReady` event, this is located inside the `onCreate` method. This code uses the import utility to read in the **SamplePoiDataSet.json** file as a string, then deserializes it as a feature collection using the static `fromJson` method of the `FeatureCollection` class. This code also calculates the bounding box area for all the data in the feature collection and uses this to set the camera of the map to focus in on the data.
-
- ```kotlin
- //Create a data source and add it to the map.
- DataSource source = new DataSource();
- map.sources.add(source);
-
- //Import the GeoJSON data and add it to the data source.
- Utils.importData("SamplePoiDataSet.json", this) {
- result: String? ->
- //Parse the data as a GeoJSON Feature Collection.
- val fc = FeatureCollection.fromJson(result!!)
-
- //Add the feature collection to the data source.
- source.add(fc)
-
- //Optionally, update the maps camera to focus in on the data.
+6. In the project panel of Android Studio, right-click on the **layout** folder under the path `app > res > layout` and go to `New > File`. Create a new file called **popup_text.xml**.
+7. Open the **popup_text.xml** file. If the file opens in a designer view, right-click on the screen and select "Go to XML". Copy and paste the following XML into this file. This XML creates a simple layout that can be used with a popup and contains a text view.
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
+ android:layout_width="match_parent"
+ android:orientation="vertical"
+ android:background="#ffffff"
+ android:layout_margin="8dp"
+ android:padding="10dp"
+
+ android:layout_height="match_parent">
+
+ <TextView
+ android:id="@+id/message"
+ android:layout_width="wrap_content"
+ android:text=""
+ android:textSize="18dp"
+ android:textColor="#222"
+ android:layout_height="wrap_content"
+ android:width="200dp"/>
+
+</RelativeLayout>
+```
++
+8. Go back into the **MainActivity.java** file and after the code for the bubble layer, add the following code to create a reusable popup.
+
+```java
+//Create a popup and add it to the map.
+Popup popup = new Popup();
+map.popups.add(popup);
+
+//Close it initially.
+popup.close();
+```
+++
+8. Go back into the **MainActivity.kt** file and after the code for the bubble layer, add the following code to create a reusable popup.
+
+```kotlin
+//Create a popup and add it to the map.
+val popup = Popup()
+map.popups.add(popup)
- //Calculate the bounding box of all the data in the Feature Collection.
- val bbox = MapMath.fromData(fc);
+//Close it initially.
+popup.close()
+```
++
+9. Add the following code to attach a click event to the bubble layer. When a bubble in the bubble layer is tapped, the event fires, retrieves some details from the properties of the selected feature, creates a view using the **popup_text.xml** layout file, passes it in as content to the popup, and then shows the popup at the feature's position.
+
- //Update the maps camera so it is focused on the data.
- map.setCamera(
- bounds(bbox),
+```java
+//Add a click event to the layer.
+map.events.add((OnFeatureClick)(feature) -> {
+    //Get the first feature and its properties.
+ Feature f = feature.get(0);
+ JsonObject props = f.properties();
- //Padding added to account for pixel size of rendered points.
- padding(20)
- )
- }
- ```
+ //Retrieve the custom layout for the popup.
+ View customView = LayoutInflater.from(this).inflate(R.layout.popup_text, null);
-6. Using the code to load the GeoJSON data a data source, we now need to specify how that data should be displayed on the map. There are several different rendering layers for point data; [Bubble layer](map-add-bubble-layer-android.md), [Symbol layer](how-to-add-symbol-to-android-map.md), and [Heat map layer](map-add-heat-map-layer-android.md) are the most commonly used layers. Add the following code to render the data in a bubble layer in the callback for the `mapControl.onReady` event after the code for importing the data.
+ //Display the name and entity type information of the feature into the text view of the popup layout.
+ TextView tv = customView.findViewById(R.id.message);
+    tv.setText(String.format("%s\n%s",
+        f.getStringProperty("Name"),
+        f.getStringProperty("EntityType")
+    ));
+
+ //Get the position of the clicked feature.
+ Position pos = MapMath.getPosition((Point)f.geometry());
+
+ //Set the options on the popup.
+ popup.setOptions(
+ //Set the popups position.
+ position(pos),
+
+ //Set the anchor point of the popup content.
+ anchor(AnchorType.BOTTOM),
+
+ //Set the content of the popup.
+ content(customView)
+ );
+
+ //Open the popup.
+ popup.open();
+
+ //Return a boolean indicating if event should be consumed or continue to bubble up.
+ return false;
+}, layer);
+```
++
- ```kotlin
- //Create a layer and add it to the map.
- val layer = new BubbleLayer(source)
- map.layers.add(layer)
- ```
+```kotlin
+//Add a click event to the layer.
+map.events.add(OnFeatureClick { feature: List<Feature> ->
+    //Get the first feature and its properties.
+ val f = feature[0]
+ val props = f.properties()
+
+ //Retrieve the custom layout for the popup.
+ val customView: View = LayoutInflater.from(this).inflate(R.layout.popup_text, null)
+
+ //Display the name and entity type information of the feature into the text view of the popup layout.
+ val tv = customView.findViewById<TextView>(R.id.message)
+ tv.text = String.format(
+ "%s\n%s",
+ f.getStringProperty("Name"),
+ f.getStringProperty("EntityType")
+ )
+
+ //Get the position of the clicked feature.
+ val pos = MapMath.getPosition(f.geometry() as Point?)
+
+ //Set the options on the popup.
+ popup.setOptions( //Set the popups position.
+ position(pos), //Set the anchor point of the popup content.
+ anchor(AnchorType.BOTTOM), //Set the content of the popup.
+ content(customView)
+ )
+
+ //Open the popup.
+ popup.open()
+
+ //Return a boolean indicating if event should be consumed or continue to bubble up.
+ false
+} as OnFeatureClick, layer)
+```
::: zone-end
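The code above inflates a **popup_text.xml** layout and looks up a `TextView` with the ID `message`. If you haven't created that layout yet, a minimal sketch might look like the following; only the `message` ID is required by the click handlers above, and the background and padding values are assumptions.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Minimal popup layout sketch; only the "message" TextView ID is required by the click handlers above. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:background="#ffffff"
    android:padding="10dp">

    <TextView
        android:id="@+id/message"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:textColor="#222222" />
</RelativeLayout>
```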
-7. Run the application. A map will be displayed focused over the USA, with circles overlaid for each location in the GeoJSON file.
+10. Run the application. A map will be displayed with bubbles overlaid for each location in the GeoJSON file. Tapping on any bubble will display a popup with the name and entity type of the feature touched.
- ![Map of the USA with data from a GeoJSON file displayed](media/tutorial-load-geojson-file-android/android-import-geojson.png)
+ ![Map of data from a GeoJSON file, with a popup that opens when a location is tapped](media/tutorial-load-geojson-file-android/android-import-geojson.gif)
## Clean up resources
-Take to following steps to clean up the resources from this tutorial:
+Take the following steps to clean up the resources from this tutorial:
1. Close Android Studio and delete the application you created.
2. If you tested the application on an external device, uninstall the application from that device.
azure-monitor Azure Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/azure-web-apps.md
Targeting the full framework from ASP.NET Core, self-contained deployment, and L
# [Node.js](#tab/nodejs)
-Windows agent-based monitoring is not supported, to enable with Linux visit the [Node.js App Service documentation](../../app-service/configure-language-nodejs.md?pivots=platform-linux#monitor-with-application-insights).
You can monitor your Node.js apps running in Azure App Service without any code change, with just a couple of simple steps. Application Insights for Node.js applications is integrated with App Service on Linux - both code-based and custom containers, and with App Service on Windows for code-based apps. The integration is in public preview. The integration adds the Node.js SDK, which is generally available.

1. **Select Application Insights** in the Azure control panel for your app service.
azure-percept How To Update Via Usb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/how-to-update-via-usb.md
Last updated 03/18/2021
-# How to update Azure Percept DK over a USB-C cable connection
+# Update the Azure Percept DK over a USB-C cable connection
-This guide will show you how to successfully update your dev kit's operating system and firmware over a USB connection. Here is an overview of what you will be doing during this procedure.
+This guide will show you how to successfully update your dev kit's operating system and firmware over a USB connection. Here's an overview of what you will be doing during this procedure.
1. Download the update package to a host computer
1. Run the command that transfers the update package to the dev kit
-1. Set the dev kit into "USB mode" (using SSH) so that it can be detected by the host computer and receive the update package
-1. Connect the dev kit to the host computer via the USB-C cable
+1. Set the dev kit into USB mode using SSH or DIP switches
+1. Connect the dev kit to the host computer via a USB-C cable
1. Wait for the update to complete

> [!WARNING]
This guide will show you how to successfully update your dev kit's operating sys
- An Azure Percept DK
- A Windows, Linux, or OS X based host computer with Wi-Fi capability and an available USB-C or USB-A port
- A USB-C to USB-A cable (optional, sold separately)
-- An SSH login, created during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
+- An SSH login account, created during the [Azure Percept DK setup experience](./quickstart-percept-dk-set-up.md)
+- A hex wrench, shipped with the dev kit, to remove the screws on the back of the dev kit (if using the DIP switch method)
## Download software tools and update files

1. [NXP UUU tool](https://github.com/NXPmicro/mfgtools/releases). Download the **Latest Release** uuu.exe file (for Windows) or the uuu file (for Linux) under the **Assets** tab. UUU is a tool created by NXP used to update NXP dev boards.
-1. [Download the update files](https://go.microsoft.com/fwlink/?linkid=2155734). They are all contained in a zip file that you will extract in the next section.
+1. [Download the update files](https://go.microsoft.com/fwlink/?linkid=2155734). They're all contained in a zip file that you'll extract in the next section.
1. Ensure all three build artifacts are present:
    - Azure-Percept-DK-*&lt;version number&gt;*.raw
This guide will show you how to successfully update your dev kit's operating sys
1. Extract the previously downloaded update files to the new folder that contains the UUU tool.
-## Update your device
-
-This procedure uses the dev kit's single USB-C port for updating. If your computer has a USB-C port, you can disconnect the Azure Percept Vision device and use that cable. If your computer only has a USB-A port, disconnect the Azure Percept Vision device from the dev kit's USB-C port and connect a USB-C to USB-A cable (sold separately) to the dev kit and host computer.
+## Run the command that transfers the update package to the dev kit
1. Open a Windows command prompt (Start > cmd) or a Linux terminal and **navigate to the folder where the update files and UUU tool are stored**.
This procedure uses the dev kit's single USB-C port for updating. If your compu
   ```bash
   sudo ./uuu -b emmc_full.txt fast-hab-fw.raw Azure-Percept-DK-<version number>.raw
   ```
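If you're running the UUU tool on Windows instead, the equivalent command should look like the following sketch; it assumes the same file names from the extracted update package and that uuu.exe is in the current folder.

   ```console
   REM Hypothetical Windows equivalent of the Linux command above.
   uuu -b emmc_full.txt fast-hab-fw.raw Azure-Percept-DK-<version number>.raw
   ```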
-1. The command prompt window will display a message that say "**Waiting for Known USB Device to Appear...**" The UUU tool is now waiting for the dev kit to be detected by the host computer. It is now ok to proceed to the next steps.
+1. The command prompt window will display a message that says **Waiting for Known USB Device to Appear...** The UUU tool is now waiting for the dev kit to be detected by the host computer. **Proceed to the next steps and put the dev kit into USB mode.**
+
+## Set the dev kit into USB mode
+There are two ways to set the dev kit into "USB mode": via SSH or by changing the DIP switches on the dev kit. Choose the method that works best for your situation.
+
+### Using SSH
+SSH is the safest and preferred method for setting the dev kit into USB mode. However, it requires that you can connect to the dev kit's Wi-Fi access point. If you're unable to connect to the dev kit's Wi-Fi access point, you'll need to use the DIP switch method.
1. Connect the supplied USB-C cable to the dev kit's USB-C port and to the host computer's USB-C port. If your computer only has a USB-A port, connect a USB-C to USB-A cable (sold separately) to the dev kit and host computer.
This procedure uses the dev kit's single USB-C port for updating. If your compu
sudo reboot -f ```
-1. Navigate back to the other command prompt or terminal. When the update is finished, you will see a message with ```Success 1 Failure 0```:
+### Using the DIP switch method
+Use the DIP switch method when you can't SSH into the device.
+
+1. Unplug the dev board if it's plugged into the power cable.
+1. Remove the four screws on the back of the dev board using the hex wrench that was shipped with the dev kit.
+
+    :::image type="content" source="media/how-to-usb-update/dip-switch-01.jpg" alt-text="Photo showing the four screws to remove on the back of the dev board.":::
+
+1. Gently slide the dev board in the direction of the LEDs. The heat sink will stay attached to the top of the dev board. Only slide the dev board 2 - 3 centimeters to avoid disconnecting any cables.
+
+    :::image type="content" source="media/how-to-usb-update/dip-switch-02.jpg" alt-text="Photo showing the dev board slid over a few centimeters.":::
+
+1. The DIP switches can be found on the corner of the board. There are four switches that each have two positions, up (1) or down (0). The default positions of the switches are up-down-down-up (1001). Using a paperclip or other fine-pointed instrument, change the positions of the switches to down-up-down-up (0101).
+
+    :::image type="content" source="media/how-to-usb-update/dip-switch-03.jpg" alt-text="Photo showing the DIP switches on the lower corner of the board.":::
+
+1. The dev kit is now in USB mode and you can continue with the next steps. **Once the update is completed, change the DIP switches back to the default position of up-down-down-up (1001).** Then slide the dev board back into position and reapply the four screws on the back.
+
+## Connect the dev kit to the host computer via a USB-C cable
+This procedure uses the dev kit's single USB-C port for updating. If your computer has a USB-C port, you can use the USB-C to USB-C cable that came with the dev kit. If your computer only has a USB-A port, you'll need to use a USB-C to USB-A cable (sold separately).
+
+1. Connect the dev kit to the host computer using the appropriate USB-C cable.
+1. The host computer should now detect the dev kit as a USB device. If you successfully ran the command that transfers the update package to the dev kit and your command prompt says **Waiting for Known USB Device to Appear...**, then the update should automatically start in about 10 seconds.
+
+## Wait for the update to complete
+
+1. Navigate back to the other command prompt or terminal. When the update is finished, you'll see a message with ```Success 1 Failure 0```:
   > [!NOTE]
   > After updating, your device will be reset to factory settings and you will lose your Wi-Fi connection and SSH login.
-1. Once the update is complete, power off the dev kit. Unplug the USB cable from the PC.
+1. Once the update is complete, power off the dev kit. Unplug the USB cable from the PC.
+1. If you used the DIP switch method to put the dev kit into USB mode, be sure to put the DIP switches back to the default positions. Then slide the dev board back into position and reapply the four screws on the back.
## Next steps
azure-percept Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-percept/known-issues.md
Last updated 03/25/2021
-# Known issues
+# Azure Percept known issues
-If you encounter any of these issues, it is not necessary to open a bug. If you have trouble with any of the workarounds, please open an issue.
+Here are the known issues with the Azure Percept DK, Azure Percept Audio, and Azure Percept Studio. Workarounds and troubleshooting steps are provided where possible. If you're blocked by any of these issues, you can post a question on [Microsoft Q&A](https://docs.microsoft.com/answers/topics/azure-percept.html) or submit a customer support request in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-|Area|Description of Issue|Workaround|
-|-|||
-| Onboarding experience | Can't complete the onboarding experience unless device's Wi-Fi is configured (Azure login fails). | 1. [SSH into your Azure Percept DK](./how-to-ssh-into-percept-dk.md). <br> 2. Identify and copy the device's ethernet IP address. <br> 3. Connect to the onboarding experience using the ethernet IP-based URL. |
-| Onboarding experience | Clicking on links in the EULA (license agreement) sometimes does not open a new webpage. | Copy the link and open it in a separate browser window. |
-| Onboarding experience | Cannot work through the onboarding experience when connected to a mobile Wi-Fi hotspot. | Connect your device directly to the SoftAP, a Wi-Fi network, or to a network over ethernet. |
-| Wi-Fi | The SoftAP can sometimes disconnect or disappear. | We are investigating. Rebooting the device will typically bring it back. |
-| Wi-Fi | The hardware button that toggles the Wi-Fi SoftAP on and off sometimes does not work. | Continue pressing the button or reboot the device. |
-| Wi-Fi | Users may see a message after connecting to Wi-Fi: <br> "This Wi-Fi network uses an older security standard." | The devkit's SoftAP uses the WEP encryption algorithm. |
-| Wi-Fi | Unable to connect to the SoftAP from Windows 10 PC with the following error message: <br> "Can't connect to this network" | Reboot both the devkit and the computer. |
-| Device update | Containers do not run after an OTA update. | SSH into the device and restart the IoT Edge container with this command: `systemctl restart iotedge`. This will restart all containers. |
-| Device update | Users may get a message that the update failed, even if it succeeded. | Confirm the update by navigating to the devkit's Device Twin in IoT Hub and checking the value of `swVersion`. This is fixed after the first update. |
-| Device update | Users may lose their Wi-Fi connection settings after their first update. | Run through the onboarding experience after updating to set up the Wi-Fi connection. This is fixed after the first update. |
-| Device update | After performing an OTA update, users can no longer log on via SSH using previously created user accounts, and new SSH users cannot be created through the onboarding experience. This issue affects systems performing OTA updates from the following pre-installed image versions: 2020.110.114.105 and 2020.109.101.105. | To recover your user profiles, perform these steps after the OTA update: <br> [SSH into your devkit](./how-to-ssh-into-percept-dk.md) using "root" as the username. If you disabled the SSH "root" user login via the onboarding experience, you must re-enable it. Run this command after successfully connecting: <br> ```mkdir -p /var/custom-configs/home; chmod 755 /var/custom-configs/home``` <br> To recover previous user home data, run the following command: <br> ```mkdir -p /tmp/prev-rootfs && mount /dev/mmcblk0p3 /tmp/prev-rootfs && [ ! -L /tmp/prev-rootfs/home ] && cp -a /tmp/prev-rootfs/home/* /var/custom-configs/home/. && echo "User home migrated!"; umount /tmp/prev-rootfs``` |
-| Device update | After taking an OTA update, update groups are lost. | Update the device's tag by following [these instructions](./how-to-update-over-the-air.md#create-a-device-update-group). |
-| Dev Tools Pack Installer | Optional Caffe install may fail if Docker is not running properly on system. | Make sure Docker is installed and running, then retry Caffe installation. |
-| Dev Tools Pack Installer | Optional CUDA install fails on incompatible systems. | Verify system compatibility with CUDA prior to running installer. |
-| Docker, Network, IoT Edge | If your internal network uses 172.x.x.x, Docker containers will fail to connect to IoT Edge. | Add a special bip section to the /etc/docker/daemon.json file like this: `{ "bip": "192.168.168.1/24"}` |
-|Azure Percept Studio | "View stream" links within Azure Percept Studio do not open a new window showing the device's web stream. | 1. Open the [Azure portal](https://portal.azure.com) and select **IoT Hub**. <br> 2. Click on the IoT Hub to which your device is connected. <br> 3. Select **IoT Edge** under **Automatic Device Management** on your IoT Hub page. <br> 4. Select your device from the list. <br> 5. Select **Set modules** at the top of your device page. <br> 6. Click the trashcan icon next to **HostIpModule** to delete the module. <br> 7. To confirm the action, click **Review + create** and then **Create**. <br> 8. Open [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) and click **Devices** on the left menu panel. <br> 9. Select your device from the list. <br> 10. On the **Vision** tab, click **View your device stream**. Your device will download a new version of HostIpModule and open a browser tab with your device's web stream. |
+|Area|Symptoms|Description of Issue|Workaround|
+|-||||
+| Azure Percept DK | Unable to deploy the sample and demo models in Azure Percept Studio | Sometimes the azureeyemodule or azureearspeechmodule modules stop running. The edgeAgent logs show a "too many levels of symbolic links" error. | Reset your device by [updating it over USB](./how-to-update-via-usb.md). |
+| Localization | Non-English speaking users may see parts of the Azure Percept DK setup experience display English text. | The Azure Percept DK setup experience isn't fully localized. | Fix is scheduled for July 2021 |
+| Azure Percept DK | When going through the setup experience on a Mac, the setup experience may abruptly close after connecting to Wi-Fi. | When going through the setup experience on a Mac, it initially opens in a window rather than a web browser. The window isn't persisted once the connection switches from the device's access point to Wi-Fi. | Open a web browser and go to https://10.1.1.1, which will allow you to complete the setup experience. |
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
description: Describes how to conditionally deploy a resource in Bicep.
Previously updated : 06/01/2021 Last updated : 06/29/2021

# Conditional deployment in Bicep
resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = if (newOrExisting =
When the parameter `newOrExisting` is set to **new**, the condition evaluates to true. The storage account is deployed. However, when `newOrExisting` is set to **existing**, the condition evaluates to false and the storage account isn't deployed.
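For illustration, a minimal sketch of this pattern is shown below; the parameter default and the storage account properties are assumptions, not taken from this article.

```bicep
param newOrExisting string = 'new'

// The storage account is deployed only when newOrExisting is set to 'new'.
resource sa 'Microsoft.Storage/storageAccounts@2019-06-01' = if (newOrExisting == 'new') {
  name: 'examplestorage'
  location: resourceGroup().location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```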
-For a complete example template that uses the `condition` element, see [VM with a new or existing Virtual Network, Storage, and Public IP](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-new-or-existing-conditions).
-
## Runtime functions

If you use a [reference](./bicep-functions-resource.md#reference) or [list](./bicep-functions-resource.md#list) function with a resource that is conditionally deployed, the function is evaluated even if the resource isn't deployed. You get an error if the function refers to a resource that doesn't exist.
azure-sql Transparent Data Encryption Byok Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-configure.md
Previously updated : 03/12/2019 Last updated : 06/23/2021

# PowerShell and the Azure CLI: Enable Transparent Data Encryption with customer-managed key from Azure Key Vault
[!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)]
-This article walks through how to use a key from Azure Key Vault for Transparent Data Encryption (TDE) on Azure SQL Database or Azure Synapse Analytics. To learn more about the TDE with Azure Key Vault integration - Bring Your Own Key (BYOK) Support, visit [TDE with customer-managed keys in Azure Key Vault](transparent-data-encryption-byok-overview.md).
+This article walks through how to use a key from Azure Key Vault for Transparent Data Encryption (TDE) on Azure SQL Database or Azure Synapse Analytics. To learn more about the TDE with Azure Key Vault integration - Bring Your Own Key (BYOK) Support, visit [TDE with customer-managed keys in Azure Key Vault](transparent-data-encryption-byok-overview.md).
> [!NOTE] > Azure SQL now supports using a RSA key stored in a Managed HSM as TDE Protector. This feature is in **public preview**. Azure Key Vault Managed HSM is a fully managed, highly available, single-tenant, standards-compliant cloud service that enables you to safeguard cryptographic keys for your cloud applications, using FIPS 140-2 Level 3 validated HSMs. Learn more about [Managed HSMs](../../key-vault/managed-hsm/index.yml).
+> [!NOTE]
+> This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse workspaces, see [Azure Synapse Analytics encryption](../../synapse-analytics/security/workspaces-encryption.md).
+
## Prerequisites for PowerShell

- You must have an Azure subscription and be an administrator on that subscription.
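For orientation, the core operation the article walks through can be sketched with two Az PowerShell calls; the resource names and key URI below are placeholders, not values from this article.

```azurepowershell
# Placeholder names and key URI; add the Key Vault key to the server, then set it as the TDE protector.
$keyId = "https://contosokeyvault.vault.azure.net/keys/tdekey/<key-version>"

Add-AzSqlServerKeyVaultKey -ResourceGroupName "ContosoRG" -ServerName "contososerver" -KeyId $keyId

Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "ContosoRG" `
    -ServerName "contososerver" -Type AzureKeyVault -KeyId $keyId
```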
azure-sql Transparent Data Encryption Byok Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-key-rotation.md
Previously updated : 03/12/2019 Last updated : 06/23/2021

# Rotate the Transparent Data Encryption (TDE) protector
[!INCLUDE[appliesto-sqldb-asa](../includes/appliesto-sqldb-asa.md)]
This guide discusses two options to rotate the TDE protector on the server.
> [!IMPORTANT]
> Do not delete previous versions of the key after a rollover. When keys are rolled over, some data is still encrypted with the previous keys, such as older database backups.
+> [!NOTE]
+> This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse workspaces, see [Azure Synapse Analytics encryption](../../synapse-analytics/security/workspaces-encryption.md).
+
## Prerequisites

- This how-to guide assumes that you are already using a key from Azure Key Vault as the TDE protector for Azure SQL Database or Azure Synapse Analytics. See [Transparent Data Encryption with BYOK Support](transparent-data-encryption-byok-overview.md).
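As a hedged sketch of one rotation option, you can create a new version of the key in Key Vault and point the TDE protector at it; the vault, key, and server names below are placeholders.

```azurepowershell
# Placeholder names; create a new key version, then point the TDE protector at it.
$newKey = Add-AzKeyVaultKey -VaultName "contosokeyvault" -Name "tdekey" -Destination "Software"

Set-AzSqlServerTransparentDataEncryptionProtector -ResourceGroupName "ContosoRG" `
    -ServerName "contososerver" -Type AzureKeyVault -KeyId $newKey.Id
```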
azure-sql Transparent Data Encryption Byok Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-overview.md
Previously updated : 02/01/2021 Last updated : 06/23/2021

# Azure SQL Transparent Data Encryption with customer-managed key
[!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)]
In this scenario, the key used for encryption of the Database Encryption Key (DE
For Azure SQL Database and Azure Synapse Analytics, the TDE protector is set at the server level and is inherited by all encrypted databases associated with that server. For Azure SQL Managed Instance, the TDE protector is set at the instance level and is inherited by all encrypted databases on that instance. The term *server* refers both to a server in SQL Database and Azure Synapse and to a managed instance in SQL Managed Instance throughout this document, unless stated differently.
+> [!NOTE]
+> This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse workspaces, see [Azure Synapse Analytics encryption](../../synapse-analytics/security/workspaces-encryption.md).
> [!IMPORTANT]
> For those using service-managed TDE who would like to start using customer-managed TDE, data remains encrypted during the process of switching over, and there is no downtime nor re-encryption of the database files. Switching from a service-managed key to a customer-managed key only requires re-encryption of the DEK, which is a fast and online operation.

> [!NOTE]
-> <a id="doubleencryption"></a> To provide Azure SQL customers with two layers of encryption of data at rest, infrastructure encryption (using AES-256 encryption algorithm) with platform managed keys is being rolled out. This provides an addition layer of encryption at rest along with TDE with customer-managed keys, which is already available. For Azure SQL Database and Managed Instance, all databases, including the master database and other system databases, will be encrypted when infrastructure encryption is turned on.
-At this time, customers must request access to this capability. If you are interested in this capability, contact AzureSQLDoubleEncryptionAtRest@service.microsoft.com .
+> <a id="doubleencryption"></a> To provide Azure SQL customers with two layers of encryption of data at rest, infrastructure encryption (using AES-256 encryption algorithm) with platform managed keys is being rolled out. This provides an additional layer of encryption at rest along with TDE with customer-managed keys, which is already available. For Azure SQL Database and Managed Instance, all databases, including the master database and other system databases, will be encrypted when infrastructure encryption is turned on. At this time, customers must request access to this capability. If you are interested in this capability, contact AzureSQLDoubleEncryptionAtRest@service.microsoft.com.
+
## Benefits of the customer-managed TDE
azure-sql Transparent Data Encryption Byok Remove Tde Protector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-remove-tde-protector.md
Previously updated : 02/24/2020 Last updated : 06/23/2021

# Remove a Transparent Data Encryption (TDE) protector using PowerShell
[!INCLUDE[appliesto-sqldb-asa](../includes/appliesto-sqldb-asa.md)]
If a key is ever suspected to be compromised, such that a service or user had un
Keep in mind that once the TDE protector is deleted in Key Vault, in up to 10 minutes, all encrypted databases will start denying all connections with the corresponding error message and change its state to [Inaccessible](./transparent-data-encryption-byok-overview.md#inaccessible-tde-protector).
-This how-to guide goes over two approaches depending on the desired result after a compromised incident response:
+This how-to guide goes over the approach to render databases **inaccessible** after a compromised incident response.
-- To make the databases in Azure SQL Database / Azure Synapse Analytics **inaccessible**.
-- To make the databases in Azure SQL Database / Azure Azure Synapse Analytics **inaccessible**.
+> [!NOTE]
+> This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse workspaces, see [Azure Synapse Analytics encryption](../../synapse-analytics/security/workspaces-encryption.md).
## Prerequisites
azure-sql Transparent Data Encryption Tde Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-tde-overview.md
Previously updated : 10/12/2020 Last updated : 06/23/2021

# Transparent data encryption for SQL Database, SQL Managed Instance, and Azure Synapse Analytics
[!INCLUDE[appliesto-sqldb-sqlmi-asa](../includes/appliesto-sqldb-sqlmi-asa.md)]

[Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics against the threat of malicious offline activity by encrypting data at rest. It performs real-time encryption and decryption of the database, associated backups, and transaction log files at rest without requiring changes to the application. By default, TDE is enabled for all newly deployed SQL Databases and must be manually enabled for older databases of Azure SQL Database and Azure SQL Managed Instance. TDE must be manually enabled for Azure Synapse Analytics.
+> [!NOTE]
+> This article applies to Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics (dedicated SQL pools (formerly SQL DW)). For documentation on Transparent Data Encryption for dedicated SQL pools inside Synapse workspaces, see [Azure Synapse Analytics encryption](../../synapse-analytics/security/workspaces-encryption.md).
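As a concrete illustration of enabling TDE manually on a database, the standard T-SQL statement is shown below; the database name is a placeholder.

```sql
-- Enable transparent data encryption on a user database (placeholder name).
ALTER DATABASE [ContosoDB] SET ENCRYPTION ON;
```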
+ TDE performs real-time I/O encryption and decryption of the data at the page level. Each page is decrypted when it's read into memory and then encrypted before being written to disk. TDE encrypts the storage of an entire database by using a symmetric key called the Database Encryption Key (DEK). On database startup, the encrypted DEK is decrypted and then used for decryption and re-encryption of the database files in the SQL Server database engine process. DEK is protected by the TDE protector. TDE protector is either a service-managed certificate (service-managed transparent data encryption) or an asymmetric key stored in [Azure Key Vault](../../key-vault/general/security-features.md) (customer-managed transparent data encryption). For Azure SQL Database and Azure Synapse, the TDE protector is set at the [server](logical-servers.md) level and is inherited by all databases associated with that server. For Azure SQL Managed Instance, the TDE protector is set at the instance level and it is inherited by all encrypted databases on that instance. The term *server* refers both to server and instance throughout this document, unless stated differently.
For Azure SQL Database and Azure Synapse, the TDE protector is set at the [serve
> [!NOTE]
> TDE cannot be used to encrypt system databases, such as the **master** database, in Azure SQL Database and Azure SQL Managed Instance. The **master** database contains objects that are needed to perform the TDE operations on the user databases. It is recommended to not store any sensitive data in the system databases. [Infrastructure encryption](transparent-data-encryption-byok-overview.md#doubleencryption) is now being rolled out which encrypts the system databases including master.
+
## Service-managed transparent data encryption

In Azure, the default setting for TDE is that the DEK is protected by a built-in server certificate. The built-in server certificate is unique for each server and the encryption algorithm used is AES 256. If a database is in a geo-replication relationship, both the primary and geo-secondary databases are protected by the primary database's parent server key. If two databases are connected to the same server, they also share the same built-in certificate. Microsoft automatically rotates these certificates in compliance with the internal security policy and the root key is protected by a Microsoft internal secret store. Customers can verify SQL Database and SQL Managed Instance compliance with internal security policies in independent third-party audit reports available on the [Microsoft Trust Center](https://servicetrust.microsoft.com/).
azure-video-analyzer Develop Deploy Grpc Inference Srv https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/develop-deploy-grpc-inference-srv.md
-
Title: Develop and deploy a gRPC inference server - Azure Video Analyzer
description: This article provides guidance on how to develop and deploy a gRPC inference server to be used with Azure Video Analyzer.
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/azure-security-integration.md
The diagram shows the integrated monitoring architecture of integrated security
2. Under Resources, select **Servers** and then **+Add**.
- :::image type="content" source="media/azure-security-integration/add-server-to-azure-arc.png" alt-text="A screenshot showing Azure Arc Servers page for adding an Azure VMware Solution VM to Azure.":::
+ :::image type="content" source="media/azure-security-integration/add-server-to-azure-arc.png" alt-text="Screenshot showing Azure Arc Servers page for adding an Azure VMware Solution VM to Azure.":::
3. Select **Generate script**.
- :::image type="content" source="media/azure-security-integration/add-server-using-script.png" alt-text="A screenshot of Azure Arc page showing option for adding a server using interactive script.":::
+ :::image type="content" source="media/azure-security-integration/add-server-using-script.png" alt-text="Screenshot of Azure Arc page showing option for adding a server using interactive script.":::
4. On the **Prerequisites** tab, select **Next**.
This provides you with the security health details of your resource.
2. For Resource type, select **Servers - Azure Arc**.
- :::image type="content" source="media/azure-security-integration/select-resource-in-security-center.png" alt-text="A screenshot of the Azure Security Center Inventory page showing Servers - Azure Arc selected under Resource type.":::
+    :::image type="content" source="media/azure-security-integration/select-resource-in-security-center.png" alt-text="Screenshot showing the Azure Security Center Inventory page with Servers - Azure Arc selected under Resource type.":::
3. Select the name of your resource. A page opens showing the security health details of your resource. 4. Under **Recommendation list**, select the **Recommendations**, **Passed assessments**, and **Unavailable assessments** tabs to view these details.
- :::image type="content" source="media/azure-security-integration/view-recommendations-assessments.png" alt-text="A screenshot of Azure Security Center showing security recommendations and assessments.":::
+ :::image type="content" source="media/azure-security-integration/view-recommendations-assessments.png" alt-text="Screenshot showing the Azure Security Center security recommendations and assessments.":::
## Deploy an Azure Sentinel workspace
Azure Sentinel is built on top of a Log Analytics workspace, so you'll just need
:::image type="content" source="media/azure-security-integration/select-events-you-want-to-stream.png" alt-text="Screenshot of Security Events page in Azure Sentinel where you can select which events to stream.":::

## Connect Azure Sentinel with Azure Security Center

1. On the Azure Sentinel workspace page, select the configured workspace.
Azure Sentinel is built on top of a Log Analytics workspace, so you'll just need
3. Select **Azure Security Center** from the list and then select **Open connector page**.
- :::image type="content" source="media/azure-security-integration/connect-security-center-with-azure-sentinel.png" alt-text="Screenshot of Data connectors page in Azure Sentinel showing selection to connect Azure Security Center with Azure Sentinel.":::
+ :::image type="content" source="media/azure-security-integration/connect-security-center-with-azure-sentinel.png" alt-text="Screenshot of Data connectors page in Azure Sentinel showing selection to connect Azure Security Center with Azure Sentinel.":::
4. Select **Connect** to connect the Azure Security Center with Azure Sentinel.
You can create queries or use the available pre-defined query in Azure Sentinel
1. On the Azure Sentinel overview page, under Threat management, select **Hunting**. A list of pre-defined queries is displayed. >[!TIP]
- >You can also create a new query by selecting **+New Query**.
+ >You can also create a new query by selecting **New Query**.
> >:::image type="content" source="media/azure-security-integration/create-new-query.png" alt-text="Screenshot of Azure Sentinel Hunting page with + New Query highlighted.":::
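For illustration, a simple hunting query might look like the following KQL sketch; the table, event ID, and threshold are assumptions for a generic failed sign-in hunt, not one of the pre-defined queries.

```kusto
// Hypothetical hunt: accounts with many failed Windows sign-ins in the last day.
SecurityEvent
| where TimeGenerated > ago(1d)
| where EventID == 4625
| summarize FailedLogons = count() by Account
| where FailedLogons > 10
| order by FailedLogons desc
```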
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity
description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution.
Previously updated : 05/13/2021 Last updated : 06/28/2021

# Azure VMware Solution networking and interconnectivity concepts
The diagram below shows the basic network interconnectivity established at the t
- Inbound access of workloads running in the private cloud.

## On-premises interconnectivity
The diagram below shows the on-premises to private cloud interconnectivity, whic
- Hot/Cold vCenter vMotion between on-premises and Azure VMware Solution.
- On-premises to Azure VMware Solution private cloud management access.

For full interconnectivity to your private cloud, you need to enable ExpressRoute Global Reach and then request an authorization key and private peering ID for Global Reach in the Azure portal. The authorization key and peering ID are used to establish Global Reach between an ExpressRoute circuit in your subscription and the ExpressRoute circuit for your private cloud. Once linked, the two ExpressRoute circuits route network traffic between your on-premises environments and your private cloud. For more information on the procedures, see the [tutorial for creating an ExpressRoute Global Reach peering to a private cloud](tutorial-expressroute-global-reach-private-cloud.md).
azure-vmware Configure Alerts For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-alerts-for-azure-vmware-solution.md
The following metrics are visible through Azure Monitor Metrics.
- Set up the Action Group
- Define the Alert rule details
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/create-alert-rule-details.png" alt-text="Screenshot that shows the Create alert rule window." lightbox="media/configure-alerts-for-azure-vmware-solution/create-alert-rule-details.png":::
+ :::image type="content" source="../devtest-labs/media/activity-logs/create-alert-rule-done.png" alt-text="Screenshot showing the Create alert rule window." lightbox="../devtest-labs/media/activity-logs/create-alert-rule-done.png":::
1. Under **Scope**, select the target resource you want to monitor. By default, the Azure VMware Solution private cloud from where you opened the Alerts menu has been defined.
The following metrics are visible through Azure Monitor Metrics.
In our example, we've selected **Percentage Datastore Disk Used**, which is relevant from an [Azure VMware Solution SLA](https://aka.ms/avs/sla) perspective.
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/configure-signal-logic-options.png" alt-text="Screenshot that shows the Configure signal logic window with predefined signal names.":::
+ :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/configure-signal-logic-options.png" alt-text="Screenshot showing the Configure signal logic window with signals to create for the alert rule.":::
1. Define the logic that will trigger the alert and then select **Done**. In our example, only the **Threshold** and **Frequency of evaluation** have been adjusted.
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/define-alert-logic-threshold.png" alt-text="Screenshot that shows the information for the selected signal logic.":::
+ :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/define-alert-logic-threshold.png" alt-text="Screenshot showing the threshold, operator, aggregation type and granularity, threshold value, and frequency of evaluation for the signal alert logic.":::
1. Under **Actions**, select **Add action groups**. The action group defines *how* the notification is received and *who* receives it. You can receive notifications by email, SMS, [Azure Mobile App Push Notification](https://azure.microsoft.com/features/azure-portal/mobile-app/) or voice message.
-
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/create-action-group.png" alt-text="Screenshot that shows the existing action groups and where to create a new action group.":::
-
-1. In the window that opens, select **Create action group**.
-
- >[!TIP]
- > You can also use an existing action group.
-
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/select-action-group-for-alert-rule.png" alt-text="Screenshot that shows the action groups to select for the alert.":::
-
-
+1. Select an existing action group or select **Create action group** to create a new one.
1. In the window that opens, on the **Basics** tab, give the action group a name and a display name.
The following metrics are visible through Azure Monitor Metrics.
Our example is based on email notification.
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/create-action-group-notification-settings.png" alt-text="Screenshot that shows the email, SMS message, push, and voice settings for the alert." lightbox="media/configure-alerts-for-azure-vmware-solution/create-action-group-notification-settings.png":::
+ :::image type="content" source="../azure-monitor/alerts/media/action-groups/action-group-2-notifications.png" alt-text="Screenshot showing email, SMS message, push, and voice settings for the alert." lightbox="../azure-monitor/alerts/media/action-groups/action-group-2-notifications.png":::
1. (Optional) Configure the **Actions** if you want to take proactive actions and receive notification on the event. Select an available **Action type** and then select **Review + create**.
- - Automation Runbooks
- - Azure Functions ΓÇô for custom event-driven serverless code execution
- - ITSM ΓÇô to integrate with a service provider like ServiceNow to create a ticket
- - Logic App - for more complex workflow orchestration
- - Webhooks - to trigger a process in another service
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/create-action-group-action-type.png" alt-text="Screenshot that shows the Create action group window with a focus on the Action type drop-down." lightbox="media/configure-alerts-for-azure-vmware-solution/create-action-group-action-type.png":::
+ - **Automation Runbooks** - to automate tasks based on alerts
+
+ - **Azure Functions** ΓÇô for custom event-driven serverless code execution
+
+ - **ITSM** ΓÇô to integrate with a service provider like ServiceNow to create a ticket
+
+ - **Logic App** - for more complex workflow orchestration
+
+ - **Webhooks** - to trigger a process in another service
+ 1. Under the **Alert rule details**, provide a name, a description, the resource group to store the alert rule, and the severity. Then select **Create alert rule**.
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/alert-rule-details.png" alt-text="Screenshot that shows the details for the alert rule.":::
-
The alert rule is visible and can be managed from the Azure portal.
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/existing-alert-rule.png" alt-text="Screenshot that shows the new alert rule in the Rules window." lightbox="media/configure-alerts-for-azure-vmware-solution/existing-alert-rule.png":::
+ :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/existing-alert-rule.png" alt-text="Screenshot showing the new alert rule in the Rules window." lightbox="media/configure-alerts-for-azure-vmware-solution/existing-alert-rule.png":::
As soon as a metric reaches the threshold as defined in an alert rule, the **Alerts** menu is updated and made visible.
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/threshold-alert.png" alt-text="Screenshot that shows the alert after reaching the threshold defined." lightbox="media/configure-alerts-for-azure-vmware-solution/threshold-alert.png":::
+ :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/threshold-alert.png" alt-text="Screenshot showing the alert after reaching the threshold defined in the alert rule." lightbox="media/configure-alerts-for-azure-vmware-solution/threshold-alert.png":::
Depending on the configured Action Group, you'll receive a notification through the configured medium. In our example, we've configured email.
The following metrics are visible through Azure Monitor Metrics.
1. From your Azure VMware Solution private cloud, select **Monitoring** > **Metrics**. Then select the metric you want from the drop-down.
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/monitoring-metrics.png" alt-text="Screenshot that shows the Metrics window and a focus on the Metric drop-down." lightbox="media/configure-alerts-for-azure-vmware-solution/monitoring-metrics.png":::
+ :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/monitoring-metrics.png" alt-text="Screenshot showing the Metrics window and a focus on the Metric drop-down." lightbox="media/configure-alerts-for-azure-vmware-solution/monitoring-metrics.png":::
1. You can change the diagram's parameters, such as the **Time range** or the **Time granularity**.
The following metrics are visible through Azure Monitor Metrics.
- **Drill into Logs** and query the data in the related Log Analytics workspace
- **Pin this diagram** to an Azure Dashboard for convenience.
- :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/monitoring-metrics-time-range-granularity.png" alt-text="Screenshot that shows the time range and time granularity options for metric." lightbox="media/configure-alerts-for-azure-vmware-solution/monitoring-metrics-time-range-granularity.png":::
+ :::image type="content" source="media/configure-alerts-for-azure-vmware-solution/monitoring-metrics-time-range-granularity.png" alt-text="Screenshot showing the time range and time granularity options for metric." lightbox="media/configure-alerts-for-azure-vmware-solution/monitoring-metrics-time-range-granularity.png":::
## Next steps
Now that you've configured an alert rule for your Azure VMware Solution private
- [Azure Monitor Alerts](../azure-monitor/alerts/alerts-overview.md) - [Azure Action Groups](../azure-monitor/alerts/action-groups.md)
-You can also continue with one of the other [Azure VMware Solution](index.yml) how-to guides.
+You can also continue with one of the other [Azure VMware Solution](index.yml) how-to guides.
azure-vmware Configure Nsx Network Components Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-nsx-network-components-azure-portal.md
You can create and configure an NSX-T segment from the Azure VMware Solution con
2. Provide the details for the new logical segment and select **OK**.
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/add-new-nsxt-segment.png" alt-text="Screenshot showing how to add a new segment.":::
+ :::image type="content" source="media/configure-nsx-network-components-azure-portal/add-new-nsxt-segment.png" alt-text="Screenshot showing how to add a new NSX-T segment in the Azure portal.":::
- **Segment name** - Name of the logical switch that is visible in vCenter.
To set up port mirroring in the Azure VMware Solution console, you'll:
1. Repeat these steps to create the destination VM group.

   >[!NOTE]
- >Before creating a port mirroring profile, make sure you have both the source and destination VM groups created.
+ >Before creating a port mirroring profile, make sure that you've created both the source and destination VM groups.
-1. Select **Port mirroring** > **Add** and then provide:
+1. Select **Port mirroring** > **Port mirroring** > **Add** and then provide:
:::image type="content" source="media/configure-nsx-network-components-azure-portal/add-port-mirroring-profile.png" alt-text="Screenshot showing the information required for the port mirroring profile.":::
When a DNS query is received, a DNS forwarder compares the domain name with the
1. In your Azure VMware Solution private cloud, under **Workload Networking**, select **DNS** > **DNS zones** > **Add**.
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-dns-zones.png" alt-text="Screenshot showing how to add DNS zones and a DNS service.":::
+ :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-dns-zones.png" alt-text="Screenshot showing how to add DNS zones to an Azure VMware Solution private cloud.":::
-1. Select **Default DNS zone** and provide:
+1. Select **Default DNS zone** and provide a name and up to three DNS server IP addresses in the format of **8.8.8.8**.
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-dns-zones.png" alt-text="Screenshot showing how to add a default DNS zone.":::
+ :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-dns-zones.png" alt-text="Screenshot showing the required information needed to add a default DNS zone.":::
- 1. A name for the DNS zone.
+1. Select **FQDN zone** and provide a name, the FQDN domain, and up to three DNS server IP addresses in the format of **8.8.8.8**.
- 1. Up to three DNS server IP addresses in the format of **8.8.8.8**.
-
-1. Select **FQDN zone** and provide:
-
- :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-fqdn-zone.png" alt-text="Screenshot showing how to add an FQDN zone. ":::
-
- 1. A name for the DNS zone.
-
- 1. The FQDN domain.
-
- 1. Up to three DNS server IP addresses in the format of **8.8.8.8**.
+    :::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-fqdn-zone.png" alt-text="Screenshot showing the required information needed to add an FQDN zone.":::
1. Select **OK** to finish adding the default DNS zone and DNS service.
When a DNS query is received, a DNS forwarder compares the domain name with the
The DNS service was added successfully.

:::image type="content" source="media/configure-nsx-network-components-azure-portal/nsxt-workload-networking-configure-dns-service-success.png" alt-text="Screenshot showing the DNS service added successfully.":::
azure-vmware Configure Site To Site Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-site-to-site-vpn-gateway.md
Title: Configure a site-to-site VPN in vWAN for Azure VMware Solution
description: Learn how to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel into Azure VMware Solution.
Previously updated : 06/11/2021 Last updated : 06/30/2021

# Configure a site-to-site VPN in vWAN for Azure VMware Solution
-In this article, we'll go through the steps to establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel terminating in the Microsoft Azure Virtual WAN hub. The hub contains the Azure VMware Solution ExpressRoute gateway and the site-to-site VPN gateway. It connects an on-premise VPN device with an Azure VMware Solution endpoint.
+In this article, you'll establish a VPN (IPsec IKEv1 and IKEv2) site-to-site tunnel terminating in the Microsoft Azure Virtual WAN hub. The hub contains the Azure VMware Solution ExpressRoute gateway and the site-to-site VPN gateway. It connects an on-premises VPN device with an Azure VMware Solution endpoint.
:::image type="content" source="media/create-ipsec-tunnel/vpn-s2s-tunnel-architecture.png" alt-text="Diagram showing VPN site-to-site tunnel architecture." border="false":::
-In this how to, you'll:
-
-- Create an Azure Virtual WAN hub and a VPN gateway with a public IP address attached to it.
-- Create an Azure ExpressRoute gateway and establish an Azure VMware Solution endpoint.
-- Enable a policy-based VPN on-premises setup.

## Prerequisites

You must have a public-facing IP address terminating on an on-premises VPN device.
-## Step 1. Create an Azure Virtual WAN
+## Create an Azure Virtual WAN
[!INCLUDE [Create a virtual WAN](../../includes/virtual-wan-create-vwan-include.md)]
-## Step 2. Create a Virtual WAN hub and gateway
+## Create a virtual hub
+
+A virtual hub is a virtual network that is created and used by Virtual WAN. It's the core of your Virtual WAN network in a region. It can contain gateways for site-to-site VPN and ExpressRoute connectivity.
>[!TIP]
>You can also [create a gateway in an existing hub](../virtual-wan/virtual-wan-expressroute-portal.md#existinghub).
-1. Select the Virtual WAN you created in the previous step.
-1. Select **Create virtual hub**, enter the required fields, and then select **Next: Site to site**.
- Enter the subnet using a `/24` (minimum).
+## Create a VPN gateway
- :::image type="content" source="media/create-ipsec-tunnel/create-virtual-hub.png" alt-text="Screenshot showing the Create virtual hub page.":::
-4. Select the **Site-to-site** tab, define the site-to-site gateway by setting the aggregate throughput from the **Gateway scale units** drop-down.
- >[!TIP]
- >The scale units are in pairs for redundancy, each supporting 500 Mbps (one scale unit = 500 Mbps).
-
- :::image type="content" source="../../includes/media/virtual-wan-tutorial-hub-include/site-to-site.png" alt-text="Screenshot showing the Site-to-site details.":::
-
-5. Select the **ExpressRoute** tab, create an ExpressRoute gateway.
-
- :::image type="content" source="../../includes/media/virtual-wan-tutorial-er-hub-include/hub2.png" alt-text="Screenshot of the ExpressRoute settings.":::
-
- >[!TIP]
- >A scale unit value is 2 Gbps.
-
- It takes approximately 30 minutes to create each hub.
-
-## Step 3. Create a site-to-site VPN
+## Create a site-to-site VPN
1. In the Azure portal, select the virtual WAN you created earlier.
You must have a public-facing IP address terminating on an on-premises VPN devic
3. On the **Basics** tab, enter the required fields.
- :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-basics2.png" alt-text="Screenshot of the Basics tab for the new VPN site." lightbox="media/create-ipsec-tunnel/create-vpn-site-basics2.png":::
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-site-include/site-basics.png" alt-text="Screenshot shows Create VPN site page with the Basics tab open." lightbox="../../includes/media/virtual-wan-tutorial-site-include/site-basics.png":::
+
+ * **Region** - Previously referred to as location. It's the location you want to create this site resource in.
+
+ * **Name** - The name by which you want to refer to your on-premises site.
+
+ * **Device vendor** - The name of the VPN device vendor (for example: Citrix, Cisco, Barracuda). Adding the device vendor can help the Azure Team better understand your environment in order to add more optimization possibilities in the future, or to help you troubleshoot.
- 1. Select the **Region** from the list.
+ * **Private address space** - The CIDR IP address space that is located on your on-premises site. Traffic destined for this address space is routed to your local site. The CIDR block is only required if [BGP](../vpn-gateway/bgp-howto.md) isn't enabled for the site.
+
+ >[!NOTE]
+ >If you edit the address space after creating the site (for example, adding an additional address space), it can take 8-10 minutes to update the effective routes while the components are recreated.
- 1. Provide a **Name** for the site-to-site VPN.
- 1. Provide the **Device vendor** of the on-premises VPN device, for example, Cisco.
-
- 1. Provide the **Private address space**. Use the on-premises CIDR block to route all traffic bound for on-premises across the tunnel. The CIDR block is only required if you don't [configure Border Gateway Protocol (BGP) on Azure VPN Gateways](../vpn-gateway/bgp-howto.md)
+1. Select **Links** to add information about the physical links at the branch. If you have a Virtual WAN partner CPE device, check with them to see if this information is exchanged with Azure as a part of the branch information upload set up from their systems.
-1. Select **Next : Links** and complete the required fields. Specifying link and provider names allow you to distinguish between any number of gateways that may eventually be created as part of the hub. [BGP](../vpn-gateway/vpn-gateway-bgp-overview.md) and autonomous system number (ASN) must be unique inside your organization. BGP ensures that both Azure VMware Solution and the on-premises servers advertise their routes across the tunnel. If disabled, the subnets that need to be advertised must be manually maintained. If subnets are missed, HCX fails to form the service mesh.
+ Specifying link and provider names allows you to distinguish between any number of gateways that may eventually be created as part of the hub. [BGP](../vpn-gateway/vpn-gateway-bgp-overview.md) and autonomous system number (ASN) must be unique inside your organization. BGP ensures that both Azure VMware Solution and the on-premises servers advertise their routes across the tunnel. If disabled, the subnets that need to be advertised must be manually maintained. If subnets are missed, HCX fails to form the service mesh.
   >[!IMPORTANT]
   >By default, Azure assigns a private IP address from the GatewaySubnet prefix range automatically as the Azure BGP IP address on the Azure VPN gateway. The custom Azure APIPA BGP address is needed when your on-premises VPN devices use an APIPA address (169.254.0.1 to 169.254.255.254) as the BGP IP. Azure VPN Gateway will choose the custom APIPA address if the corresponding local network gateway resource (on-premises network) has an APIPA address as the BGP peer IP. If the local network gateway uses a regular IP address (not APIPA), Azure VPN Gateway will revert to the private IP address from the GatewaySubnet range.
- :::image type="content" source="media/create-ipsec-tunnel/create-vpn-site-links.png" alt-text="Screenshot that shows link details." lightbox="media/create-ipsec-tunnel/create-vpn-site-links.png":::
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-site-include/site-links.png" alt-text="Screenshot showing the Create VPN site page with the Links tab open." lightbox="../../includes/media/virtual-wan-tutorial-site-include/site-links.png":::
1. Select **Review + create**. 1. Navigate to the virtual hub that you want, and deselect **Hub association** to connect your VPN site to the hub.
- :::image type="content" source="../../includes/media/virtual-wan-tutorial-site-include/connect.png" alt-text="Screenshot that shows the Connected Sites pane for Virtual HUB ready for Pre-shared key and associated settings.":::
+ :::image type="content" source="../../includes/media/virtual-wan-tutorial-site-include/connect.png" alt-text="Screenshot shows Connect to this hub." lightbox="../../includes/media/virtual-wan-tutorial-site-include/connect.png":::
-## Step 4. (Optional) Create policy-based VPN site-to-site tunnels
+## (Optional) Create policy-based VPN site-to-site tunnels
>[!IMPORTANT] >This is an optional step and applies only to policy-based VPNs.
-Policy-based VPN setups require on-premise and Azure VMware Solution networks to be specified, including the hub ranges. These hub ranges specify the encryption domain of the policy-based VPN tunnel on-premise endpoint. The Azure VMware Solution side only requires the policy-based traffic selector indicator to be enabled.
+[Policy-based VPN setups](../virtual-wan/virtual-wan-custom-ipsec-portal.md) require on-premises and Azure VMware Solution networks to be specified, including the hub ranges. These ranges specify the encryption domain of the policy-based VPN tunnel's on-premises endpoint. The Azure VMware Solution side only requires the policy-based traffic selector indicator to be enabled.
-1. In the Azure portal, go to your Virtual WAN hub site. Under **Connectivity**, select **VPN (Site to site)**.
+1. In the Azure portal, go to your Virtual WAN hub site and, under **Connectivity**, select **VPN (Site to site)**.
-2. Select your VPN site name, the ellipsis (...) at the far right, and then **edit VPN connection to this hub**.
-
- :::image type="content" source="media/create-ipsec-tunnel/edit-vpn-section-to-this-hub.png" alt-text="Screenshot of the page in Azure for the Virtual WAN hub site showing an ellipsis selected to access Edit VPN connection to this hub." lightbox="media/create-ipsec-tunnel/edit-vpn-section-to-this-hub.png":::
+2. Select the VPN Site for which you want to set up a custom IPsec policy.
+
+ :::image type="content" source="../virtual-wan/media/virtual-wan-custom-ipsec-portal/locate.png" alt-text="Screenshot showing the existing VPN sites to set up custom IPsec policies." lightbox="../virtual-wan/media/virtual-wan-custom-ipsec-portal/locate.png":::
-3. Edit the connection between the VPN site and the hub, and then select **Save**.
+3. Select your VPN site name, select **More** (...) at the far right, and then select **Edit VPN Connection**.
+
+ :::image type="content" source="../virtual-wan/media/virtual-wan-custom-ipsec-portal/contextmenu.png" alt-text="Screenshot showing the context menu for an existing VPN site." lightbox="../virtual-wan/media/virtual-wan-custom-ipsec-portal/contextmenu.png":::
- Internet Protocol Security (IPSec), select **Custom**. - Use policy-based traffic selector, select **Enable** - Specify the details for **IKE Phase 1** and **IKE Phase 2(ipsec)**.
-
- :::image type="content" source="media/create-ipsec-tunnel/edit-vpn-connection.png" alt-text="Screenshot of Edit VPN connection page.":::
-
+
+4. Change the IPsec setting from default to custom and customize the IPsec policy. Select **Save** to save your settings.
+
+ :::image type="content" source="../virtual-wan/media/virtual-wan-custom-ipsec-portal/edit.png" alt-text="Screenshot showing the existing VPN sites." lightbox="../virtual-wan/media/virtual-wan-custom-ipsec-portal/edit.png":::
+ Your traffic selectors or subnets that are part of the policy-based encryption domain should be: - Virtual WAN hub `/24`
- Connected Azure virtual network (if present)
-## Step 5. Connect your VPN site to the hub
+## Connect your VPN site to the hub
1. Select your VPN site name and then select **Connect VPN sites**.
:::image type="content" source="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/status.png" alt-text="Screenshot that shows a site-to-site connection and connectivity status." lightbox="../../includes/media/virtual-wan-tutorial-connect-vpn-site-include/status.png":::
-1. [Download the VPN configuration file](../virtual-wan/virtual-wan-site-to-site-portal.md#device) for the on-premises endpoint.
+ **Connection Status:** This is the status of the Azure resource for the connection that connects the VPN site to the Azure hub's VPN gateway. Once this control plane operation is successful, Azure VPN gateway and the on-premises VPN device will proceed to establish connectivity.
-3. Patch the Azure VMware Solution ExpressRoute in the Virtual WAN hub.
+ **Connectivity Status:** This is the actual connectivity (data path) status between Azure's VPN gateway in the hub and VPN site. It can show any of the following states:
+
+ * **Unknown**: This state is typically seen if the backend systems are working to transition to another status.
+ * **Connecting**: Azure VPN gateway is trying to reach out to the actual on-premises VPN site.
+ * **Connected**: Connectivity is established between Azure VPN gateway and on-premises VPN site.
+ * **Disconnected**: This status is seen if, for any reason (on-premises or in Azure), the connection was disconnected.
+
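Because **Unknown** and **Connecting** are transitional, a script that waits for the tunnel to settle can poll until a terminal state appears. A minimal sketch; `get_connectivity_status()` is a hypothetical placeholder for however you query the status (portal, CLI, or SDK):

```python
import time

TERMINAL_STATES = {"Connected", "Disconnected"}

def get_connectivity_status() -> str:
    """Hypothetical placeholder: query the connection and return one of the
    documented states: 'Unknown', 'Connecting', 'Connected', or 'Disconnected'."""
    raise NotImplementedError

def wait_for_tunnel(poll_seconds: int = 30, max_attempts: int = 20) -> str:
    for _ in range(max_attempts):
        status = get_connectivity_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)  # 'Unknown' and 'Connecting' are transitional; keep polling
    return "Unknown"
```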
+1. Download the VPN configuration file and apply it to the on-premises endpoint.
+
+ 1. On the VPN (Site to site) page, near the top, select **Download VPN Config**. Azure creates a storage account in the resource group 'microsoft-network-[location]', where location is the location of the WAN. After you have applied the configuration to your VPN devices, you can delete this storage account.
+
+ 1. Once the configuration file is created, select the link to download it.
+
+ 1. Apply the configuration to your on-premises VPN device.
+
+ For more information about the configuration file, see [About the VPN device configuration file](../virtual-wan/virtual-wan-site-to-site-portal.md#about-the-vpn-device-configuration-file).
+
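Once the portal generates the configuration file link, fetching it can also be scripted. A minimal sketch; the URL is a hypothetical placeholder for the link generated on the VPN (Site to site) page:

```python
import requests

# Hypothetical placeholder: paste the download link generated on the
# VPN (Site to site) page, including its SAS token.
config_url = "https://example.blob.core.windows.net/vpnconfig/vpn-config.json?<sas-token>"

response = requests.get(config_url, timeout=30)
response.raise_for_status()
with open("vpn-config.json", "wb") as f:
    f.write(response.content)
```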
+1. Patch the Azure VMware Solution ExpressRoute in the Virtual WAN hub.
>[!IMPORTANT] >You must first have a private cloud created before you can patch the platform. [!INCLUDE [request-authorization-key](includes/request-authorization-key.md)]
-4. Link Azure VMware Solution and the VPN gateway together in the Virtual WAN hub. You'll use the authorization key and ExpressRoute ID (peer circuit URI) from the previous step.
+1. Link Azure VMware Solution and the VPN gateway together in the Virtual WAN hub. You'll use the authorization key and ExpressRoute ID (peer circuit URI) from the previous step.
1. Select your ExpressRoute gateway and then select **Redeem authorization key**.
1. Select **Add** to establish the link.
-5. Test your connection by [creating an NSX-T segment](./tutorial-nsx-t-network-segment.md) and provisioning a VM on the network. Ping both the on-premise and Azure VMware Solution endpoints.
+1. Test your connection by [creating an NSX-T segment](./tutorial-nsx-t-network-segment.md) and provisioning a VM on the network. Ping both the on-premises and Azure VMware Solution endpoints.
+
+ >[!NOTE]
+ >Wait approximately 5 minutes before you test connectivity from a client behind your ExpressRoute circuit, for example, a VM in the VNet that you created earlier.
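For the connectivity test, a scripted ping of both endpoints might look like this sketch; both addresses are hypothetical, so substitute a VM on your NSX-T segment and an on-premises host:

```python
import subprocess

# Hypothetical endpoints on either side of the tunnel.
endpoints = {
    "on-premises host": "192.168.1.10",
    "Azure VMware Solution VM": "10.10.0.4",
}

for name, ip in endpoints.items():
    # '-c 4' sends four echo requests (Linux/macOS; use '-n 4' on Windows).
    result = subprocess.run(["ping", "-c", "4", ip], capture_output=True, text=True)
    status = "reachable" if result.returncode == 0 else "unreachable"
    print(f"{name} ({ip}): {status}")
```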
azure-vmware Deploy Vm Content Library https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/deploy-vm-content-library.md
Title: Create a content library to deploy VMs in Azure VMware Solution description: Create a content library to deploy a VM in an Azure VMware Solution private cloud. Previously updated : 02/03/2021 Last updated : 06/28/2021 # Create a content library to deploy VMs in Azure VMware Solution A content library stores and manages content in the form of library items. A single library item consists of one or more files you use to deploy virtual machines (VMs).
-In this article, we'll walk through the procedure for creating a content library. Then we'll walk through deploying a VM using an ISO image from the content library.
+In this article, you'll create a content library in the vSphere Client and then deploy a VM using an ISO image from the content library.
## Prerequisites
-An NSX-T segment (logical switch) and a managed DHCP service are required to complete this tutorial. For more information, see the [Configure DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md) article.
+An NSX-T segment and a managed DHCP service are required to complete this tutorial. For more information, see [Configure DHCP for Azure VMware Solution](configure-dhcp-azure-vmware-solution.md).
## Create a content library
-1. From the on-premises vSphere Client, select **Menu > Content Libraries**.
+1. From the on-premises vSphere Client, select **Menu** > **Content Libraries**.
- ![Select Menu -> Content Libraries](./media/content-library/vsphere-menu-content-libraries.png)
+ :::image type="content" source="media/content-library/vsphere-menu-content-libraries.png" alt-text="Screenshot showing the Content Libraries menu option in the vSphere Client.":::
-1. Select the **Add** button to create a new content library.
+1. Select **Add** to create a new content library.
- ![Select the Add button to create a new content library.](./media/content-library/create-new-content-library.png)
+ :::image type="content" source="media/content-library/create-new-content-library.png" alt-text="Screenshot showing how to create a new content library in vSphere.":::
-1. Specify a name and confirm the IP address of the vCenter server and select **Next**.
+1. Provide a name, confirm the IP address of the vCenter server, and then select **Next**.
- ![Specify a name and notes of your choosing, and then select Next.](./media/content-library/new-content-library-step1.png)
+ :::image type="content" source="media/content-library/new-content-library-step-1.png" alt-text="Screenshot showing the name and vCenter Server IP for the new content library.":::
1. Select the **Local content library** and select **Next**.
- ![For this example, we are going to create a local content library, select Next.](./media/content-library/new-content-library-step2.png)
+ :::image type="content" source="media/content-library/new-content-library-step-2.png" alt-text="Screenshot showing the Local content library option selected for the new content library.":::
1. Select the datastore that will store your content library, and then select **Next**.
- ![Select the datastore you would like to host your content library, select next.](./media/content-library/new-content-library-step3.png)
+ :::image type="content" source="media/content-library/new-content-library-step-3.png" alt-text="Screenshot showing the vsanDatastore storage location selected.":::
-1. Review and verify the content library settings, and then select **Finish**.
+1. Review the content library settings, and select **Finish**.
- ![Verify your Settings, select Finish.](./media/content-library/new-content-library-step4.png)
+ :::image type="content" source="media/content-library/new-content-library-step-4.png" alt-text="Screenshot showing the settings for the new content library.":::
## Upload an ISO image to the content library Now that the content library has been created, you can add an ISO image to deploy a VM to a private cloud cluster.
-1. From the vSphere Client, select **Menu > Content Libraries**.
+1. From the vSphere Client, select **Menu** > **Content Libraries**.
1. Right-click the content library you want to use for the new ISO and select **Import Item**.
## Deploy a VM to a private cloud cluster
-1. From the vSphere Client, select **Menu > Hosts and Clusters**.
+1. From the vSphere Client, select **Menu** > **Hosts and Clusters**.
1. In the left panel, expand the tree and select a cluster.
-1. Select **Actions > New Virtual Machine**.
+1. Select **Actions** > **New Virtual Machine**.
1. Go through the wizard and modify the settings you want.
-1. Select **New CD/DVD Drive > Client Device > Content Library ISO File**.
+1. Select **New CD/DVD Drive** > **Client Device** > **Content Library ISO File**.
1. Select the ISO uploaded in the previous section and then select **OK**. 1. Select the **Connect** check box so the ISO is mounted at power-on time.
-1. Select **New Network > Select dropdown > Browse**.
+1. Select **New Network** > **Select dropdown** > **Browse**.
1. Select the **logical switch (segment)** and select **OK**.
## Next steps
-Now that you've covered creating a content library to deploy VMs in Azure VMware Solution, you may want to learn about:
+Now that you've created a content library to deploy VMs in Azure VMware Solution, you may want to learn about:
-- [How to migrate VM workloads to your private cloud](tutorial-deploy-vmware-hcx.md)
+- [Migrating VM workloads to your private cloud](tutorial-deploy-vmware-hcx.md)
- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md) <!-- LINKS - external-->
certification Program Requirements Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/certification/program-requirements-edge-secured-core.md
Edge Secured-core is an incremental certification in the Azure Certified Device
|Applies To|Any device| |OS|Agnostic| |Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure storage encryption is enabled and default algorithm is XTS-AES, with key length 128 bits or higher.|
+|Validation|Device to be validated through toolset to ensure storage encryption is enabled and default algorithm is XTS-AES, with key length 128 bits or higher. </br></br>Note: Preview release June 2021 only verifies that the device has DM-Crypt installed and has an encrypted partition.|
|Resources||
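On a Linux device, a quick local pre-check for a dm-crypt encrypted partition might look like the following sketch; the certification toolset performs the authoritative validation:

```python
import subprocess

# List block devices with their type; dm-crypt mappings report TYPE 'crypt'.
out = subprocess.run(["lsblk", "-o", "NAME,TYPE"], capture_output=True, text=True, check=True)
rows = [line.split() for line in out.stdout.splitlines()[1:] if line.split()]
has_crypt = any(row[-1] == "crypt" for row in rows)
print("dm-crypt partition found" if has_crypt else "no dm-crypt partition detected")
```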
|Applies To|Any device| |OS|Agnostic| |Validation Type|Manual/Tools|
-|Validation|Partner confirmation that they were able to send an update to the device through Microsoft update, Azure Device update, or other approved services.|
+|Validation|Partner confirmation that they were able to send an update to the device through Microsoft Update or [Device Update for IoT Hub (ADU)](../iot-hub-device-update/understand-device-update.md). For Linux devices using Device Update for IoT Hub, certification will require providing a .swu update file during the Secured Core test process and device-specific information for the Certification Service to generate an [update manifest](../iot-hub-device-update/update-manifest.md) file.|
|Resources|[Device Update for IoT Hub](../iot-hub-device-update/index.yml)|
Validation|Device to be validated through toolset to ensure the device supports
|Applies To|Any device| |OS|Agnostic| |Validation Type|Manual/Tools|
-|Validation|Device to be validated through toolset to ensure that firmware and kernel signatures are validated every time the device boots. <ul><li>UEFI: Secure boot is enabled</li><li>Uboot: Verified boot is enabled</li></ul>|
+|Validation|Device to be validated through toolset to ensure that firmware and kernel signatures are validated every time the device boots. <ul><li>UEFI: Secure boot is enabled</li><li>Uboot: Verified boot is enabled</li></ul> </br> </br>Note: Preview release June 2021 only verifies UEFI existence.|
|Resources||
cloud-services-extended-support In Place Migration Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-common-errors.md
description: Overview of common errors when migrating from Cloud Services (class
Last updated 2/08/2021
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-overview.md
description: Overview of migration from Cloud Services (classic) to Cloud Servic
Last updated 2/08/2021
cloud-services-extended-support In Place Migration Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-portal.md
description: How to migrate to Cloud Services (extended support) using the Azure
Last updated 2/08/2021
cloud-services-extended-support In Place Migration Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-powershell.md
Title: Migrate to Azure Cloud Services (extended support) using PowerShell description: How to migrate from Azure Cloud Services (classic) to Azure Cloud Services (extended support) using PowerShell ms.reviewer: mimckitt Last updated 02/06/2020
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/in-place-migration-technical-details.md
Title: Technical details and requirements for migrating to Azure Cloud Services (extended support) description: Provides technical details and requirements for migrating from Azure Cloud Services (classic) to Azure Cloud Services (extended support) ms.reviewer: mimckitt Last updated 02/06/2020
cloud-services Applications Dont Support Tls 1 2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/applications-dont-support-tls-1-2.md
Title: Troubleshooting issues caused by applications that don't support TLS 1.2 | Microsoft Docs description: Troubleshooting issues caused by applications that don't support TLS 1.2 tags: top-support-issue Last updated 03/16/2020 # Troubleshooting applications that don't support TLS 1.2
cloud-services Automation Manage Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/automation-manage-cloud-services.md
description: Learn about how the Azure Automation service can be used to manage
Last updated 10/14/2020
cloud-services Cloud Services Allocation Failures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-allocation-failures.md
description: Troubleshoot an allocation failure when you deploy Azure Cloud Serv
Last updated 10/14/2020
cloud-services Cloud Services Application And Service Availability Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-application-and-service-availability-faq.md
description: This article lists the frequently asked questions about application
Last updated 10/14/2020
cloud-services Cloud Services Certs Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-certs-create.md
description: Learn about how to create and deploy certificates for cloud service
Last updated 10/14/2020
cloud-services Cloud Services Choose Me https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-choose-me.md
description: Learn about what Azure Cloud Services is, specifically that it's de
Last updated 10/14/2020
cloud-services Cloud Services Configuration And Management Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-configuration-and-management-faq.md
Last updated 10/14/2020
cloud-services Cloud Services Configure Ssl Certificate Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-configure-ssl-certificate-portal.md
description: Learn how to specify an HTTPS endpoint for a web role and how to up
Last updated 10/14/2020
cloud-services Cloud Services Connect To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-connect-to-custom-domain.md
description: Learn how to connect your web/worker roles to a custom AD Domain us
Last updated 10/14/2020
cloud-services Cloud Services Connectivity And Networking Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-connectivity-and-networking-faq.md
description: This article lists the frequently asked questions about connectivit
Last updated 10/14/2020
cloud-services Cloud Services Custom Domain Name Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-custom-domain-name-portal.md
description: Learn how to expose your Azure application or data to the internet
Last updated 10/14/2020
cloud-services Cloud Services Deployment Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-deployment-faq.md
description: This article lists the frequently asked questions about deployment
Last updated 10/14/2020
cloud-services Cloud Services Diagnostics Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-diagnostics-powershell.md
description: Learn how to use PowerShell to enable collecting diagnostic data fr
Last updated 10/14/2020
cloud-services Cloud Services Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-disaster-recovery-guidance.md
description: Learn what to do in the event of an Azure service disruption that i
Last updated 10/14/2020
cloud-services Cloud Services Dotnet Diagnostics Trace Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md
description: Add tracing messages to an Azure application to help debugging, mea
Last updated 10/14/2020
cloud-services Cloud Services Dotnet Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-diagnostics.md
description: Using Azure diagnostics to gather data from Azure cloud Services fo
Last updated 10/14/2020
cloud-services Cloud Services Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-get-started.md
description: Learn how to create a multi-tier app using ASP.NET MVC and Azure. T
Last updated 10/14/2020
cloud-services Cloud Services Dotnet Install Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-dotnet-install-dotnet.md
description: This article describes how to manually install the .NET Framework o
Last updated 10/14/2020
cloud-services Cloud Services Enable Communication Role Instances https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-enable-communication-role-instances.md
description: Role instances in Cloud Services can have endpoints (http, https, t
Last updated 10/14/2020
cloud-services Cloud Services How To Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-configure-portal.md
Last updated 10/14/2020
cloud-services Cloud Services How To Create Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-create-deploy-portal.md
description: Learn how to use the Quick Create method to create a cloud service
Last updated 10/14/2020
cloud-services Cloud Services How To Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-manage-portal.md
description: Learn how to manage Cloud Services in the Azure portal. These examp
Last updated 10/14/2020
cloud-services Cloud Services How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-monitor.md
description: Describes what monitoring an Azure Cloud Service involves and what
Last updated 10/14/2020
cloud-services Cloud Services How To Scale Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-scale-portal.md
Last updated 10/14/2020
cloud-services Cloud Services How To Scale Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-scale-powershell.md
Last updated 10/14/2020
cloud-services Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-model-and-package.md
Last updated 10/14/2020
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
description: Use this tutorial to learn how to host a socket.IO-based chat appli
Last updated 10/14/2020
cloud-services Cloud Services Nodejs Develop Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-nodejs-develop-deploy-app.md
description: Learn how to create a simple Node.js web application and deploy it
Last updated 10/14/2020
cloud-services Cloud Services Nodejs Develop Deploy Express App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-nodejs-develop-deploy-express-app.md
description: Use this tutorial to create a new application using the Express mod
Last updated 10/14/2020
cloud-services Cloud Services Performance Testing Visual Studio Profiler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-performance-testing-visual-studio-profiler.md
description: Investigate performance issues in cloud services with the Visual St
Last updated 10/14/2020
cloud-services Cloud Services Powershell Create Cloud Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-powershell-create-cloud-container.md
description: This article explains how to create a cloud service container with
Last updated 10/14/2020
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
description: Learn how to programmatically perform common service management tas
Last updated 10/14/2020
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-python-ptvs.md
description: Overview of using Python Tools for Visual Studio to create Azure cl
Last updated 10/14/2020
cloud-services Cloud Services Role Config Xpath https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-config-xpath.md
Last updated 10/14/2020
cloud-services Cloud Services Role Enable Remote Desktop New Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-enable-remote-desktop-new-portal.md
description: How to configure your azure cloud service application to allow remo
Last updated 10/14/2020
cloud-services Cloud Services Role Enable Remote Desktop Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-enable-remote-desktop-powershell.md
description: How to configure your azure cloud service application using PowerSh
Last updated 10/14/2020
cloud-services Cloud Services Role Enable Remote Desktop Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-enable-remote-desktop-visual-studio.md
description: How to configure your Azure cloud service application to allow remo
Last updated 10/14/2020
cloud-services Cloud Services Role Lifecycle Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-role-lifecycle-dotnet.md
description: Learn how to use the lifecycle methods of a Cloud Service role in .
Last updated 10/14/2020
cloud-services Cloud Services Sizes Specs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-sizes-specs.md
Last updated 10/14/2020
cloud-services Cloud Services Startup Tasks Common https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-startup-tasks-common.md
description: Provides some examples of common startup tasks you may want to perf
Last updated 10/14/2020
cloud-services Cloud Services Startup Tasks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-startup-tasks.md
description: Startup tasks help prepare your cloud service environment for your
Last updated 10/14/2020
cloud-services Cloud Services Troubleshoot Common Issues Which Cause Roles Recycle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-common-issues-which-cause-roles-recycle.md
description: A cloud service role that suddenly recycles can cause significant d
Last updated 10/14/2020
cloud-services Cloud Services Troubleshoot Constrained Allocation Failed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-constrained-allocation-failed.md
Title: Troubleshoot ConstrainedAllocationFailed when deploying a Cloud service (classic) to Azure | Microsoft Docs description: This article shows how to resolve a ConstrainedAllocationFailed exception when deploying a Cloud service (classic) to Azure. Last updated 02/22/2021
cloud-services Cloud Services Troubleshoot Default Temp Folder Size Too Small Web Worker Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-default-temp-folder-size-too-small-web-worker-role.md
description: A cloud service role has a limited amount of space for the TEMP fol
Last updated 10/14/2020
cloud-services Cloud Services Troubleshoot Deployment Problems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-deployment-problems.md
description: There are a few common problems you may run into when deploying a c
Last updated 10/14/2020
cloud-services Cloud Services Troubleshoot Fabric Internal Server Error https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-fabric-internal-server-error.md
Title: Troubleshoot FabricInternalServerError or ServiceAllocationFailure when deploying a Cloud service (classic) to Azure | Microsoft Docs description: This article shows how to resolve a FabricInternalServerError or ServiceAllocationFailure exception when deploying a Cloud service (classic) to Azure. Last updated 02/22/2021
cloud-services Cloud Services Troubleshoot Location Not Found For Role Size https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-location-not-found-for-role-size.md
Title: Troubleshoot LocationNotFoundForRoleSize when deploying a Cloud service (classic) to Azure | Microsoft Docs description: This article shows how to resolve a LocationNotFoundForRoleSize exception when deploying a Cloud service (classic) to Azure. Last updated 02/22/2021
cloud-services Cloud Services Troubleshoot Overconstrained Allocation Request https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-overconstrained-allocation-request.md
Title: Troubleshoot OverconstrainedAllocationRequest when deploying a Cloud serv
description: This article shows how to resolve an OverconstrainedAllocationRequest exception when deploying a Cloud service (classic) to Azure. documentationcenter: '' Last updated 02/22/2021
cloud-services Cloud Services Troubleshoot Roles That Fail Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-troubleshoot-roles-that-fail-start.md
description: Here are some common reasons why a Cloud Service role may fail to s
Last updated 10/14/2020
cloud-services Cloud Services Update Azure Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-update-azure-service.md
description: Learn how to update cloud services in Azure. Learn how an update on
Last updated 10/14/2020
cloud-services Cloud Services Workflow Process https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-workflow-process.md
description: This article provides overview of the workflow processes when you d
Last updated 10/14/2020
cloud-services Diagnostics Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/diagnostics-performance-counters.md
description: Learn how to discover, use, and create performance counters in Clou
Last updated 10/14/2020
cloud-services Resource Health For Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/resource-health-for-cloud-services.md
Last updated 10/14/2020
cloud-services Schema Cscfg File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-file.md
Last updated 10/14/2020
cloud-services Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-networkconfiguration.md
Last updated 10/14/2020 author: tagore
cloud-services Schema Cscfg Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-role.md
Last updated 10/14/2020
cloud-services Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-file.md
Last updated 10/14/2020
cloud-services Schema Csdef Loadbalancerprobe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-loadbalancerprobe.md
Last updated 10/14/2020
cloud-services Schema Csdef Networktrafficrules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-networktrafficrules.md
Last updated 10/14/2020
cloud-services Schema Csdef Webrole https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-webrole.md
Last updated 10/14/2020
cloud-services Schema Csdef Workerrole https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-workerrole.md
Last updated 10/14/2020
cognitive-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Anomaly-Detector/concepts/troubleshoot.md
keywords: anomaly detection, machine learning, algorithms
# Troubleshooting the multivariate API
-This article provides guidance on how to troubleshoot and remediate common HTTP error messages when using the multivariate API.
+This article provides guidance on how to troubleshoot and remediate common error messages when using the multivariate API.
### Multivariate error codes
-| Method | HTTP error code | Error message | Action to take |
-|-|--|--||
-| Train a Multivariate Anomaly Detection Model | 400 | `The 'source' field is required in the request.` | The key word "source" has not been specified correctly. The format should be `"{\"source\": \|" <SAS URL>\"}"` |
-| Train a Multivariate Anomaly Detection Model | 400 | `The source field must be a valid sas blob url` | The source field must be a valid blob container sas url. |
-| Train a Multivariate Anomaly Detection Model | 400 | `The 'startTime' field is required in the request.` | Add startTime in the request |
-| Train a Multivariate Anomaly Detection Model | 400 | `The 'endTime' field is required in the request.` | Add endTime in the request. |
-| Train a Multivariate Anomaly Detection Model | 400 | `Invalid Timestamp format.` | The timestamp in the csv file zipped in the source url is in invalid format or "startTime", "endTime" is in invalid format. |
-| Train a Multivariate Anomaly Detection Model | 400 | `The displayName length exceeds maximum allowed length 24.` | DisplayName is an optional parameter to be used for users to distinguish different models. A valid displayName must be smaller than 24 characters. |
-| Train a Multivariate Anomaly Detection Model | 400 | `The 'slidingWindow' field must be an integer between 28 and 2880.` | Sliding window must be in a valid range. |
-| Train a Multivariate Anomaly Detection Model | 401 | `Unable to download blobs on the Azure Blob storage account.` | The URL does not have the right permissions. The list flag is not set. The customer should re-create the SAS URL and make sure the read and list flags is checked (for example using Storage Explorer) |
-| Train a Multivariate Anomaly Detection Model | 413 | `Unable to process the dataset. Number of variables exceed the limit (300).` | The data in the blob container exceeds the limit of currently 300 variables. The customer has to point to reduce the variable size. |
-| Train a Multivariate Anomaly Detection Model | 413 | `Valid Timestamps in the dataset exceeds the limit (1 million points), please change startTime or endTime parameters.` | The max number of points can be used for training 1 million. Customers can reduce variable size or change startTime or endTime |
-| Train a Multivariate Anomaly Detection Model | 413 | `Unable to process dataset. Size of dataset exceeds size limit (2GB).` | The data in the blob container exceeds the limit of currently 4 MB. The customer has to point to a blob with smaller data. |
-| Detect Multivariate Anomaly | 404 | `The model does not exist.` | The model ID is invalid. Customers need to train a model before using it. |
-| Detect Multivariate Anomaly | 400 | `The model is not ready yet.` | The model is not ready yet. Customers need to call Get Multivariate Model api to check model status. |
-| Detect Multivariate Anomaly | 400 | `The 'source' field is required in the request.` | The key word "source" has not been specified correctly. The format should be `"{\"source\": \|" <SAS URL>\"}"` |
-| Detect Multivariate Anomaly | 400 | `The source field must be a valid sas blob url` | The source field must be a valid blob container sas url. |
-| Detect Multivariate Anomaly | 400 | `The 'startTime' field is required in the request.` | Add startTime in the request |
-| Detect Multivariate Anomaly | 400 | `The 'endTime' field is required in the request.` | Add endTime in the request. |
-| Detect Multivariate Anomaly | 400 | `Invalid Timestamp format.` | The timestamp in the csv file zipped in the source url is in an invalid format or "startTime", "endTime" is in invalid format. |
-| Detect Multivariate Anomaly | 400 | `The corresponding file of the variable does not exist.` | One variable has been used in train, but it cannot be found when the customer uses the corresponding model to do detection. Customers need to add this variable and then submit the detection request. |
-| Detect Multivariate Anomaly | 413 | `Unable to process the dataset. Number of variables exceed the limit (300).` | The data in the blob container exceeds the limit of currently 300 variables. The customer has to point to reduce the variable size. |
-| Detect Multivariate Anomaly | 413 | `The limit timestamps of one detection request is 2880, please change startTime or endTime parameters.` | The max timestamps to be detected in one detection request is 2880, customers need to change the startTime or endTime and then submit detection request. |
-| Detect Multivariate Anomaly | 413 | `Unable to process dataset. Size of dataset exceeds size limit (2GB).` | The data in the blob container exceeds the limit of currently 4 MB. The customer has to point to a blob with smaller data. |
-| Get Multivariate Model | 404 | `Model with 'id=<input model ID>' not found.` | The ID is not a valid model ID. Use GET models to find all valid model Ids. |
-| Get Multivariate Model | 404 | `Model with id=<input model ID>' not found.` | The ID is not a valid model ID. Use GET models to find all valid model Ids. |
-| Get Multivariate Anomaly Detection Result | 404 | `Result with 'id=<input result ID>' not found.` | The ID is not a valid result ID. Resubmit your detection request. |
-| Delete Multivariate Model | 404 | `Location for model with 'id=<input model ID>' not found.` | The ID is not a valid model ID. Use GET models to find all valid model Ids. |
+#### Common Errors
+
+| Error Code | HTTP Error Code | Error Message | Comment |
+| -- | | - | |
+| `SubscriptionNotInHeaders` | 400 | apim-subscription-id is not found in headers | Please add your APIM subscription ID in the header. Example header: `{"apim-subscription-id": <Your Subscription ID>}` |
+| `FileNotExist` | 400 | File <source> does not exist. | Please check the validity of your blob shared access signature (SAS). Make sure that it has not expired. |
+| `InvalidBlobURL` | 400 | | Your blob shared access signature (SAS) is not a valid SAS. |
+| `StorageWriteError` | 403 | | This error is possibly caused by permission issues. Our service is not allowed to write the data to the blob encrypted by a Customer Managed Key (CMK). Either remove CMK or grant access to our service again. Please refer to [this page](/azure/cognitive-services/encryption/cognitive-services-encryption-keys-portal) for more details. |
+| `StorageReadError` | 403 | | Same as `StorageWriteError`. |
+| `UnexpectedError` | 500 | | Please contact us with detailed error information. You could take the support options from [this document](/azure/cognitive-services/cognitive-services-support-options?context=/azure/cognitive-services/anomaly-detector/context/context) or email us at [AnomalyDetector@microsoft.com](mailto:AnomalyDetector@microsoft.com) |
+
+#### Train a Multivariate Anomaly Detection Model
+
+| Error Code | HTTP Error Code | Error Message | Comment |
+| | | | |
+| `TooManyModels` | 400 | This subscription has reached the maximum number of models. | Each APIM subscription ID is allowed to have 300 active models. Please delete unused models before training a new model |
+| `TooManyRunningModels` | 400 | This subscription has reached the maximum number of running models. | Each APIM subscription ID is allowed to train 5 models concurrently. Please train a new model after previous models have completed their training process. |
+| `InvalidJsonFormat` | 400 | Invalid json format. | Training request is not a valid JSON. |
+| `InvalidAlignMode` | 400 | The `'alignMode'` field must be one of the following: `'Inner'` or `'Outer'` . | Please check the value of `'alignMode'` which should be either `'Inner'` or `'Outer'` (case sensitive). |
+| `InvalidFillNAMethod` | 400 | The `'fillNAMethod'` field must be one of the following: `'Previous'`, `'Subsequent'`, `'Linear'`, `'Zero'`, `'Fixed'`, `'NotFill'` and it cannot be `'NotFill'` when `'alignMode'` is `'Outer'`. | Please check the value of `'fillNAMethod'`. You may refer to [this section](/azure/cognitive-services/anomaly-detector/concepts/best-practices-multivariate#fill-not-available-na) for more details. |
+| `RequiredPaddingValue` | 400 | The `'paddingValue'` field is required in the request when `'fillNAMethod'` is `'Fixed'`. | You need to provide a valid padding value when `'fillNAMethod'` is `'Fixed'`. You may refer to [this section](/azure/cognitive-services/anomaly-detector/concepts/best-practices-multivariate#fill-not-available-na) for more details. |
+| `RequiredSource` | 400 | The `'source'` field is required in the request. | Your training request has not specified a value for the `'source'` field. Example: `{"source": <Your Blob SAS>}`. |
+| `RequiredStartTime` | 400 | The `'startTime'` field is required in the request. | Your training request has not specified a value for the `'startTime'` field. Example: `{"startTime": "2021-01-01T00:00:00Z"}`. |
+| `InvalidTimestampFormat` | 400 | Invalid Timestamp format. `<timestamp>` is not a valid format. | The format of timestamp in the request body is not correct. You may try `import pandas as pd; pd.to_datetime(timestamp)` to verify. |
+| `RequiredEndTime` | 400 | The `'endTime'` field is required in the request. | Your training request has not specified a value for the `'endTime'` field. Example: `{"endTime": "2021-01-01T00:00:00Z"}`. |
+| `InvalidSlidingWindow` | 400 | The `'slidingWindow'` field must be an integer between 28 and 2880. | `'slidingWindow'` must be an integer between 28 and 2880 (inclusive). |
+
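To make these field requirements concrete, here is a minimal sketch of a training request in Python. The endpoint URL and API path are assumptions for illustration; the header and body fields follow the tables above:

```python
import requests

# Assumed endpoint and API path for illustration; substitute your resource's
# region and the current multivariate API route.
endpoint = "https://westus2.api.cognitive.microsoft.com/anomalydetector/v1.1-preview/multivariate/models"

headers = {
    "apim-subscription-id": "<Your Subscription ID>",  # per the Common Errors table
    "Content-Type": "application/json",
}

body = {
    "source": "<Your Blob SAS>",          # RequiredSource: blob container SAS URL
    "startTime": "2021-01-01T00:00:00Z",  # RequiredStartTime
    "endTime": "2021-01-02T00:00:00Z",    # RequiredEndTime
    "slidingWindow": 200,                 # InvalidSlidingWindow: integer in [28, 2880]
    "alignMode": "Outer",                 # InvalidAlignMode: 'Inner' or 'Outer' (case sensitive)
    "fillNAMethod": "Linear",             # InvalidFillNAMethod: not 'NotFill' when alignMode is 'Outer'
}

response = requests.post(endpoint, headers=headers, json=body)
print(response.status_code, response.text)
```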
+#### Get Multivariate Model with Model ID
+
+| Error Code | HTTP Error Code | Error Message | Comment |
+| | | - | |
+| `ModelNotExist` | 404 | The model does not exist. | The model with corresponding model ID does not exist. Please check the model ID in the request URL. |
+
+#### Anomaly Detection with a Trained Model
+
+| Error Code | HTTP Error Code | Error Message | Comment |
+| -- | | | |
+| `ModelNotExist` | 404 | The model does not exist. | The model used for inference does not exist. Please check the model ID in the request URL. |
+| `ModelFailed` | 400 | Model failed to be trained. | The model is not successfully trained. Please get detailed information by getting the model with model ID. |
+| `ModelNotReady` | 400 | The model is not ready yet. | The model is not ready yet. Please wait for a while until the training process completes. |
+| `InvalidFileSize` | 413 | File <file> exceeds the file size limit (<size limit> bytes). | The size of inference data exceeds the upper limit (2GB currently). Please use less data for inference. |
+
+#### Get Detection Results
+
+| Error Code | HTTP Error Code | Error Message | Comment |
+| - | | -- | |
+| `ResultNotExist` | 404 | The result does not exist. | The result per request does not exist. Either inference has not completed or result has expired (7 days). |
+
+#### Data Processing Errors
+The following error codes do not have associated HTTP Error codes.
+
+| Error Code | Error Message | Comment |
+| | | |
+| `NoVariablesFound` | No variables found. Please check that your files are organized as per instruction. | No csv files could be found from the data source. This is typically caused by wrong organization of files. Please refer to the sample data for the desired structure. |
+| `DuplicatedVariables` | There are multiple variables with the same name. | There are duplicated variable names. |
+| `FileNotExist` | File <filename> does not exist. | This error usually happens during inference. The variable has appeared in the training data but is missing in the inference data. |
+| `RedundantFile` | File <filename> is redundant. | This error usually happens during inference. The variable was not in the training data but appeared in the inference data. |
+| `FileSizeTooLarge` | The size of file <filename> is too large. | The size of the single csv file <filename> exceeds the limit. Please train with less data. |
+| `ReadingFileError` | Errors occurred when reading <filename>. <error messages> | Failed to read the file <filename>. You may refer to <error messages> for more details or verify with `pd.read_csv(filename)` in a local environment. |
+| `FileColumnsNotExist` | Columns timestamp or value in file <filename> do not exist. | Each csv file must have two columns with names **timestamp** and **value** (case sensitive). |
+| `VariableParseError` | Variable <variable> parse <error message> error. | Cannot process the <variable> due to runtime errors. Please refer to the <error message> for more details or contact us with the <error message>. |
+| `MergeDataFailed` | Failed to merge data. Please check data format. | Data merge failed. This is possibly due to wrong data format, organization of files, etc. Please refer to the sample data for the current file structure. |
+| `ColumnNotFound` | Column <column> cannot be found in the merged data. | A column is missing after merge. Please verify the data. |
+| `NumColumnsMismatch` | Number of columns of merged data does not match the number of variables. | Please verify the data. |
+| `TooManyData` | Too many data points. Maximum number is 1000000 per variable. | Please reduce the size of input data. |
+| `NoData` | There is no effective data | There is no data to train/inference after processing. Please check the start time and end time. |
+| `DataExceedsLimit` | The length of data whose timestamp is between `startTime` and `endTime` exceeds limit(<limit>). | The size of data after processing exceeds the limit. (Currently no limit on processed data.) |
+| `NotEnoughInput` | Not enough data. The length of data is <data length>, but the minimum length should be larger than sliding window which is <sliding window size>. | The minimum number of data points for inference is the size of sliding window. Try to provide more data for inference. |
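Several of these checks can be reproduced locally before uploading data. A minimal sketch against one per-variable csv file; the filename is a hypothetical example:

```python
import pandas as pd

# Hypothetical input file; each variable's csv needs exactly these two columns.
filename = "variable_1.csv"

df = pd.read_csv(filename)  # ReadingFileError: confirm the file parses at all

# FileColumnsNotExist: columns must be named 'timestamp' and 'value' (case sensitive)
missing = {"timestamp", "value"} - set(df.columns)
if missing:
    raise ValueError(f"missing columns in {filename}: {missing}")

# InvalidTimestampFormat: every timestamp must parse
df["timestamp"] = pd.to_datetime(df["timestamp"])

# TooManyData: at most 1,000,000 points per variable
if len(df) > 1_000_000:
    raise ValueError(f"{filename} has {len(df)} rows; the maximum is 1,000,000 per variable")
```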
cognitive-services Faq Stt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/faq-stt.md
- Title: Speech to Text frequently asked questions
-description: Get answers to frequently asked questions about the Speech to Text service.
- Previously updated : 02/01/2021
-# Speech to Text frequently asked questions
-
-If you can't find answers to your questions in this FAQ, check out [other support options](../cognitive-services-support-options.md?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext%253fcontext%253d%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
-
-## General
-
-**Q: What is the difference between a baseline model and a custom Speech to Text model?**
-
-**A**: A baseline model has been trained by using Microsoft-owned data and is already deployed in the cloud. You can use a custom model to adapt a model to better fit a specific environment that has specific ambient noise or language. Factory floors, cars, or noisy streets would require an adapted acoustic model. Topics like biology, physics, radiology, product names, and custom acronyms would require an adapted language model. If you train a custom model, you should start with related text to improve the recognition of special terms and phrases.
-
-**Q: Where do I start if I want to use a baseline model?**
-
-**A**: First, get a [subscription key](overview.md#try-the-speech-service-for-free). If you want to make REST calls to the predeployed baseline models, see the [REST APIs](./overview.md#reference-docs). If you want to use WebSockets, [download the SDK](speech-sdk.md).
-
-**Q: Do I always need to build a custom speech model?**
-
-**A**: No. If your application uses generic, day-to-day language, you don't need to customize a model. If your application is used in an environment where there's little or no background noise, you don't need to customize a model.
-
-You can deploy baseline and customized models in the portal and then run accuracy tests against them. You can use this feature to measure the accuracy of a baseline model versus a custom model.
-
-**Q: How will I know when processing for my dataset or model is complete?**
-
-**A**: Currently, the status of the model or dataset in the table is the only way to know. When the processing is complete, the status is **Succeeded**.
-
-**Q: Can I create more than one model?**
-
-**A**: There's no limit on the number of models you can have in your collection.
-
-**Q: I realized I made a mistake. How do I cancel my data import or model creation that's in progress?**
-
-**A**: Currently, you can't roll back an acoustic or language adaptation process. You can delete imported data and models when they're in a terminal state.
-
-**Q: I get several results for each phrase with the detailed output format. Which one should I use?**
-
-**A**: Always take the first result, even if another result ("N-Best") might have a higher confidence value. The Speech service considers the first result to be the best. It can also be an empty string if no speech was recognized.
-
-The other results are likely worse and might not have full capitalization and punctuation applied. These results are most useful in special scenarios such as giving users the option to pick corrections from a list or handling incorrectly recognized commands.
-
-**Q: Why are there different base models?**
-
-**A**: You can choose from more than one base model in the Speech service. Each model name contains the date when it was added. When you start training a custom model, use the latest model to get the best accuracy. Older base models are still available for some time when a new model is made available. You can continue using the model that you have worked with until it is retired (see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)). It is still recommended to switch to the latest base model for better accuracy.
-
-**Q: Can I update my existing model (model stacking)?**
-
-**A**: You can't update an existing model. As a solution, combine the old dataset with the new dataset and readapt.
-
-The old dataset and the new dataset must be combined in a single .zip file (for acoustic data) or in a .txt file (for language data). When adaptation is finished, the new, updated model needs to be redeployed to obtain a new endpoint
-
-**Q: When a new version of a base model is available, is my deployment automatically updated?**
-
-**A**: Deployments will NOT be automatically updated.
-
-If you have adapted and deployed a model, that deployment will remain as is. You can decommission the deployed model, readapt using the newer version of the base model and redeploy for better accuracy.
-
-Both base models and custom models will be retired after some time (see [Model and endpoint lifecycle](./how-to-custom-speech-model-and-endpoint-lifecycle.md)).
-
-**Q: Can I download my model and run it locally?**
-
-**A**: You can run a custom model locally in a [Docker container](speech-container-howto.md?tabs=cstt).
-
-**Q: Can I copy or move my datasets, models, and deployments to another region or subscription?**
-
-**A**: You can use the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy a custom model to another region or subscription. Datasets or deployments cannot be copied. You can import a dataset again in another subscription and create endpoints there using the model copies.
-
-**Q: Are my requests logged?**
-
-**A**: By default requests are not logged (neither audio, nor transcription). If necessary, you may select *Log content from this endpoint* option when you [create a custom endpoint](how-to-custom-speech-train-model.md#deploy-a-custom-model). You can also enable audio logging in the [Speech SDK](how-to-use-logging.md) on a per-request basis without creating a custom endpoint. In both cases, audio and recognition results of requests will be stored in secure storage. For subscriptions that use Microsoft-owned storage, they will be available for 30 days.
-
-You can export the logged files on the deployment page in Speech Studio if you use a custom endpoint with *Log content from this endpoint* enabled. If audio logging is enabled via the SDK, call the [API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetBaseModelLogs) to access the files.
-
-**Q: Are my requests throttled?**
-
-**A**: See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md).
-
-**Q: How am I charged for dual channel audio?**
-
-**A**: If you submit each channel separately (each channel in its own file), you will be charged for the duration of each file. If you submit a single file with each channel multiplexed together, then you will be charged for the duration of the single file. For details on pricing please refer to the [Azure Cognitive Services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
-
-> [!IMPORTANT]
-> If you have further privacy concerns that prohibit you from using the custom Speech service, contact one of the support channels.
-
-## Increasing concurrency
-See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md).
--
-## Importing data
-
-**Q: What is the limit on the size of a dataset, and why is it the limit?**
-
-**A**: The limit is due to the restriction on the size of a file for HTTP upload. See [Speech Services Quotas and Limits](speech-services-quotas-and-limits.md) for the actual limit. You can split your data into multiple datasets and select all of them to train the model.
-
-**Q: Can I zip my text files so I can upload a larger text file?**
-
-**A**: No. Currently, only uncompressed text files are allowed.
-
-**Q: The data report says there were failed utterances. What is the issue?**
-
-**A**: Failing to upload 100 percent of the utterances in a file is not a problem. If the vast majority of the utterances in an acoustic or language dataset (for example, more than 95 percent) are successfully imported, the dataset is usable. However, we recommend that you try to understand why the utterances failed and fix the problems. The most common problems, such as formatting errors, are easy to fix.
-
-## Creating an acoustic model
-
-**Q: How much acoustic data do I need?**
-
-**A**: We recommend starting with between 30 minutes and one hour of acoustic data.
-
-**Q: What data should I collect?**
-
-**A**: Collect data that's as close to the application scenario and use case as possible. The data collection should match the target application and users in terms of device or devices, environments, and types of speakers. In general, you should collect data from as broad a range of speakers as possible.
-
-**Q: How should I collect acoustic data?**
-
-**A**: You can create a standalone data collection application or use off-the-shelf audio recording software. You can also create a version of your application that logs the audio data and then uses the data.
-
-**Q: Do I need to transcribe adaptation data myself?**
-
-**A**: Yes. You can transcribe it yourself or use a professional transcription service. Some users prefer professional transcribers and others use crowdsourcing or do the transcriptions themselves.
-
-**Q: How long will it take to train a custom model with audio data?**
-
-**A**: Training a model with audio data can be a lengthy process. Depending on the amount of data, it can take several days to create a custom model. If it cannot be finished within one week, the service might abort the training operation and report the model as failed.
-
-Use one of the [regions](custom-speech-overview.md#set-up-your-azure-account) where dedicated hardware is available for training. The Speech service will use up to 20 hours of audio for training in these regions. In other regions, it will only use up to 8 hours.
-
-In general, the service processes approximately 10 hours of audio data per day in regions with dedicated hardware. It can only process about 1 hour of audio data per day in other regions. You can copy the fully trained model to another region using the [REST API](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription). Training with just text is much faster and typically finishes within minutes.
-
-Some base models cannot be customized with audio data. For those models, the service will use only the text of the transcription for training and ignore the audio data. Training will then finish much faster, and results will be the same as training with just text. See [Language support](language-support.md#speech-to-text) for a list of base models that support training with audio data.
-
-## Accuracy testing
-
-**Q: What is word error rate (WER) and how is it computed?**
-
-**A**: WER is the evaluation metric for speech recognition. WER is computed as the total number of errors
-(insertions, deletions, and substitutions) divided by the total number of words in the reference transcription. For more information, see [Evaluate Custom Speech accuracy](how-to-custom-speech-evaluate-data.md#evaluate-custom-speech-accuracy).
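As a compact statement of that definition, where $I$, $D$, and $S$ are the insertion, deletion, and substitution counts and $N$ is the number of words in the reference transcription:

$$
\text{WER} = \frac{I + D + S}{N} \times 100\%
$$

For example, a 20-word reference recognized with 1 insertion, 1 deletion, and 2 substitutions gives WER = (1 + 1 + 2) / 20 = 20%.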
-
-**Q: How do I determine whether the results of an accuracy test are good?**
-
-**A**: The results show a comparison between the baseline model and the model you customized. You should aim to beat the baseline model to make customization worthwhile.
-
-**Q: How do I determine the WER of a base model so I can see if there was an improvement?**
-
-**A**: The offline test results show the baseline accuracy of the custom model and the improvement over baseline.
-
-## Creating a language model
-
-**Q: How much text data do I need to upload?**
-
-**A**: It depends on how different the vocabulary and phrases used in your application are from the starting language models. For all new words, it's useful to provide as many examples as possible of how those words are used. For common phrases that are used in your application, including those phrases in the language data is also useful because it tells the system to listen for these terms. It's common to have at least 100 utterances, and typically several hundred or more, in the language dataset. Also, if some types of queries are expected to be more common than others, you can insert multiple copies of the common queries in the dataset.
-
-**Q: Can I just upload a list of words?**
-
-**A**: Uploading a list of words will add the words to the vocabulary, but it won't teach the system how the words are typically used. By providing full or partial utterances (sentences or phrases of things that users are likely to say), the language model can learn the new words and how they are used. The custom language model is good not only for adding new words to the system, but also for adjusting the likelihood of known words for your application. Providing full utterances helps the system learn better.
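For illustration, here is a sketch of what a few lines of a plain-text language dataset might look like: one utterance per line, with a common query duplicated to increase its weight. All phrases below are invented examples, not prescribed content.

```
check the status of my claim
check the status of my claim
schedule a callback about my contoso invoice
transfer me to the billing department
what is my deductible for imaging services
```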
-
-## Tenant Model (Custom Speech with Microsoft 365 data)
-
-**Q: What information is included in the Tenant Model, and how is it created?**
-
-**A:** A Tenant Model is built using [public group](https://support.microsoft.com/office/learn-about-microsoft-365-groups-b565caa1-5c40-40ef-9915-60fdb2d97fa2) emails and documents that can be seen by anyone in your organization.
-
-**Q: What speech experiences are improved by the Tenant Model?**
-
-**A:** When the Tenant Model is enabled, created, and published, it is used to improve recognition for any enterprise application built using the Speech service that also passes a user Azure AD token indicating membership in the enterprise.
-
-The speech experiences built into Microsoft 365, such as Dictation and PowerPoint Captioning, aren't changed when you create a Tenant Model for your Speech service applications.
-
-## Next steps
-- [Troubleshooting](troubleshooting.md)
-- [Release notes](releasenotes.md)
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/containers/container-image-tags.md
Container images have the following tags available:
# [Latest version](#tab/current)
+Release notes for `v2.1`:
+
+Form Recognizer containers are currently in gated preview. To use them, you must submit an [online request](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUNlpBU1lFSjJUMFhKNzVHUUVLN1NIOEZETiQlQCN0PWcu) and receive approval.
+
+| Container | Tags |
+|:--|:--|
+| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview` </br> &bullet; `2.1.0.016140001-08108749-amd64-preview`|
cognitive-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/containers/form-recognizer-container-install-run.md
keywords: on-premises, Docker, container, identify
Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents and output structured data that includes the relationships in the original file.
-In this article you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by seven Form Recognizer containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom API**, and **Custom Supervised**, plus the **Read** OCR container. The **Read** container allows you to extract printed and handwritten text from images and documents with support for JPEG, PNG, BMP, PDF, and TIFF file formats. For more information, see the [Read API how-to guide](../../computer-vision/vision-api-how-to-topics/call-read-api.md).
+In this article you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by seven Form Recognizer containers: **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, **Custom API**, and **Custom Supervised** (for the Receipt, Business Card, and ID Document containers you will also need the **Read** OCR container).
## Prerequisites
For more information about these options, see [Configure containers](form-recogn
That's it! In this article, you learned concepts and workflows for downloading, installing, and running Form Recognizer containers. In summary:

* Form Recognizer provides seven Linux containers for Docker.
-* Container images are downloaded from the private container registry in Azure.
+* Container images are downloaded from the Microsoft Container Registry (MCR).
* Container images run in Docker.
* You must specify the billing information when you instantiate a container.
That's it! In this article, you learned concepts and workflows for downloading,
## Next steps
-* Review [Configure containers](form-recognizer-container-configuration.md) for configuration settings.
-* Use more [Cognitive Services Containers](../../cognitive-services-container-support.md).
+* [Form Recognizer container configuration settings](form-recognizer-container-configuration.md)
+* [Form Recognizer container image tags](../../containers/container-image-tags.md?tabs=current#form-recognizer)
+* [Cognitive Services container support page and release notes](../../cognitive-services-container-support.md)
communication-services Call Automation Apis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-automation-apis.md
# Call Automation APIs overview
-Call Automation APIs enable organizations to connect with their customers or employees at scale through automated business logic. You can use these APIs to create automated outbound reminder calls for appointments or to provide proactive notifications for events like power outages or wildfires. Applications added to a call can monitor updates as participants join or leave, allowing you to implement rich reporting and logging capabilities.
-
-## In-Call APIs
-
+Call Automation APIs allow you to connect with your users at scale through automated business logic. You can use these APIs to create automated outbound reminder calls for appointments or to provide notifications for events like power outages or wildfires. Applications added to a call can monitor updates as participants join or leave, allowing you to implement reporting and logging.
+
+![in and out-of-call apps](../media/call-automation-apps.png)
+
+Call Automation APIs are provided for both in-call (application-participant or app-participant) actions and out-of-call actions. Two key differences between these sets of APIs are:
+- In-call APIs require your application to join the call as a participant. App-participants are billed at [standard PSTN and VoIP rates](https://azure.microsoft.com/pricing/details/communication-services/).
+- In-call APIs use the `callConnectionId` associated with the app-participant, while out-of-call APIs use the `serverCallId` associated with the call instance.
+
+## Use cases
+| Use Case | In-Call (App-participant) | Out-of-Call |
+| | - | - |
+| Place or receive 1:1 calls between bot and human participants | X | |
+| Play audio prompts and listen for responses | X | |
+| Monitor in-call events | X | |
+| Create calls with multiple participants | X | |
+| Get call participants and participant details | X | |
+| Add or remove call participants | X | X |
+| Server-side actions in peer-to-peer calls (e.g. recording) | | X |
+| Play audio announcements to all participants | | X |
+| Start and manage call recording | | X |
+
+## In-Call (App-Participant) APIs
> [!NOTE]
-> In-Call applications are billed as call participants at standard PSTN and VoIP rates.
-
-
-### Create call
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "create-call"
-}-->
-```
-POST /calling/calls?api-version={api-version}
-Content-Type: application/json
-
-{
- "source: {
- "communicationUser": {
- "id": "string"
- }
- },
- "targets": [
- "communicationUser": {
- "id": "string"
- }
- ],
- "subject": "string",
- "callbackUri": "string",
- "requestedModalities": [
- "audio"
- ],
- "requestedCallEvents": [
- "participantsUpdated"
- ]
-}
-```
-**C# SDK**
-
-```C#
-// Create call client
-var connectionString = "YOUR_CONNECTION_STRING";
-var callClient = new CallClient(connectionString);
-
-//Preparing request data
-var source = new CommunicationUserIdentifier("<source-identity e.g. 8:acs:guid_guid>");
-var targets = new List<CommunicationIdentifier>()
-{
- new PhoneNumberIdentifier("<phone-number e.g. +14251001000>"),
- new CommunicationUserIdentifier("<communication-user-identity e.g. 8:acs:guid_guid>")
-};
-var createCallOptions = new CreateCallOptions(
- new Uri("<callback-url>"),
- new List<CallModality> { CallModality.Audio },
-    new List<EventSubscriptionType> { EventSubscriptionType.ParticipantsUpdated, EventSubscriptionType.DtmfReceived });
-
-//phone number associated with the resource
-createCallOptions.AlternateCallerId = new PhoneNumberIdentifier("<phone-number>");
-
-//Starting the call
-var call = await callClient.CreateCallAsync(source, targets, createCallOptions).ConfigureAwait(false);
-
-string callLegId = call.Value.CallLegId;
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 200 Success
-Content-Type: application/json
-
-{
- "callLegId": "string"
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-### End a call
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "hangup-call"
-}-->
-```
-POST /calling/calls/{callId}/Hangup?api-version={api-version}
-Content-Type: application/json
-
-```
-**C# SDK**
-
-```C#
-await callClient.HangupCallAsync("<call-leg-id>").ConfigureAwait(false);
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 202 Accepted
-Content-Type: application/json
-
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-### Play audio in call
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "play-audio"
-}-->
-```
-POST /calling/calls/{callId}/PlayAudio?api-version={api-version}
-Content-Type: application/json
-
-{
- "audioFileUri": "string",
- "loop": true,
- "operationContext": "string",
- "resourceId": "string"
-}
-```
-**C# SDK**
-
-```C#
-// Preparing data for play audio request
-var playAudioRequest = new PlayAudioRequest()
-{
-    AudioFileUri = "<audio-file-url>",
- OperationContext = "<operation-context e.g. guid>",
- Loop = <true|false>,
- ResourceId = "<resource-id e.g. guid>"
-};
-
-var response = await callClient.PlayAudioAsync("<call-leg-id>", playAudioRequest).ConfigureAwait(false);
-
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 202 Success
-Content-Type: application/json
-
-{
- "id": "string",
- "status": "notStarted",
- "operationContext": "string",
- "resultInfo": {
- "code": 0,
- "subcode": 0,
- "message": "string"
- }
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-### Cancel media processing
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "cancel-media-processing"
-}-->
-```
-POST /calling/calls/{callId}/CancelMediaProcessing?api-version={api-version}
-Content-Type: application/json
-
-{
- "operationContext": "string"
-}
-```
-**C# SDK**
-
-```C#
-await callClient.CancelMediaProcessingAsync("<call-leg-id>", "<operation-context>").ConfigureAwait(false);
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 202 Accepted
-Content-Type: application/json
-
-{
- "id": "string",
- "status": "notStarted",
- "operationContext": "string",
- "resultInfo": {
- "code": 0,
- "subcode": 0,
- "message": "string"
- }
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-### Invite participant
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "invite-participant "
-}-->
-```
-POST /calling/calls/{callId}/participants?api-version={api-version}
-Content-Type: application/json
-
-{
- "alternateCallerId": {
- "value": "<phone-number>"
- }
- "participants": [
- {
- "communicationUser": {
- "id": "<communication-user-identity>"
- }
- },
- {
- "phoneNumber": {
- "value": "<phone-number>"
- }
- }
- ],
- "operationContext": "string"
-}
-```
-**C# SDK**
-```C#
-var invitedParticipants = new List<CommunicationIdentifier>()
-{
- new CommunicationUserIdentifier("<communication-user-identity>"),
- new PhoneNumberIdentifier("<phone-number>")
-};
-
-//Alternate phone number required when inviting phone number
-var alernateCallerId = "<phone-number>";
-
-await callClient.InviteParticipantsAsync("<call-leg-id>", invitedParticipants, "<operation-context>", alernateCallerId).ConfigureAwait(false);
-
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 202 Accepted
-Content-Type: application/json
-
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-### Remove participant
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "remove-participant "
-}-->
-```
-DELETE /calling/calls/{callId}/participants/{participantId}?api-version={api-version}
-Content-Type: application/json
-
-```
-**C# SDK**
-
-```C#
-await callClient.RemoveParticipantAsync("<call-leg-id>", "<participant-id>").ConfigureAwait(false);
-
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 202 Accepted
-Content-Type: application/json
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-## In-Call Events
-Event notifications are sent as JSON payloads to the calling application via the `callbackUri` set during the create call request.
-
-### CallState Event - Establishing
-```
-{
- "id": null,
- "topic": null,
- "subject": "callLeg/531f3600-481f-41c8-8a75-3e8b2e8e6200/callState",
- "data": {
- "ConversationId": null,
- "CallLegId": "531f3600-481f-41c8-8a75-3e8b2e8e6200",
- "CallState": "Establishing"
- },
- "eventType": "Microsoft.Communication.CallLegStateChanged",
- "eventTime": "2021-05-05T20:08:39.0157964Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-### CallState Event - Established
-```
-{
- "id": null,
- "topic": null,
- "subject": "callLeg/531f3600-481f-41c8-8a75-3e8b2e8e6200/callState",
- "data": {
- "ConversationId": "aHR0cHM6Ly9jb252LXVzc2MtMDIuY29udi5za3lwZS5jb20vY29udi92RFNacTFyTEIwdVotM0dQdjBabUpnP2k9OCZlPTYzNzU1NzQzNzg4NTgzMTgxMQ",
- "CallLegId": "531f3600-481f-41c8-8a75-3e8b2e8e6200",
- "CallState": "Established"
- },
- "eventType": "Microsoft.Communication.CallLegStateChanged",
- "eventTime": "2021-05-05T20:08:59.5783985Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-
-### CallState Event - Terminating
-```
-{
- "id": null,
- "topic": null,
- "subject": "callLeg/531f3600-481f-41c8-8a75-3e8b2e8e6200/callState",
- "data": {
- "ConversationId": "aHR0cHM6Ly9jb252LXVzc2MtMDIuY29udi5za3lwZS5jb20vY29udi92RFNacTFyTEIwdVotM0dQdjBabUpnP2k9OCZlPTYzNzU1NzQzNzg4NTgzMTgxMQ",
- "CallLegId": "531f3600-481f-41c8-8a75-3e8b2e8e6200",
- "CallState": "Terminating"
- },
- "eventType": "Microsoft.Communication.CallLegStateChanged",
- "eventTime": "2021-05-05T20:13:45.7398707Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-
-### CallState Event - Terminated
-```
-{
- "id": null,
- "topic": null,
- "subject": "callLeg/531f3600-481f-41c8-8a75-3e8b2e8e6200/callState",
- "data": {
- "ConversationId": "aHR0cHM6Ly9jb252LXVzc2MtMDIuY29udi5za3lwZS5jb20vY29udi92RFNacTFyTEIwdVotM0dQdjBabUpnP2k9OCZlPTYzNzU1NzQzNzg4NTgzMTgxMQ",
- "CallLegId": "531f3600-481f-41c8-8a75-3e8b2e8e6200",
- "CallState": "Terminated"
- },
- "eventType": "Microsoft.Communication.CallLegStateChanged",
- "eventTime": "2021-05-05T20:13:46.1541814Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-
-### DTMF Received Event
-```
-{
- "id": null,
- "topic": null,
- "subject": "callLeg/471f3600-4e1f-4cd4-9eec-4a484e4cbf00/dtmf",
- "data": {
- "ToneInfo": {
- "SequenceId": 1,
- "Tone": "Tone1"
- },
- "CallLegId": "471f3600-4e1f-4cd4-9eec-4a484e4cbf00"
- },
- "eventType": "Microsoft.Communication.DtmfReceived",
- "eventTime": "2021-05-05T20:31:00.4818813Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-
-### PlayAudioResult Event
-```
-{
- "id": null,
- "topic": null,
- "subject": "callLeg/511f3600-401f-4296-b6d0-b8da6f343b00/playAudio",
- "data": {
- "ResultInfo": {
- "Code": 200,
- "Subcode": 0,
- "Message": "Action completed successfully."
- },
- "OperationContext": "6c6cbbc7-66b2-47a8-a29c-5e5f73aee86d",
- "Status": "Completed",
- "CallLegId": "511f3600-401f-4296-b6d0-b8da6f343b00"
- },
- "eventType": "Microsoft.Communication.PlayAudioResult",
- "eventTime": "2021-05-05T20:38:22.0476663Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-
-### Cancel media processing Event
-```
-{
- "id": null,
- "topic": null,
- "subject": "callLeg/471f3600-4e1f-4cd4-9eec-4a484e4cbf00/playAudio",
- "data": {
- "ResultInfo": {
- "Code": 400,
- "Subcode": 8508,
- "Message": "Action falied, the operation was cancelled."
- },
- "OperationContext": "d8aeabf7-47a0-4803-b0cc-6059a708440d",
- "Status": "Completed",
- "CallLegId": "471f3600-4e1f-4cd4-9eec-4a484e4cbf00"
- },
- "eventType": "Microsoft.Communication.PlayAudioResult",
- "eventTime": "2021-05-05T20:31:01.2789071Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-
-### Invite Participant result Event
-```
-{
- "id": "52154ee2-b2ba-420f-b42f-a69c6101c516",
- "topic": null,
- "subject": "callLeg/421f6d00-18fc-4d11-bde6-e5e371494753/inviteParticipantResult",
- "data": {
- "ResultInfo": null,
- "OperationContext": "5dbcbdd4-febf-4091-a5be-543f09b2692c",
- "Status": "Completed",
- "CallLegId": "421f6d00-18fc-4d11-bde6-e5e371494753",
- "Participants": [
- {
- "RawId": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-de04-ee58-740a-113a0d00330d",
- "CommunicationUser": {
- "Id": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-de04-ee58-740a-113a0d00330d"
- },
- "PhoneNumber": null,
- "MicrosoftTeamsUser": null
- }
- ]
- },
- "eventType": "Microsoft.Communication.InviteParticipantResult",
- "eventTime": "2021-05-05T21:49:52.8138396Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-
-### Participants Updated Event
-```
-{
- "id": null,
- "topic": null,
- "subject": "callLeg/411f6d00-088a-4ee4-a7bf-c064ac10afeb/participantsUpdated",
- "data": {
- "CallLegId": "411f6d00-088a-4ee4-a7bf-c064ac10afeb",
- "Participants": [
- {
- "Identifier": {
- "RawId": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-7904-f8c2-51b9-a43a0d0010d9",
- "CommunicationUser": {
- "Id": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-7904-f8c2-51b9-a43a0d0010d9"
- },
- "PhoneNumber": null,
- "MicrosoftTeamsUser": null
- },
- "ParticipantId": "de7539f7-019e-4934-a4c9-9a770e5e07bb",
- "IsMuted": false
- },
- {
- "Identifier": {
- "RawId": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-547e-c56e-71bf-a43a0d002dc1",
- "CommunicationUser": {
- "Id": "8:acs:016a7064-0581-40b9-be73-6dde64d69d72_00000009-547e-c56e-71bf-a43a0d002dc1"
- },
- "PhoneNumber": null,
- "MicrosoftTeamsUser": null
- },
- "ParticipantId": "16c3518f-5ff5-4989-8073-39255a71fb58",
- "IsMuted": false
- }
- ]
- },
- "eventType": "Microsoft.Communication.ParticipantsUpdated",
- "eventTime": "2021-04-16T06:26:37.9121542Z",
- "metadataVersion": null,
- "dataVersion": null
-}
-```
-## Out-of-Call APIs
+> In-Call applications are billed as call participants at [standard PSTN and VoIP rates](https://azure.microsoft.com/pricing/details/communication-services/).
> [!NOTE]
-> The conversationId in all Out-of-Call APIs can be either the conversationId obtained from the client or the groupId of a group call.
-
-### Join call
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "join-call"
-}-->
-```
-POST /calling/conversations/{conversationId}/join?api-version={api-version}
-Content-Type: application/json
-
-{
- "source: {
- "communicationUser": {
- "id": "string"
- }
- },
- "subject": "string",
- "callbackUri": "string",
- "requestedModalities": [
- "audio"
- ],
- "requestedCallEvents": [
- "participantsUpdated"
- ]
-}
-```
-**C# SDK**
+> In-Call actions are attributed to the App-participant associated with the `callConnectionId` used in the API call.
-```C#
-// Create conversation client
-var connectionString = "YOUR_CONNECTION_STRING";
-var conversationClient = new ConversationClient(connectionString);
+In-call APIs enable an application to take actions in a call as an app-participant. When an application answers or joins a call, a `callConnectionId` is assigned, which is used for in-call actions such as:
+- Add or remove call participants
+- Play audio prompts and listen for DTMF responses
+- Listen to call roster updates and events
+- Hang up a call
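The following is a minimal sketch chaining the actions listed above, reusing the preview `CallClient` surface from the request examples earlier in this article. Exact type, method, and id-parameter names may differ between SDK versions, so treat this as illustrative rather than definitive.

```C#
var callClient = new CallClient("YOUR_CONNECTION_STRING");

// Identifier assigned when the application created, answered, or joined the call.
string callId = "<call-connection-id>";

// Play an audio prompt into the call.
var playAudioRequest = new PlayAudioRequest()
{
    AudioFileUri = "<audio-file-url>",
    Loop = false,
    OperationContext = "<operation-context e.g. guid>",
    ResourceId = "<resource-id e.g. guid>"
};
await callClient.PlayAudioAsync(callId, playAudioRequest).ConfigureAwait(false);

// Add a participant (an alternate caller id is additionally required when
// inviting a phone number, as shown earlier), then hang up when finished.
var invitedParticipants = new List<CommunicationIdentifier>()
{
    new CommunicationUserIdentifier("<communication-user-identity>")
};
await callClient.InviteParticipantsAsync(callId, invitedParticipants, "<operation-context>").ConfigureAwait(false);
await callClient.HangupCallAsync(callId).ConfigureAwait(false);
```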
-//Preparing request data
-var source = new CommunicationUserIdentifier("<source-identity e.g. 8:acs:guid_guid>");
-var createCallOptions = new CreateCallOptions(
- new Uri("<callback-url>"),
- new List<CallModality> { CallModality.Audio },
-    new List<EventSubscriptionType> { EventSubscriptionType.ParticipantsUpdated, EventSubscriptionType.DtmfReceived });
+![in-call application](../media/call-automation-in-call.png)
-//Conversation id for the call. Can be a conversation id or a group id.
-var conversationId = "<group-id or base64-encoded-conversation-url>";
+### In-Call Events
+Event notifications are sent as JSON payloads to the calling application via the `callbackUri` set when joining the call. Events sent to in-call app-participants are:
+- CallState events (Establishing, Established, Terminating, Terminated)
+- DTMF received
+- Play audio result
+- Cancel media processing
+- Invite participant result
+- Participants updated
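As a sketch of consuming these callbacks, the handler below dispatches on the `eventType` field using the JSON payload shape shown in the examples in this article. The hosting framework and deserialization approach here are illustrative assumptions, not a prescribed pattern.

```C#
using System;
using System.Text.Json;

static void HandleCallbackPayload(string json)
{
    using JsonDocument doc = JsonDocument.Parse(json);
    JsonElement root = doc.RootElement;

    string eventType = root.GetProperty("eventType").GetString();
    JsonElement data = root.GetProperty("data");

    switch (eventType)
    {
        case "Microsoft.Communication.CallLegStateChanged":
            // e.g. Establishing, Established, Terminating, Terminated
            Console.WriteLine($"Call state: {data.GetProperty("CallState").GetString()}");
            break;
        case "Microsoft.Communication.DtmfReceived":
            Console.WriteLine($"DTMF tone: {data.GetProperty("ToneInfo").GetProperty("Tone").GetString()}");
            break;
        case "Microsoft.Communication.PlayAudioResult":
            Console.WriteLine($"Play audio status: {data.GetProperty("Status").GetString()}");
            break;
        default:
            Console.WriteLine($"Unhandled event: {eventType}");
            break;
    }
}
```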
-//Starting the call
-var call = await conversationClient.JoinCallAsync(conversationId, source, createCallOptions).ConfigureAwait(false);
-string callLegId = call.Value.CallLegId;
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 200 Success
-Content-Type: application/json
-
-{
- "callLegId": "string"
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-
-### Invite participants
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "invite-participant "
-}-->
-```
-POST /calling/conversations/{conversationId}/participants?api-version={api-version}
-Content-Type: application/json
-
-{
- "participants": [
- {
- "communicationUser": {
- "id": "string"
- }
- }
- ],
- "operationContext": "string"
-}
-```
-**C# SDK**
-```C#
-var invitedParticipants = new List<CommunicationIdentifier>()
-{
- new CommunicationUserIdentifier("<communication-user-identity>")
-};
-
-await conversationClient.InviteParticipantsAsync("<conversation-id>", invitedParticipants, new Uri("<callback-url>"), "<operation-context>").ConfigureAwait(false);
-
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 202 Accepted
-Content-Type: application/json
-
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-### Remove participant
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "remove-participant "
-}-->
-```
-DELETE /calling/conversations/{conversationId}/participants/{participantId}?api-version={api-version}
-Content-Type: application/json
-
-```
-**C# SDK**
-
-```C#
-await conversationClient.RemoveParticipantAsync("<conversationId>", "<participant-id>").ConfigureAwait(false);
-
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 202 Accepted
-Content-Type: application/json
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 401 Unauthorized
-Content-Type: application/json
+## Out-of-Call APIs
+Out-of-Call APIs enable you to perform actions on a call or meeting without having an app-participant present. Out-of-Call APIs use the `serverCallId`, provided by either the Calling Client SDK or generated when a call is created via the Call Automation APIs. Because out-of-call APIs do not require an app-participant, they are useful for implementing server-side business logic into peer-to-peer calls. For example, consider a support call scenario that started as a peer-to-peer call, and the agent (participant B) wants to bring in a subject matter expert to assist. Participant B triggers an event in the client interface for a server application to identify an available subject matter expert and invite them to the call. The end-state of the flow shown below is a group call with three human participants.
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
+![out-of-call application](../media/call-automation-out-of-call.png)
-{
- "code": "<error-code>",
- "message": "<error-message>",
-}
-```
+Out-of-call APIs are available for actions such as:
+- Add or remove call participants
+- Start/stop/pause/resume call recording
+
+### Out-of-Call Events
+Event notifications are sent as JSON payloads to the calling application via the `callbackUri` provided in the originating API call. Actions with corresponding out-of-call events are:
+- Call Recording (Start, Stop, Pause, Resume)
+- Invite participant result
## Next steps
-Check out our [sample](https://github.com/Azure/communication-preview/tree/master/samples/Server-Calling/IncidentReporter) to learn more.
+Check out the [Call Automation Quickstart Sample](../../quickstarts/voice-video-calling/call-automation-api-sample.md) to learn more.
+
+Learn more about [Call Recording](./call-recording.md).
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/call-recording.md
Last updated 04/13/2021
+
# Call Recording overview
-
> [!NOTE]
> Many countries and states have laws and regulations that apply to the recording of PSTN, voice, and video calls, which often require that users consent to the recording of their communications. It is your responsibility to use the call recording capabilities in compliance with the law. You must obtain consent from the parties of recorded communications in a manner that complies with the laws applicable to each participant.

> [!NOTE]
> Regulations around the maintenance of personal data require the ability to export user data. In order to support these requirements, recording metadata files include the participantId for each call participant in the `participants` array. You can cross-reference the MRIs in the `participants` array with your internal user identities to identify participants in a call. An example of a recording metadata file is provided below for reference.
-Call Recording provides a set of APIs to start, stop, pause and resume recording. These APIs can be accessed from server-side business logic or via events triggered by user actions. Recorded media output is in `MP4 Audio+Video` format, which is the same format that Teams uses to record media. Notifications related to media and metadata are emitted via Event Grid. Recordings are stored for 48 hours on built-in temporary storage for retrieval and movement to a long-term storage solution of choice.
-
-## Run-time Control APIs
-Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation, or from a user-triggered action that tells the server application to start recording. In either scenario, `<conversation-id>` is required to record a specific meeting or call.
-
-#### Getting the conversation ID from a server-initiated call
-
-A `ConversationId` is returned via the `Microsoft.Communication.CallLegStateChanged` event. This event notification is emitted after a call has been established. It can be found in the `data.ConversationId` field. This value can be used directly as the `{conversationId}` parameter in run-time control APIs:
-```
- {
- "id": null,
- "topic": null,
- "subject": "callLeg/<callLegId>/callState",
- "data": {
--> "ConversationId": "<conversation-id>", <-
- "CallLegId": "<callLegId>",
- "CallState": "Established"
- },
- "eventType": "Microsoft.Communication.CallLegStateChanged",
- "eventTime": "2021-04-14T16:32:34.1115003Z",
- "metadataVersion": null,
- "dataVersion": null
- }
-```
-
-#### Getting the conversation ID from a user-triggered event on the client
-
-From the JavaScript `@azure/communication-calling` library, after establishing a call, invoke `let result = call.info.getConversationUrl()` to get the `conversationUrl`, then
-**Base64Url encode the `conversationUrl` to get the `{conversationId}` for use in the run-time control APIs**. Encoding can be done either on the client before sending the event to the server, or server-side.
-
-Note that the `conversationUrl` *must* be Base64Url encoded, not to be confused with plain Base64 encoding (i.e., `btoa`).
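For example, here is a minimal C# sketch of server-side Base64Url encoding. Base64Url differs from plain Base64 by using `-` and `_` instead of `+` and `/` and by omitting `=` padding.

```C#
using System;
using System.Text;

static string Base64UrlEncode(string conversationUrl)
{
    // Standard Base64, then the Base64Url character substitutions.
    byte[] bytes = Encoding.UTF8.GetBytes(conversationUrl);
    return Convert.ToBase64String(bytes)
        .TrimEnd('=')
        .Replace('+', '-')
        .Replace('/', '_');
}
```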
-
-### Start recording
-
-#### Request
-
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "start-recording"
-}-->
-```
-POST /conversations/{conversationId}/Recordings
-Content-Type: application/json
-
-{
- "operationContext": "string", // developer provided string for correlation context on each operation
- "recordingStateCallbackUri": "string"
-}
-```
-**C# SDK**
-<!-- {
- "blockType": "request",
- "name": "start-recording"
-}-->
-```C#
-string connectionString = "YOUR_CONNECTION_STRING";
-ConversationClient conversationClient = new ConversationClient(connectionString);
-
-/// start call recording
-StartRecordingResponse startRecordingResponse = await conversationClient.StartRecordingAsync(
-    conversationId: "<conversation-id>",
- operationContext: "<operation-context>", /// developer provided string for correlation context on each operation
- recordingStateCallbackUri: "<recording-state-callback-uri>").ConfigureAwait(false);
-
-string recordingId = startRecordingResponse.RecordingId;
-```
-
-#### Response
-
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 200 Success
-Content-Type: application/json
-
-{
- "recordingId": "string"
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-```
-HTTP/1.1 404 Not found
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-
-### Get call recording state
-
-#### Request
-
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "get-recording-state"
-}-->
-```http
-GET /conversations/{conversationId}/recordings/{recordingId}
-Content-Type: application/json
-
-{
-}
-```
-**C# SDK**
-<!-- {
- "blockType": "request",
- "name": "start-recording"
-}-->
-```C#
-string connectionString = "YOUR_CONNECTION_STRING";
-ConversationClient conversationClient = new ConversationClient(connectionString);
-
-/// get recording state
-GetCallRecordingStateResponse recordingState = await conversationClient.GetRecordingStateAsync(
- conversationId: "<conversation-id>",
-    recordingId: "<recording-id>").ConfigureAwait(false);
-```
-#### Response
-
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 200 Success
-Content-Type: application/json
-
-{
- "recordingState": "active"
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-
-### Stop recording
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "stop-recording"
-}-->
-```
-DELETE /conversations/{conversationId}/recordings/{recordingId}
-Content-Type: application/json
-
-{
- "operationContext": "string" // developer provided string for correlation context on each operation
-}
-```
-**C# SDK**
-<!-- {
- "blockType": "request",
- "name": "start-recording"
-}-->
-```C#
-string connectionString = "YOUR_CONNECTION_STRING";
-ConversationClient conversationClient = new ConversationClient(connectionString);
-
-/// stop recording
-StopRecordingResponse response = await conversationClient.StopRecordingAsync(
-    conversationId: "<conversation-id>",
-    recordingId: "<recording-id>",
-    operationContext: "<operation-context>").ConfigureAwait(false);
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 200 Success
-Content-Type: application/json
-
-{
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-
-### Pause recording
-Pausing and resuming call recording enables you to skip recording a portion of a call or meeting, and resume recording to a single file.
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "pause-recording"
-}-->
-```
-POST /conversations/{conversationId}/recordings/{recordingId}/Pause
-Content-Type: application/json
-
-{
- "operationContext": "string" // developer provided string for correlation context on each operation
-}
-```
-**C# SDK**
-<!-- {
- "blockType": "request",
- "name": "start-recording"
-}-->
-```C#
-string connectionString = "YOUR_CONNECTION_STRING";
-ConversationClient conversationClient = new ConversationClient(connectionString);
-
-/// pause recording
-PauseRecordingResponse response = await conversationClient.PauseRecordingAsync(
-    conversationId: "<conversation-id>",
-    recordingId: "<recording-id>",
-    operationContext: "<operation-context>").ConfigureAwait(false);
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 200 Success
-Content-Type: application/json
-
-{
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
-
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-
-### Resume recording
-#### Request
-**HTTP**
-<!-- {
- "blockType": "request",
- "name": "resume-recording"
-}-->
-```
-POST /conversations/{conversationId}/recordings/{recordingId}/Resume
-Content-Type: application/json
-
-{
- "operationContext": "string" // developer provided string for correlation context on each operation
-}
-```
-**C# SDK**
-<!-- {
- "blockType": "request",
- "name": "start-recording"
-}-->
-```C#
-string connectionString = "YOUR_CONNECTION_STRING";
-ConversationClient conversationClient = new ConversationClient(connectionString);
-
-/// resume recording
-ResumeRecordingResponse response = await conversationClient.ResumeRecordingAsync(
-    conversationId: "<conversation-id>",
-    recordingId: "<recording-id>",
-    operationContext: "<operation-context>").ConfigureAwait(false);
-```
-#### Response
-**HTTP**
-<!-- {
- "blockType": "response",
- "truncated": true,
-} -->
-
-```http
-HTTP/1.1 200 Success
-Content-Type: application/json
-
-{
-}
-```
-```
-HTTP/1.1 400 Bad request
-Content-Type: application/json
+> [!NOTE]
+> Call Recording is currently only available for Communication Services resources created in the US region.
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
-```
-HTTP/1.1 500 Internal server error
-Content-Type: application/json
+Call Recording provides a set of APIs to start, stop, pause and resume recording. These APIs can be accessed from server-side business logic or via events triggered by user actions. Recorded media output is in MP4 Audio+Video format, which is the same format that Teams uses to record media. Notifications related to media and metadata are emitted via Event Grid. Recordings are stored for 48 hours on built-in temporary storage for retrieval and movement to a long-term storage solution of choice.
-{
- "code": "string",
- "message": "string",
- "target": "string",
- "details": [
- null
- ]
-}
-```
+![Call recording concept diagram](../media/call-recording-concept.png)
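As a sketch of the full lifecycle, the snippet below chains the start, pause, resume, and stop operations shown earlier in this article. The `ConversationClient` surface mirrors the preview SDK examples above; in newer SDK versions the call is keyed off the `serverCallId`, and exact names may differ.

```C#
var conversationClient = new ConversationClient("YOUR_CONNECTION_STRING");

// Start recording and keep the returned id; every follow-on operation needs it.
StartRecordingResponse start = await conversationClient.StartRecordingAsync(
    conversationId: "<server-call-id>",
    operationContext: "<operation-context>",
    recordingStateCallbackUri: "<recording-state-callback-uri>").ConfigureAwait(false);
string recordingId = start.RecordingId;

// Skip a sensitive portion of the call, then resume into the same output file.
await conversationClient.PauseRecordingAsync(
    conversationId: "<server-call-id>",
    recordingId: recordingId,
    operationContext: "<operation-context>").ConfigureAwait(false);
await conversationClient.ResumeRecordingAsync(
    conversationId: "<server-call-id>",
    recordingId: recordingId,
    operationContext: "<operation-context>").ConfigureAwait(false);

// Stop recording once the call wraps up.
await conversationClient.StopRecordingAsync(
    conversationId: "<server-call-id>",
    recordingId: recordingId,
    operationContext: "<operation-context>").ConfigureAwait(false);
```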
## Media output types

Call recording currently supports mixed audio+video MP4 output format. The output media matches meeting recordings produced via Microsoft Teams recording.
Call recording currently supports mixed audio+video MP4 output format. The outpu
| :-- | :-- | :-- | :-- |
| audioVideo | mp4 | 1920x1080 8 FPS video of all participants in default tile arrangement | 16kHz mp4a mixed audio of all participants |
+
+## Run-time Control APIs
+Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation, or from a user-triggered action that tells the server application to start recording. Call Recording APIs are [Out-of-Call APIs](./call-automation-apis.md#out-of-call-apis), using the `serverCallId` to initiate recording. When creating a call, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. See our [Call Recording Quickstart Sample](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn about retrieving the `serverCallId` from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
+
+| Operation | Operates On