Updates from: 04/01/2023 01:17:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Concepts Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-custom-attributes.md
Azure AD supports adding custom data to resources using [extensions](/graph/exte
- [onPremisesExtensionAttributes](/graph/extensibility-overview?tabs=http#extension-attributes) are a set of 15 attributes that can store extended user string attributes.
- [Directory extensions](/graph/extensibility-overview?tabs=http#directory-azure-ad-extensions) allow the schema extension of specific directory objects, such as users and groups, with strongly typed attributes through registration with an application in the tenant.
-Both types of extensions can be configured By using Azure AD Connect for users who are managed on-premises, or MSGraph APIs for cloud-only users.
+Both types of extensions can be configured by using Azure AD Connect for users who are managed on-premises, or Microsoft Graph APIs for cloud-only users.
>[!Note]
>The following types of extensions aren't supported for synchronization:
->- Custom Security Attributes in Azure AD (Preview)
->- MSGraph Schema Extensions
->- MSGraph Open Extensions
+>- Custom security attributes in Azure AD (Preview)
+>- Microsoft Graph schema extensions
+>- Microsoft Graph open extensions
## Requirements
To check the backfilling status, click **Azure AD DS Health** and verify the **S
To configure onPremisesExtensionAttributes or directory extensions for cloud-only users in Azure AD, see [Custom data options in Microsoft Graph](/graph/extensibility-overview?tabs=http#custom-data-options-in-microsoft-graph).
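As a minimal sketch of that cloud-only path, one of the extension attributes could be set through Microsoft Graph as follows; the user ID, token acquisition, and attribute value here are placeholder assumptions, not part of the linked article:

```python
import requests

# Placeholders - substitute a real user object ID and a token with User.ReadWrite.All.
user_id = "00000000-0000-0000-0000-000000000000"
access_token = "<token acquired with MSAL or another OAuth client>"

# PATCH one of the 15 onPremisesExtensionAttributes on a cloud-only user.
response = requests.patch(
    f"https://graph.microsoft.com/v1.0/users/{user_id}",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    },
    json={"onPremisesExtensionAttributes": {"extensionAttribute1": "CostCenter-1234"}},
)
response.raise_for_status()  # Graph returns 204 No Content on success
```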
-To sync onPremisesExtensionAttributes or directory extensions from on-premises to Azure AD, [configure Azure AD Connect](../active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md).
+To sync onPremisesExtensionAttributes or directory extensions from on-premises to Azure AD, [configure Azure AD Connect](../active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md).
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 03/30/2023 Last updated : 03/31/2023
When the provisioning service is started, the first cycle will:
5. If a matching user is found, it's updated using the attributes provided by the source system. After the user account is matched, the provisioning service detects and caches the target system's ID for the new user. This ID is used to run all future operations on that user.
-6. If the attribute mappings contain "reference" attributes, the service does additional updates on the target system to create and link the referenced objects. For example, a user may have a "Manager" attribute in the target system, which is linked to another user created in the target system.
+6. If the attribute mappings contain "reference" attributes, the service does more updates on the target system to create and link the referenced objects. For example, a user may have a "Manager" attribute in the target system, which is linked to another user created in the target system.
7. Persist a watermark at the end of the initial cycle, which provides the starting point for the later incremental cycles.
After the initial cycle, all other cycles will:
5. If a matching user is found, it's updated using the attributes provided by the source system. If it's a newly assigned account that is matched, the provisioning service detects and caches the target system's ID for the new user. This ID is used to run all future operations on that user.
-6. If the attribute mappings contain "reference" attributes, the service does additional updates on the target system to create and link the referenced objects. For example, a user may have a "Manager" attribute in the target system, which is linked to another user created in the target system.
+6. If the attribute mappings contain "reference" attributes, the service does more updates on the target system to create and link the referenced objects. For example, a user may have a "Manager" attribute in the target system, which is linked to another user created in the target system.
7. If a user that was previously in scope for provisioning is removed from scope, including being unassigned, the service disables the user in the target system via an update.
After the initial cycle, all other cycles will:
> [!NOTE]
> You can optionally disable the **Create**, **Update**, or **Delete** operations by using the **Target object actions** check boxes in the [Mappings](customize-application-attributes.md) section. The logic to disable a user during an update is also controlled via an attribute mapping from a field such as *accountEnabled*.
-The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of the following events occurs:
+The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of these events occurs:
- The service is manually stopped using the Azure portal, or using the appropriate Microsoft Graph API command.
-- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again. This won't break the links between source and target objects. To break the links use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with the following request:
+- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. The action clears any stored watermark and causes all source objects to be evaluated again. Also, the action doesn't break the links between source and target objects. To break the links, use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with this request:
<!-- { "blockType": "request",
Content-type: application/json
}
```
- A new initial cycle is triggered because of a change in attribute mappings or scoping filters. This action also clears any stored watermark and causes all source objects to be evaluated again.
-- The provisioning process goes into quarantine (see below) because of a high error rate, and stays in quarantine for more than four weeks. In this event, the service will be automatically disabled.
+- The provisioning process goes into quarantine (see the quarantine section) because of a high error rate, and stays in quarantine for more than four weeks. In this event, the service will be automatically disabled.
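As a hedged sketch of the restart call referenced above, the request might be issued from a script like the following; the service principal ID, job ID, and token are placeholder assumptions:

```python
import requests

# Placeholders - substitute your provisioning app's service principal and job IDs.
service_principal_id = "<servicePrincipal-id>"
job_id = "<synchronizationJob-id>"
access_token = "<token with Synchronization.ReadWrite.All>"

# Restart the job; resetScope "Full" clears the watermark and breaks
# the links between source and target objects.
response = requests.post(
    f"https://graph.microsoft.com/beta/servicePrincipals/{service_principal_id}"
    f"/synchronization/jobs/{job_id}/restart",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    },
    json={"criteria": {"resetScope": "Full"}},
)
response.raise_for_status()  # 204 No Content on success
```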
### Errors and retries
Confirm the mapping for *active* for your application. If you're using an applicat
**Configure your application to delete a user**
-The following scenarios will trigger a disable or a delete:
+These scenarios trigger a disable or a delete:
* A user is soft deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false). 30 days after a user is deleted in Azure AD, they're permanently deleted from the tenant. At this point, the provisioning service sends a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
* A user is permanently deleted / removed from the recycle bin in Azure AD.
The following scenarios will trigger a disable or a delete:
By default, the Azure AD provisioning service soft deletes or disables users that go out of scope. If you want to override this default behavior, you can set a flag to [skip out-of-scope deletions.](skip-out-of-scope-deletions.md)
-If one of the above four events occurs and the target application doesn't support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
+If one of the four events occurs and the target application doesn't support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
If you see an attribute IsSoftDeleted in your attribute mappings, it's used to determine the state of the user and whether to send an update request with active = false to soft delete the user.
**Deprovisioning events**
-The following table describes how you can configure deprovisioning actions with the Azure AD provisioning service. These rules are written with the non-gallery / custom application in mind, but generally apply to applications in the gallery. However, the behavior for gallery applications can differ as they have been optimized to meet the needs of the application. For example, the Azure AD provisioning service may always sende a request to hard delete users in certain applications rather than soft deleting, if the target application doesn't support soft deleting users.
+The table describes how you can configure deprovisioning actions with the Azure AD provisioning service. These rules are written with the non-gallery / custom application in mind, but generally apply to applications in the gallery. However, the behavior for gallery applications can differ as they've been optimized to meet the needs of the application. For example, the Azure AD provisioning service may always send a request to hard delete users in certain applications rather than soft deleting them, if the target application doesn't support soft deleting users.
|Scenario|How to configure in Azure AD|
|--|--|
The following table describes how you can configure deprovisioning actions with
**Known limitations**
-* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app we will send a disable request. At that point, the user isn't managed by the service and we won't send a delete request when they're deleted from the directory.
+* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app, then a disable request is sent. At that point, the user isn't managed by the service and a delete request isn't sent when the user is deleted from the directory.
* Provisioning a user that is disabled in Azure AD isn't supported. They must be active in Azure AD before they're provisioned.
* When a user goes from soft-deleted to active, the Azure AD provisioning service will activate the user in the target app, but won't automatically restore the group memberships. The target application should maintain the group memberships for the user in inactive state. If the target application doesn't support this, you can restart provisioning to update the group memberships.
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
description: Learn how to use system-preferred multifactor authentication
Previously updated : 03/22/2023 Last updated : 03/31/2023
Content-Type: application/json
}
```
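The request block above is truncated in this digest; as a hedged sketch only, a script updating the Authentication methods policy to enable system-preferred MFA could look like the following (the token and targeting values are placeholder assumptions):

```python
import requests

access_token = "<token with Policy.ReadWrite.AuthenticationMethod>"

# Enable system-preferred MFA for all users via the beta Authentication methods policy.
response = requests.patch(
    "https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy",
    headers={
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    },
    json={
        "systemCredentialPreferences": {
            "state": "enabled",
            "includeTargets": [{"id": "all_users", "targetType": "group"}],
            "excludeTargets": [],
        }
    },
)
response.raise_for_status()
```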
-## Known issues
+## Known issue
-- [FIDO2 security key isn't supported on mobile devices](../develop/support-fido2-authentication.md#mobile). This issue might surface when system-preferred MFA is enabled. Until a fix is available, we recommend not using FIDO2 security keys on mobile devices.
+[FIDO2 security keys](../develop/support-fido2-authentication.md#mobile) on mobile devices and [registration for certificate-based authentication (CBA)](concept-certificate-based-authentication.md) aren't supported due to an issue that might surface when system-preferred MFA is enabled. Until a fix is available, we recommend not using FIDO2 security keys on mobile devices or registering for CBA. To disable system-preferred MFA for these users, you can either add them to an excluded group or remove them from an included group.
## Common questions
active-directory How To Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md
If you aren't using SSPR and aren't yet using the Authentication methods policy,
### Review the legacy MFA policy
-Start by documenting which methods are available in the legacy MFA policy. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator). Go to **Azure Active Directory** > **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings** to view the settings. These settings are tenant-wide, so there's no need for user or group information.
+Start by documenting which methods are available in the legacy MFA policy. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator). Go to **Azure Active Directory** > **Users** > **All users** > **Per-user MFA** > **service settings** to view the settings. These settings are tenant-wide, so there's no need for user or group information.
:::image type="content" border="false" source="media/how-to-authentication-methods-manage/legacy-mfa-policy.png" alt-text="Screenshot the shows the legacy Azure AD MFA policy." lightbox="media/how-to-authentication-methods-manage/legacy-mfa-policy.png":::
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
To enable the certificate-based authentication and configure user bindings in th
1. To delete a CA certificate, select the certificate and click **Delete**.
1. Click **Columns** to add or delete columns.
-### Configure certification authorities using PowerShell
+>[!NOTE]
+>Uploading a new CA fails when any of the existing CAs are expired. The tenant administrator should delete the expired CAs and then upload the new CA.
+
+### Configure certification authorities (CA) using PowerShell
Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can only be HTTP URLs. Online Certificate Status Protocol (OCSP) or Lightweight Directory Access Protocol (LDAP) URLs aren't supported.
Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can
[!INCLUDE [Get-AzureAD](../../../includes/active-directory-authentication-get-trusted-azuread.md)]
### Add
+>[!NOTE]
+>Uploading a new CA fails when any of the existing CAs are expired. The tenant administrator should delete the expired CAs and then upload the new CA.
+ [!INCLUDE [New-AzureAD](../../../includes/active-directory-authentication-new-trusted-azuread.md)]
**AuthorityType**
active-directory Product Privileged Role Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-privileged-role-insights.md
+
+ Title: View privileged role assignments in Azure AD Insights
+description: How to view current privileged role assignments in the Azure AD Insights tab.
+++++++ Last updated : 03/31/2023+++
+# View privileged role assignments in your organization (Preview)
+
+The **Azure AD Insights** tab shows you who is assigned to privileged roles in your organization. You can review a list of identities assigned to a privileged role and learn more about each identity.
+
+> [!NOTE]
+> Microsoft recommends that you keep two break glass accounts permanently assigned to the global administrator role. Make sure that these accounts don't require the same multi-factor authentication mechanism to sign in as other administrative accounts. This is described further in [Manage emergency access accounts in Microsoft Entra](../roles/security-emergency-access.md).
+
+> [!NOTE]
+> Keep role assignments permanent if a user has an additional Microsoft account (for example, an account they use to sign in to Microsoft services like Skype, or Outlook.com). If you require multi-factor authentication to activate a role assignment, a user with an additional Microsoft account will be locked out.
+
+## View information in the Azure AD Insights tab
+
+1. From the Permissions Management home page, select the **Azure AD Insights** tab.
+2. Select **Review global administrators** to review the list of Global administrator role assignments.
+3. Select **Review highly privileged roles** or **Review service principals** to review information on principal role assignments for the following roles: *Application administrator*, *Cloud Application administrator*, *Exchange administrator*, *Intune administrator*, *Privileged role administrator*, *SharePoint administrator*, *Security administrator*, *User administrator*.
++
+## Next steps
+
+- For information about managing roles, policies and permissions requests in your organization, see [View roles/policies and requests for permission in the Remediation dashboard](ui-remediation.md).
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
Previously updated : 08/16/2022 Last updated : 03/31/2023
Organizations should avoid the following configurations:
The first way is to review the error message that appears. For problems signing in when using a web browser, the error page itself has detailed information. This information alone may describe what the problem is and that may suggest a solution.
-![Sign in error - compliant device required](./media/troubleshoot-conditional-access/image1.png)
+![Screenshot showing a sign in error where a compliant device is required.](./media/troubleshoot-conditional-access/image1.png)
In the above error, the message states that the application can only be accessed from devices or client applications that meet the company's mobile device management policy. In this case, the application and device don't meet that policy.
In the above error, the message states that the application can only be accessed
The second method to get detailed information about the sign-in interruption is to review the Azure AD sign-in events to see which Conditional Access policy or policies were applied and why.
-More information can be found about the problem by clicking **More Details** in the initial error page. Clicking **More Details** will reveal troubleshooting information that is helpful when searching the Azure AD sign-in events for the specific failure event the user saw or when opening a support incident with Microsoft.
+More information can be found about the problem by clicking **More Details** in the initial error page. Clicking **More Details** reveals troubleshooting information that is helpful when searching the Azure AD sign-in events for the specific failure event the user saw or when opening a support incident with Microsoft.
-![More details from a Conditional Access interrupted web browser sign-in.](./media/troubleshoot-conditional-access/image2.png)
+![Screenshot showing more details from a Conditional Access interrupted web browser sign-in.](./media/troubleshoot-conditional-access/image2.png)
To find out which Conditional Access policy or policies applied and why, do the following.
To find out which Conditional Access policy or policies applied and why do the f
1. **Username** to see information related to specific users.
1. **Date** scoped to the time frame in question.
- ![Selecting the Conditional access filter in the sign-ins log](./media/troubleshoot-conditional-access/image3.png)
+ ![Screenshot showing selecting the Conditional access filter in the sign-ins log.](./media/troubleshoot-conditional-access/image3.png)
-1. Once the sign-in event that corresponds to the user's sign-in failure has been found select the **Conditional Access** tab. The Conditional Access tab will show the specific policy or policies that resulted in the sign-in interruption.
+1. Once the sign-in event that corresponds to the user's sign-in failure has been found, select the **Conditional Access** tab. The Conditional Access tab shows the specific policy or policies that resulted in the sign-in interruption.
1. Information in the **Troubleshooting and support** tab may provide a clear reason as to why a sign-in failed such as a device that didn't meet compliance requirements.
- 1. To investigate further, drill down into the configuration of the policies by clicking on the **Policy Name**. Clicking the **Policy Name** will show the policy configuration user interface for the selected policy for review and editing.
+ 1. To investigate further, drill down into the configuration of the policies by clicking on the **Policy Name**. Clicking the **Policy Name** shows the policy configuration user interface for the selected policy for review and editing.
1. The **client user** and **device details** that were used for the Conditional Access policy assessment are also available in the **Basic Info**, **Location**, **Device Info**, **Authentication Details**, and **Additional Details** tabs of the sign-in event.
### Policy not working as intended
Selecting the ellipsis on the right side of the policy in a sign-in event brings up policy details. This option gives administrators additional information about why a policy was successfully applied or not.
- ![Sign in event Conditional Access tab](./media/troubleshoot-conditional-access/image5.png)
-
- ![Policy details (preview)](./media/troubleshoot-conditional-access/policy-details.png)
The left side provides details collected at sign-in and the right side provides details of whether those details satisfy the requirements of the applied Conditional Access policies. Conditional Access policies only apply when all conditions are satisfied or not configured.
If the information in the event isn't enough to understand the sign-in results, or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md).
You can also [use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md).
-If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the specific event you're concerned about.
+If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information allows Microsoft support to find the specific event you're concerned about.
### Common Conditional Access error codes
More information about error codes can be found in the article [Azure AD Authent
## Service dependencies
-In some specific scenarios, users are blocked because there are cloud apps with dependencies on resources that are blocked by Conditional Access policy.
+In some specific scenarios, users are blocked because there are cloud apps with dependencies on resources blocked by Conditional Access policy.
To determine the service dependency, check the sign-ins log for the application and resource called by the sign-in. In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. To target this scenario appropriately, all the applications and resources should be similarly combined in Conditional Access policy.
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Previously updated : 12/28/2022 Last updated : 03/29/2023 --+ # Microsoft identity platform access tokens
-Access tokens enable clients to securely call protected web APIs. Access tokens are used by web APIs to perform authentication and authorization.
+Access tokens enable clients to securely call protected web APIs. Web APIs use access tokens to perform authentication and authorization.
-Per the OAuth specification, access tokens are opaque strings without a set format. Some identity providers (IDPs) use GUIDs and others use encrypted blobs. The format of the access token can depend on how the API that accepts the token is configured.
+Per the OAuth specification, access tokens are opaque strings without a set format. Some identity providers (IDPs) use GUIDs and others use encrypted blobs. The format of the access token can depend on the configuration of the API that accepts it.
-Custom APIs registered by developers on the Microsoft identity platform can choose from two different formats of JSON Web Tokens (JWTs) called `v1.0` and `v2.0`. Microsoft-developed APIs like Microsoft Graph or APIs in Azure have other proprietary token formats. These proprietary formats might be encrypted tokens, JWTs, or special JWT-like tokens that won't validate.
+Custom APIs registered by developers on the Microsoft identity platform can choose from two different formats of JSON Web Tokens (JWTs) called `v1.0` and `v2.0`. Microsoft-developed APIs like Microsoft Graph or APIs in Azure have other proprietary token formats. These proprietary formats might be encrypted tokens, JWTs, or special JWT-like tokens that can't be validated.
-Clients must treat access tokens as opaque strings because the contents of the token are intended for the API only. For validation and debugging purposes *only*, developers can decode JWTs using a site like [jwt.ms](https://jwt.ms). Tokens that are received for a Microsoft API might not always be a JWT and can't always be decoded.
+The contents of the token are intended only for the API, which means that access tokens must be treated as opaque strings. For validation and debugging purposes *only*, developers can decode JWTs using a site like [jwt.ms](https://jwt.ms). Tokens that a Microsoft API receives might not always be a JWT that can be decoded.
-For details on what's inside the access token, clients should use the token response data that's returned with the access token to the client. When the client requests an access token, the Microsoft identity platform also returns some metadata about the access token for the consumption of the application. This information includes the expiry time of the access token and the scopes for which it's valid. This data allows the application to do intelligent caching of access tokens without having to parse the access token itself.
+Clients should use the token response data that's returned with the access token for details on what's inside it. When the client requests an access token, the Microsoft identity platform also returns some metadata about the access token for the consumption of the application. This information includes the expiry time of the access token and the scopes for which it's valid. This data allows the application to do intelligent caching of access tokens without having to parse the access token itself.
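As a minimal sketch of that caching pattern (the names here are illustrative, assuming only the standard OAuth response fields `access_token`, `expires_in`, and `scope`):

```python
import time

def cache_token(token_response: dict, cache: dict) -> None:
    """Cache an access token per scope using only the response metadata."""
    # Use the metadata returned alongside the token - never parse the token itself.
    expires_at = time.time() + token_response["expires_in"]
    for scope in token_response["scope"].split():
        cache[scope] = {"token": token_response["access_token"], "expires_at": expires_at}

def get_cached_token(cache: dict, scope: str):
    """Return a cached token for the scope if it isn't close to expiry."""
    entry = cache.get(scope)
    if entry and entry["expires_at"] - 300 > time.time():  # refresh 5 minutes early
        return entry["token"]
    return None  # caller should request a fresh token
```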
See the following sections to learn how an API can validate and use the claims inside an access token.
There are two versions of access tokens available in the Microsoft identity plat
Web APIs have one of the following versions selected as a default during registration:
-- v1.0 for Azure AD-only applications. The following example shows a v1.0 token (this token example won't validate because the keys have rotated prior to publication and personal information has been removed):
+- v1.0 for Azure AD-only applications. The following example shows a v1.0 token (the keys are changed and personal information is removed, which prevents token validation):
```
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSIsImtpZCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSJ9.eyJhdWQiOiJlZjFkYTlkNC1mZjc3LTRjM2UtYTAwNS04NDBjM2Y4MzA3NDUiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC9mYTE1ZDY5Mi1lOWM3LTQ0NjAtYTc0My0yOWYyOTUyMjIyOS8iLCJpYXQiOjE1MzcyMzMxMDYsIm5iZiI6MTUzNzIzMzEwNiwiZXhwIjoxNTM3MjM3MDA2LCJhY3IiOiIxIiwiYWlvIjoiQVhRQWkvOElBQUFBRm0rRS9RVEcrZ0ZuVnhMaldkdzhLKzYxQUdyU091TU1GNmViYU1qN1hPM0libUQzZkdtck95RCtOdlp5R24yVmFUL2tES1h3NE1JaHJnR1ZxNkJuOHdMWG9UMUxrSVorRnpRVmtKUFBMUU9WNEtjWHFTbENWUERTL0RpQ0RnRTIyMlRJbU12V05hRU1hVU9Uc0lHdlRRPT0iLCJhbXIiOlsid2lhIl0sImFwcGlkIjoiNzVkYmU3N2YtMTBhMy00ZTU5LTg1ZmQtOGMxMjc1NDRmMTdjIiwiYXBwaWRhY3IiOiIwIiwiZW1haWwiOiJBYmVMaUBtaWNyb3NvZnQuY29tIiwiZmFtaWx5X25hbWUiOiJMaW5jb2xuIiwiZ2l2ZW5fbmFtZSI6IkFiZSAoTVNGVCkiLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC83MmY5ODhiZi04NmYxLTQxYWYtOTFhYi0yZDdjZDAxMjIyNDcvIiwiaXBhZGRyIjoiMjIyLjIyMi4yMjIuMjIiLCJuYW1lIjoiYWJlbGkiLCJvaWQiOiIwMjIyM2I2Yi1hYTFkLTQyZDQtOWVjMC0xYjJiYjkxOTQ0MzgiLCJyaCI6IkkiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJzdWIiOiJsM19yb0lTUVUyMjJiVUxTOXlpMmswWHBxcE9pTXo1SDNaQUNvMUdlWEEiLCJ0aWQiOiJmYTE1ZDY5Mi1lOWM3LTQ0NjAtYTc0My0yOWYyOTU2ZmQ0MjkiLCJ1bmlxdWVfbmFtZSI6ImFiZWxpQG1pY3Jvc29mdC5jb20iLCJ1dGkiOiJGVnNHeFlYSTMwLVR1aWt1dVVvRkFBIiwidmVyIjoiMS4wIn0.D3H6pMUtQnoJAGq6AHd
```
-- v2.0 for applications that support consumer accounts. The following example shows a v2.0 token (this token example won't validate because the keys have rotated prior to publication and personal information has been removed):
+- v2.0 for applications that support consumer accounts. The following example shows a v2.0 token (the keys are changed and personal information is removed, which prevents token validation):
```
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSJ9.eyJhdWQiOiI2ZTc0MTcyYi1iZTU2LTQ4NDMtOWZmNC1lNjZhMzliYjEyZTMiLCJpc3MiOiJodHRwczovL2xvZ2luLm1pY3Jvc29mdG9ubGluZS5jb20vNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3L3YyLjAiLCJpYXQiOjE1MzcyMzEwNDgsIm5iZiI6MTUzNzIzMTA0OCwiZXhwIjoxNTM3MjM0OTQ4LCJhaW8iOiJBWFFBaS84SUFBQUF0QWFaTG8zQ2hNaWY2S09udHRSQjdlQnE0L0RjY1F6amNKR3hQWXkvQzNqRGFOR3hYZDZ3TklJVkdSZ2hOUm53SjFsT2NBbk5aY2p2a295ckZ4Q3R0djMzMTQwUmlvT0ZKNGJDQ0dWdW9DYWcxdU9UVDIyMjIyZ0h3TFBZUS91Zjc5UVgrMEtJaWpkcm1wNjlSY3R6bVE9PSIsImF6cCI6IjZlNzQxNzJiLWJlNTYtNDg0My05ZmY0LWU2NmEzOWJiMTJlMyIsImF6cGFjciI6IjAiLCJuYW1lIjoiQWJlIExpbmNvbG4iLCJvaWQiOiI2OTAyMjJiZS1mZjFhLTRkNTYtYWJkMS03ZTRmN2QzOGU0NzQiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhYmVsaUBtaWNyb3NvZnQuY29tIiwicmgiOiJJIiwic2NwIjoiYWNjZXNzX2FzX3VzZXIiLCJzdWIiOiJIS1pwZmFIeVdhZGVPb3VZbGl0anJJLUtmZlRtMjIyWDVyclYzeERxZktRIiwidGlkIjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3IiwidXRpIjoiZnFpQnFYTFBqMGVRYTgyUy1JWUZBQSIsInZlciI6IjIuMCJ9.pj4N-w_3Us9DrBLfpCt
```
-The version can be set for applications by providing the appropriate value to the `accessTokenAcceptedVersion` setting in the [app manifest](reference-app-manifest.md#manifest-reference). The values of `null` and `1` result in v1.0 tokens, and the value of `2` results in v2.0 tokens.
+Set the version for applications by providing the appropriate value to the `accessTokenAcceptedVersion` setting in the [app manifest](reference-app-manifest.md#manifest-reference). The values of `null` and `1` result in v1.0 tokens, and the value of `2` results in v2.0 tokens.
## Token ownership
-Two parties are involved in an access token request: the client, who requests the token, and the resource (Web API) that accepts the token. The `aud` claim in a token indicates the resource that the token is intended for (its *audience*). Clients use the token but shouldn't understand or attempt to parse it. Resources accept the token.
+An access token request involves two parties: the client, who requests the token, and the resource (Web API) that accepts the token. The resource that the token is intended for (its *audience*) is defined in the `aud` claim in a token. Clients use the token but shouldn't understand or attempt to parse it. Resources accept the token.
-The Microsoft identity platform supports issuing any token version from any version endpoint - they aren't related. When `accessTokenAcceptedVersion` is set to `2`, a client calling the v1.0 endpoint to get a token for that resource receives a v2.0 access token.
+The Microsoft identity platform supports issuing any token version from any version endpoint. For example, when the value of `accessTokenAcceptedVersion` is `2`, a client calling the v1.0 endpoint to get a token for that resource receives a v2.0 access token.
Resources always own their tokens using the `aud` claim and are the only applications that can change their token details. ## Claims in access tokens
-JWTs are split into three pieces:
+JWTs contain the following pieces:
-- **Header** - Provides information about how to validate the token including information about the type of token and how it was signed.
+- **Header** - Provides information about how to validate the token including information about the type of token and its signing method.
- **Payload** - Contains all of the important data about the user or application that's attempting to call the service.
- **Signature** - Is the raw material used to validate the token.
Each piece is separated by a period (`.`) and separately Base64 encoded.
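For debugging *only*, the three pieces can be inspected locally; a minimal sketch follows (no signature verification happens here, so never use this to authorize anything):

```python
import base64
import json

def decode_jwt_part(part: str) -> dict:
    """Base64url-decode one JWT segment, restoring the stripped padding."""
    padded = part + "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = "<a JWT access token>"  # placeholder
header_b64, payload_b64, signature_b64 = token.split(".")
print(decode_jwt_part(header_b64))   # for example {"typ": "JWT", "alg": "RS256", "kid": "..."}
print(decode_jwt_part(payload_b64))  # the claims described in the tables below
# signature_b64 is raw signing material, not JSON - verify it with a JWT library.
```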
Claims are present only if a value exists to fill it. An application shouldn't take a dependency on a claim being present. Examples include `pwd_exp` (not every tenant requires passwords to expire) and `family_name` ([client credential](v2-oauth2-client-creds-grant-flow.md) flows are on behalf of applications that don't have names). Claims used for access token validation are always present.
-Some claims are used to help the Microsoft identity platform secure tokens for reuse. These claims are marked as not being for public consumption in the description as `Opaque`. These claims may or may not appear in a token, and new ones may be added without notice.
+The Microsoft identity platform uses some claims to help secure tokens for reuse. These claims are marked as `Opaque` in the description because they aren't for public consumption. These claims may or may not appear in a token, and new ones may be added without notice.
### Header claims
| Claim | Format | Description |
|-|--|-|
| `typ` | String - always `JWT` | Indicates that the token is a JWT.|
-| `alg` | String | Indicates the algorithm that was used to sign the token, for example, `RS256`. |
-| `kid` | String | Specifies the thumbprint for the public key that can be used to validate this signature of the token. Emitted in both v1.0 and v2.0 access tokens. |
+| `alg` | String | Indicates the algorithm used to sign the token, for example, `RS256`. |
+| `kid` | String | Specifies the thumbprint for the public key used for validating the signature of the token. Emitted in both v1.0 and v2.0 access tokens. |
| `x5t` | String | Functions the same (in use and value) as `kid`. `x5t` is a legacy claim emitted only in v1.0 access tokens for compatibility purposes. |
### Payload claims
-| Claim | Format | Description | Authorization considerations |
-|-|--|-||
-| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. | This value must be validated, reject the token if the value doesn't match the intended audience. |
-| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. | The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
-|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. If the claim isn't present, the value of `iss` can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. | |
-| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. | |
-| `nbf` | int, a Unix timestamp | Specifies the time before which the JWT must not be accepted for processing. | |
-| `exp` | int, a Unix timestamp | Specifies the expiration time on or after which the JWT must not be accepted for processing. A resource may reject the token before this time as well. The rejection can occur when a change in authentication is required or a token revocation has been detected. | |
-| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. | |
-| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. | |
-| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies how the subject of the token was authenticated. | |
-| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `appid` may be used in authorization decisions. |
-| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `azp` may be used in authorization decisions. |
-| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates how the client was authenticated. For a public client, the value is `0`. If client ID and client secret are used, the value is `1`. If a client certificate was used for authentication, the value is `2`. | |
-| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates how the client was authenticated. For a public client, the value is `0`. If client ID and client secret are used, the value is `1`. If a client certificate was used for authentication, the value is `2`. | |
-| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. The value is mutable and might change over time. The value can be used for username hints, however, and in human-readable UI as a username. The `profile` scope is required in order to receive this claim. | Since this value is mutable, it must not be used to make authorization decisions. |
-| `name` | String | Provides a human-readable value that identifies the subject of the token. The value isn't guaranteed to be unique, it's mutable, and is only used for display purposes. The `profile` scope is required in order to receive this claim. | This value must not be used to make authorization decisions. |
-| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. Only included for user tokens. | The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. |
-| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. For application tokens, this set of permissions is used during the [client credential flow](v2-oauth2-client-creds-grant-flow.md) in place of user scopes. For user tokens, this set of values is populated with the roles the user was assigned to on the target application. | These values can be used for managing access, such as enforcing authorization to access a resource. |
-| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). This claim is configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). Setting it to `All` or `DirectoryRole` is required. May not be present in tokens obtained through the implicit flow due to token length concerns. | These values can be used for managing access, such as enforcing authorization to access a resource. |
-| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. The groups included in the groups claim are configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. | These values can be used for managing access, such as enforcing authorization to access a resource. |
-| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. | |
-| `groups:src1` | JSON object | For token requests that aren't length limited (see `hasgroups`) but still too large for the token, a link to the full groups list for the user is included. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources`: `"src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` | |
-| `sub` | String | The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. The subject is a pairwise identifier that is unique to a particular application ID. If a single user signs into two different applications using two different client IDs, those applications receive two different values for the subject claim. Two different values may or may not be desired depending on architecture and privacy requirements. See also the `oid` claim (which does remain the same across applications within a tenant). | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
-| `oid` | String, a GUID | The immutable identifier for the requestor, which is the user or service principal whose identity has been verified. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, the `profile` scope is required in order to receive this claim for users. If a single user exists in multiple tenants, the user contains a different object ID in each tenant. The accounts are considered different, even though the user logs into each account with the same credentials. | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
-| `tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. | This value should be considered in combination with other claims in authorization decisions. |
-| `unique_name` | String, only present in v1.0 tokens | Provides a human readable value that identifies the subject of the token. | This value isn't guaranteed to be unique within a tenant and should be used only for display purposes. |
-| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. | |
-| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. | |
-| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. | |
+| Claim | Format | Description |
+|-|--|-|
+| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. The API must validate this value and reject the token if the value doesn't match. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. |
+| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant of the authenticated user. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
+|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. Use the value of `iss` if the claim isn't present. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. |
+| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. |
+| `nbf` | int, a Unix timestamp | Specifies the time after which the JWT can be processed. |
+| `exp` | int, a Unix timestamp | Specifies the expiration time on or after which the JWT must not be accepted for processing. A resource may reject the token before this time as well. The rejection can occur for a required change in authentication or when a token is revoked. |
+| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. |
+| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. |
+| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies the authentication method of the subject of the token. |
+| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. |
+| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. |
+| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates the authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. |
+| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates the authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. |
+| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. The value is mutable and might change over time. Since the value is mutable, don't use it to make authorization decisions. Use the value for username hints and in human-readable UI as a username. To receive this claim, use the `profile` scope. |
+| `name` | String | Provides a human-readable value that identifies the subject of the token. The value can vary, it's mutable, and is for display purposes only. To receive this claim, use the `profile` scope. |
+| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. Only included for user tokens. |
+| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. The [client credential flow](v2-oauth2-client-creds-grant-flow.md) uses this set of permissions in place of user scopes for application tokens. For user tokens, this set of values contains the assigned roles of the user on the target application. |
+| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures this claim on a per-application basis. Set the claim to `All` or `DirectoryRole`. May not be present in tokens obtained through the implicit flow due to token length concerns. |
+| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. Safely use these unique values for managing access, such as enforcing authorization to access a resource. The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures the groups claim on a per-application basis. A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. |
+| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. |
+| `groups:src1` | JSON object | Includes a link to the full groups list for the user when token requests are too large for the token. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources`: `"src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` |
+| `sub` | String | The principal associated with the token. For example, the user of an application. This value is immutable; don't reassign or reuse it. Use it to perform authorization checks safely, such as when using the token to access a resource, and as a key in database tables. Because the subject is always present in the tokens that Azure AD issues, use this value in a general-purpose authorization system. The subject is a pairwise identifier that's unique to a particular application ID. If a single user signs into two different applications using two different client IDs, those applications receive two different values for the subject claim. Whether that's desirable depends on architecture and privacy requirements. See also the `oid` claim, which does remain the same across applications within a tenant. |
+| `oid` | String, a GUID | The immutable identifier for the requestor, which is the verified identity of the user or service principal. Use this value to also perform authorization checks safely and as a key in database tables. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, the `profile` scope is required to receive this claim for users. If a single user exists in multiple tenants, the user contains a different object ID in each tenant. Even though the user logs into each account with the same credentials, the accounts are different. |
+|`tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. |
+| `unique_name` | String, only present in v1.0 tokens | Provides a human readable value that identifies the subject of the token. This value isn't guaranteed to be unique within a tenant, so use it only for display purposes. |
+| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. |
+| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. |
+| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. |
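Pulling the validation-relevant claims together, here's a minimal sketch of API-side validation using the third-party PyJWT library; the tenant ID and client ID are placeholders, and a v2.0 token is assumed:

```python
import jwt  # PyJWT: pip install pyjwt[crypto]

tenant_id = "<your-tenant-guid>"        # placeholder
api_client_id = "<your-api-client-id>"  # placeholder

# Fetch the signing key matching the token's `kid` header from the tenant's JWKS endpoint.
jwks_client = jwt.PyJWKClient(
    f"https://login.microsoftonline.com/{tenant_id}/discovery/v2.0/keys")

def validate(token: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # audience and issuer enforce the `aud` and `iss` claims; `exp` and `nbf`
    # are checked by default. Returns the payload claims on success.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=api_client_id,
        issuer=f"https://login.microsoftonline.com/{tenant_id}/v2.0",
    )
```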
#### Groups overage claim
Use the `BulkCreateGroups.ps1` provided in the [App Creation Scripts](https://gi
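When the overage claim appears in place of `groups`, the API has to fetch memberships from Microsoft Graph itself; a hedged sketch follows (it assumes the claims dictionary from an already-validated token and a separate Graph token with permission to read memberships):

```python
import requests

def get_group_ids(claims: dict, graph_token: str) -> list:
    """Return group object IDs, following the groups overage claim when present."""
    if isinstance(claims.get("groups"), list):
        return claims["groups"]  # no overage - the token carries the full list
    if "groups" in claims.get("_claim_names", {}):
        # Overage: the token only carries a pointer, so query Microsoft Graph.
        response = requests.post(
            f"https://graph.microsoft.com/v1.0/users/{claims['oid']}/getMemberObjects",
            headers={
                "Authorization": f"Bearer {graph_token}",
                "Content-Type": "application/json",
            },
            json={"securityEnabledOnly": True},
        )
        response.raise_for_status()
        return response.json()["value"]
    return []
```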
#### v1.0 basic claims
-The following claims are included in v1.0 tokens if applicable, but aren't included in v2.0 tokens by default. To use these claims for v2.0, the application requests them using [optional claims](active-directory-optional-claims.md).
+V1.0 tokens include the following claims if applicable, but v2.0 tokens don't include them by default. To use these claims for v2.0, the application requests them using [optional claims](active-directory-optional-claims.md).
| Claim | Format | Description |
|-|--|-|
| `ipaddr`| String | The IP address the user authenticated from. |
| `onprem_sid`| String, in [SID format](/windows/desktop/SecAuthZ/sid-components) | In cases where the user has an on-premises authentication, this claim provides their SID. Use this claim for authorization in legacy applications. |
| `pwd_exp`| int, a Unix timestamp | Indicates when the user's password expires. |
-| `pwd_url`| String | A URL where users can be sent to reset their password. |
-| `in_corp`| boolean | Signals if the client is signing in from the corporate network. If they aren't, the claim isn't included. |
+| `pwd_url`| String | A URL where users can reset their password. |
+| `in_corp`| boolean | Signals if the client is signing in from the corporate network. |
| `nickname`| String | Another name for the user, separate from first or last name.|
| `family_name` | String | Provides the last name, surname, or family name of the user as defined on the user object. |
| `given_name` | String | Provides the first or given name of the user, as set on the user object. |
-| `upn` | String | The username of the user. May be a phone number, email address, or unformatted string. Should only be used for display purposes and providing username hints in reauthentication scenarios. |
+| `upn` | String | The username of the user. May be a phone number, email address, or unformatted string. Use it only for display purposes and for providing username hints in reauthentication scenarios. |
#### amr claim
Identities can authenticate in different ways, which may be relevant to the appl
| Value | Description |
|--|-|
| `pwd` | Password authentication, either a user's Microsoft password or a client secret of an application. |
-| `rsa` | Authentication was based on the proof of an RSA key, for example with the [Microsoft Authenticator app](https://aka.ms/AA2kvvu). This value also indicates whether authentication was done by a self-signed JWT with a service owned X509 certificate. |
+| `rsa` | Authentication was based on the proof of an RSA key, for example with the [Microsoft Authenticator app](https://aka.ms/AA2kvvu). This value also indicates that authentication was done with a self-signed JWT with a service-owned X509 certificate. |
| `otp` | One-time passcode using an email or a text message. |
-| `fed` | A federated authentication assertion (such as JWT or SAML) was used. |
+| `fed` | Indicates the use of a federated authentication assertion (such as JWT or SAML). |
| `wia` | Windows Integrated Authentication |
-| `mfa` | [Multi-factor authentication](../authentication/concept-mfa-howitworks.md) was used. When this claim is present, the other authentication methods are included. |
+| `mfa` | Indicates the use of [multi-factor authentication](../authentication/concept-mfa-howitworks.md). When this claim is present, the other authentication methods are also included. |
| `ngcmfa` | Equivalent to `mfa`, used for provisioning of certain advanced credential types. |
| `wiaormfa`| The user used Windows or an MFA credential to authenticate. |
-| `none` | No authentication was done. |
+| `none` | Indicates no completed authentication. |
## Access token lifetime
-The default lifetime of an access token is variable. When issued, the default lifetime of an access token is assigned a random value ranging between 60-90 minutes (75 minutes on average). The variation improves service resilience by spreading access token demand over a time, which prevents hourly spikes in traffic to Azure AD.
+The default lifetime of an access token is variable. When issued, the Microsoft identity platform assigns a random value between 60 and 90 minutes (75 minutes on average) as the default lifetime of an access token. The variation improves service resilience by spreading access token demand over time, which prevents hourly spikes in traffic to Azure AD.
-Tenants that donΓÇÖt use Conditional Access have a default access token lifetime of two hours for clients such as Microsoft Teams and Microsoft 365.
+Tenants that don't use Conditional Access have a default access token lifetime of two hours for clients such as Microsoft Teams and Microsoft 365.
-The lifetime of an access token can be adjusted to control how often the client application expires the application session, and how often it requires the user to reauthenticate (either silently or interactively). To override the default access token lifetime variation, set a static default access token lifetime by using [Configurable token lifetime (CTL)](active-directory-configurable-token-lifetimes.md).
+Adjust the lifetime of an access token to control how often the client application expires the application session, and how often it requires the user to reauthenticate (either silently or interactively). To override the default access token lifetime variation, use [Configurable token lifetime (CTL)](active-directory-configurable-token-lifetimes.md).
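For instance, a sketch of a CTL policy that pins the access token lifetime to a fixed hour, using cmdlets from the AzureADPreview module (the display name is illustrative):

```powershell
# Create a token lifetime policy with a fixed one-hour access token lifetime.
$policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"01:00:00"}}') -DisplayName "FixedOneHourAccessTokens" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"
```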
-Default token lifetime variation is applied to organizations that have Continuous Access Evaluation (CAE) enabled. Default token lifetime variation is applied even if the organizations use CTL policies. The default token lifetime for long lived token lifetime ranges from 20 to 28 hours. When the access token expires, the client must use the refresh token to silently acquire a new refresh token and access token.
+Default token lifetime variation also applies to organizations that have Continuous Access Evaluation (CAE) enabled, even if those organizations use CTL policies. The default lifetime for long-lived tokens ranges from 20 to 28 hours. When the access token expires, the client must use the refresh token to silently acquire a new refresh token and access token.
Organizations that use [Conditional Access sign-in frequency (SIF)](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency) to enforce how frequently sign-ins occur can't override default access token lifetime variation. When organizations use SIF, the time between credential prompts for a client is the token lifetime that ranges from 60 - 90 minutes plus the sign-in frequency interval.
-Here's an example of how default token lifetime variation works with sign-in frequency. Let's say an organization sets sign-in frequency to occur every hour. The actual sign-in interval occurs anywhere between 1 hour to 2.5 hours because the token is issued with lifetime ranging from 60-90 minutes (due to token lifetime variation).
+Here's an example of how default token lifetime variation works with sign-in frequency. Let's say an organization sets sign-in frequency to occur every hour. Because the token is issued with a lifetime between 60 and 90 minutes due to token lifetime variation, the actual sign-in interval occurs anywhere between 1 hour and 2.5 hours.
-If a user with a token with a one hour lifetime performs an interactive sign-in at 59 minutes (just before the sign-in frequency being exceeded), there's no credential prompt because the sign-in is below the SIF threshold. If a new token is issued with a lifetime of 90 minutes, the user wouldn't see a credential prompt for another hour and a half. When a silent renewal attempted of the 90-minute token lifetime is made, Azure AD requires a credential prompt because the total session length has exceeded the sign-in frequency setting of 1 hour. In this example, the time difference between credential prompts due to the SIF interval and token lifetime variation would be 2.5 hours.
+If a user whose token has a one-hour lifetime performs an interactive sign-in at 59 minutes, there's no credential prompt because the sign-in is below the SIF threshold. If a new token has a lifetime of 90 minutes, the user wouldn't see a credential prompt for another hour and a half. During a silent renewal attempt, Azure AD requires a credential prompt because the total session length has exceeded the sign-in frequency setting of 1 hour. In this example, the time difference between credential prompts due to the SIF interval and token lifetime variation would be 2.5 hours.
## Validate tokens
Not all applications should validate tokens. Only in specific scenarios should a
- Web APIs must validate access tokens sent to them by a client. They must only accept tokens containing their `aud` claim.
- Confidential web applications like ASP.NET Core must validate ID tokens sent to them by using the user's browser in the hybrid flow, before allowing access to a user's data or establishing a session.
-If none of the above scenarios apply, the application won't benefit from validating the token, and may present a security and reliability risk if decisions are made based on the validity of the token. Public clients like native or single-page applications don't benefit from validating tokens because the application communicates directly with the IDP where SSL protection ensures the tokens are valid.
+If none of the above scenarios apply, there's no need to validate the token, and doing so may present a security and reliability risk when decisions are based on the validity of the token. Public clients like native or single-page applications don't benefit from validating tokens because the application communicates directly with the IDP, where SSL protection ensures the tokens are valid.
-APIs and web applications must only validate tokens that have an `aud` claim that matches the application. Other resources may have custom token validation rules. For example, tokens for Microsoft Graph won't validate according to these rules due to their proprietary format. Validating and accepting tokens meant for another resource is an example of the [confused deputy](https://cwe.mitre.org/data/definitions/441.html) problem.
+APIs and web applications must only validate tokens that have an `aud` claim that matches the application. Other resources may have custom token validation rules. For example, you can't validate tokens for Microsoft Graph according to these rules due to their proprietary format. Validating and accepting tokens meant for another resource is an example of the [confused deputy](https://cwe.mitre.org/data/definitions/441.html) problem.
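To illustrate the `aud` check, here's a minimal PowerShell sketch that decodes a token payload and rejects a mismatched audience. It's a demonstration only: it builds an unsigned sample token, and real validation must check the signature first (see the next section). All identifiers are hypothetical.

```powershell
# Build an illustrative, unsigned token whose payload carries an 'aud' claim.
$payloadJson = '{"aud":"api://11112222-bbbb-3333-cccc-4444dddd5555"}'
$payloadB64  = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($payloadJson)).TrimEnd('=').Replace('+','-').Replace('/','_')
$jwt         = "header.$payloadB64.signature"

# Decode the payload segment (base64url) and parse the claims.
$segment = $jwt.Split('.')[1].Replace('-','+').Replace('_','/')
switch ($segment.Length % 4) { 2 { $segment += '==' } 3 { $segment += '=' } }
$claims = [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($segment)) | ConvertFrom-Json

# Accept the token only if it was issued for this application.
if ($claims.aud -ne 'api://11112222-bbbb-3333-cccc-4444dddd5555') {
    throw 'Token was issued for a different resource; rejecting it avoids the confused deputy problem.'
}
```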
If the application needs to validate an ID token or an access token, it should first validate the signature of the token and the issuer against the values in the OpenID discovery document. For example, the tenant-independent version of the document is located at [https://login.microsoftonline.com/common/.well-known/openid-configuration](https://login.microsoftonline.com/common/.well-known/openid-configuration).
The Azure AD middleware has built-in capabilities for validating access tokens,
### Validating the signature
-A JWT contains three segments, which are separated by the `.` character. The first segment is known as the **header**, the second as the **body**, and the third as the **signature**. The signature segment can be used to validate the authenticity of the token so that it can be trusted by the application.
+A JWT contains three segments separated by the `.` character. The first segment is the **header**, the second is the **body**, and the third is the **signature**. Use the signature segment to evaluate the authenticity of the token.
-Tokens issued by Azure AD are signed using industry standard asymmetric encryption algorithms, such as RS256. The header of the JWT contains information about the key and encryption method used to sign the token:
+Azure AD issues tokens signed using industry-standard asymmetric encryption algorithms, such as RS256. The header of the JWT contains information about the key and encryption method used to sign the token:
```json
{
  "typ": "JWT",
  "alg": "RS256",
  "x5t": "iBjL1Rcqzhiy4fpxIxdZqohM2Yk",
  "kid": "iBjL1Rcqzhiy4fpxIxdZqohM2Yk"
}
```
-The `alg` claim indicates the algorithm that was used to sign the token, while the `kid` claim indicates the particular public key that was used to validate the token.
+The `alg` claim indicates the algorithm used to sign the token, while the `kid` claim indicates the particular public key that was used to validate the token.
-At any given point in time, Azure AD may sign an ID token using any one of a certain set of public-private key pairs. Azure AD rotates the possible set of keys on a periodic basis, so the application should be written to handle those key changes automatically. A reasonable frequency to check for updates to the public keys used by Azure AD is every 24 hours.
+At any given point in time, Azure AD may sign an ID token using any one of a certain set of public-private key pairs. Azure AD rotates the possible set of keys on a periodic basis, so write the application to handle those key changes automatically. A reasonable frequency to check for updates to the public keys used by Azure AD is every 24 hours.
Acquire the signing key data necessary to validate the signature by using the [OpenID Connect metadata document](v2-protocols-oidc.md#fetch-the-openid-configuration-document) located at:
https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration
The following information describes the metadata document:

- Is a JSON object that contains several useful pieces of information, such as the location of the various endpoints required for doing OpenID Connect authentication.
-- Includes a `jwks_uri`, which gives the location of the set of public keys that correspond to the private keys used to sign tokens. The JSON Web Key (JWK) located at the `jwks_uri` contains all of the public key information in use at that particular moment in time. The JWK format is described in [RFC 7517](https://tools.ietf.org/html/rfc7517). The application can use the `kid` claim in the JWT header to select the public key, from this document, which corresponds to the private key that has been used to sign a particular token. It can then do signature validation using the correct public key and the indicated algorithm.
+- Includes a `jwks_uri`, which gives the location of the set of public keys that correspond to the private keys used to sign tokens. The JSON Web Key (JWK) located at the `jwks_uri` contains all of the public key information in use at that particular moment in time. [RFC 7517](https://tools.ietf.org/html/rfc7517) describes the JWK format. The application can use the `kid` claim in the JWT header to select the public key, from this document, which corresponds to the private key that has been used to sign a particular token. It can then do signature validation using the correct public key and the indicated algorithm.
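As a concrete illustration, here's a minimal PowerShell sketch of that flow; the `kid` value is hypothetical and would come from the header of the token being validated:

```powershell
# Fetch the OpenID Connect metadata, then the signing keys (JWKS) it points to.
$config = Invoke-RestMethod -Uri "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration"
$jwks   = Invoke-RestMethod -Uri $config.jwks_uri

# Select the public key whose 'kid' matches the 'kid' claim in the token header.
$tokenKid   = "iBjL1Rcqzhiy4fpxIxdZqohM2Yk"   # hypothetical value from the JWT header
$signingKey = $jwks.keys | Where-Object { $_.kid -eq $tokenKid }
$signingKey
```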
> [!NOTE]
> Use the `kid` claim to validate the token. Though v1.0 tokens contain both the `x5t` and `kid` claims, v2.0 tokens contain only the `kid` claim.

Doing signature validation is outside the scope of this document. There are many open-source libraries available for helping with signature validation if necessary. However, the Microsoft identity platform has one token signing extension to the standards, which is custom signing keys.
-If the application has custom signing keys as a result of using the [claims-mapping](active-directory-claims-mapping.md) feature, append an `appid` query parameter that contains the application ID to get a `jwks_uri` that points to the signing key information of the application, which should be used for validation. For example: `https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` contains a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`.
+If the application has custom signing keys as a result of using the [claims-mapping](active-directory-claims-mapping.md) feature, append an `appid` query parameter that contains the application ID. For validation, use the `jwks_uri` that points to the signing key information of the application. For example: `https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` contains a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`.
### Claims based authorization
-The business logic of an application determines how authorization should be handled. The general approach to authorization based on token claims, and which claims should be used, is described below.
+The business logic of an application determines how authorization should be handled. The general approach to authorization based on token claims, and which claims should be used, is described in the following sections.
After a token is validated with the correct `aud` claim, the token's tenant, subject, and actor must be authorized.
First, always check that the `tid` in a token matches the tenant ID used to stor
#### Subject
-Next, to determine if the token subject, such as the user (or app itself in the case of an app-only token), is authorized, either check for specific `sub` or `oid` claims, or check that the subject belongs to an appropriate role or group with the `roles`, `groups`, `wids` claims.
+Next, to determine if the token subject, such as the user (or app itself for an app-only token), is authorized, either check for specific `sub` or `oid` claims, or check that the subject belongs to an appropriate role or group with the `roles`, `groups`, `wids` claims.
For example, use the immutable claim values `tid` and `oid` as a combined key for application data and determining whether a user should be granted access.
The `roles`, `groups` or `wids` claims can also be used to determine if the subj
Lastly, when an app is acting for a user, this client app (the actor) must also be authorized. Use the `scp` claim (scope) to validate that the app has permission to perform an operation.
-Scopes are defined by the application, and the absence of `scp` claim means full actor permissions.
+The application defines the scopes, and the absence of the `scp` claim means full actor permissions.
> [!NOTE] > An application may handle app-only tokens (requests from applications without users, such as daemon apps) and want to authorize a specific application across multiple tenants, rather than individual service principal IDs. In that case, check for an app-only token using the `idtyp` optional claim and use the `appid` claim (for v1.0 tokens) or the `azp` claim (for v2.0 tokens) along with `tid` to determine authorization based on tenant and application ID.
Scopes are defined by the application, and the absence of `scp` claim means full
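Putting these checks together, here's a minimal PowerShell sketch of the tenant, subject, and actor checks described above. The claim values and allowed lists are hypothetical, and the sketch assumes the token has already passed signature and audience validation:

```powershell
# Hypothetical claims parsed from a validated token.
$claims = @{
    tid = "11111111-1111-1111-1111-111111111111"
    oid = "22222222-2222-2222-2222-222222222222"
    scp = "Files.Read User.Read"
}

$expectedTenantId   = "11111111-1111-1111-1111-111111111111"    # tenant the stored data belongs to
$authorizedSubjects = @("22222222-2222-2222-2222-222222222222") # object IDs allowed to access it
$requiredScope      = "Files.Read"

if ($claims.tid -ne $expectedTenantId)              { throw "Token is from an unexpected tenant." }
if ($claims.oid -notin $authorizedSubjects)         { throw "Subject isn't authorized." }
if ($requiredScope -notin ($claims.scp -split ' ')) { throw "Client is missing the required scope." }
```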
## Token revocation
-Refresh tokens can be invalidated or revoked at any time, for different reasons. The reasons fall into the categories of timeouts and revocations.
+Refresh tokens can be invalidated or revoked at any time, for different reasons. The reasons fall into two categories: timeouts and revocations.
### Token timeouts
-When an organization uses [token lifetime configuration](active-directory-configurable-token-lifetimes.md), the lifetime of refresh tokens can be altered. It's expected that some tokens can go without use. For example, the user doesn't open the application for three months and then the token expires. Applications can encounter scenarios where the login server rejects a refresh token due to its age.
+Organizations can use [token lifetime configuration](active-directory-configurable-token-lifetimes.md) to alter the lifetime of refresh tokens. Some tokens can go without use. For example, the user doesn't open the application for three months and then the token expires. Applications can encounter scenarios where the login server rejects a refresh token due to its age.
-- MaxInactiveTime: If the refresh token hasn't been used within the time dictated by the MaxInactiveTime, the refresh token is no longer valid.-- MaxSessionAge: If MaxAgeSessionMultiFactor or MaxAgeSessionSingleFactor have been set to something other than their default (Until-revoked), then reauthentication is required after the time set in the MaxAgeSession* elapses. Examples:
+- MaxInactiveTime: Specifies the amount of time a refresh token can go unused before it's no longer valid.
+- MaxSessionAge: If MaxAgeSessionMultiFactor or MaxAgeSessionSingleFactor is set to something other than their default (Until-revoked), the user must reauthenticate after the time set in the MaxAgeSession*. Examples:
- The tenant has a MaxInactiveTime of five days, and the user went on vacation for a week, and so Azure AD hasn't seen a new token request from the user in seven days. The next time the user requests a new token, they'll find their refresh token has been revoked, and they must enter their credentials again.
- - A sensitive application has a MaxAgeSessionSingleFactor of one day. If a user logs in on Monday, and on Tuesday (after 25 hours have elapsed), they'll be required to reauthenticate.
+ - A sensitive application has a MaxAgeSessionSingleFactor of one day. If a user logs in on Monday, and on Tuesday (after 25 hours have elapsed), they must reauthenticate.
### Token revocations
-Refresh tokens can be revoked by the server due to a change in credentials, or due to use or administrative action. Refresh tokens are in the classes of confidential clients and public clients.
+The server can revoke refresh tokens at any time due to a change in credentials, or due to use or administrative action. Refresh tokens fall into two classes: tokens issued to confidential clients and tokens issued to public clients.
| Change | Password-based cookie | Password-based token | Non-password-based cookie | Non-password-based token | Confidential client token |
|--|--|--|--|--|--|
For more information, see [Primary Refresh Tokens](../devices/concept-primary-re
## Next steps

-- Learn about [`id_tokens` in Azure AD](id-tokens.md).
-- Learn about [permission and consent](permissions-consent-overview.md).
+- Learn more about the [security tokens used in Azure AD](security-tokens.md).
active-directory Custom Claims Provider Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-claims-provider-overview.md
Previously updated : 03/13/2023 Last updated : 03/31/2023
For an example using a custom claims provider with the **token issuance start**
- Learn how to [create and register a custom claims provider](custom-extension-get-started.md) with a sample Open ID Connect application.
- If you already have a custom claims provider registered, you can configure a [SAML application](custom-extension-configure-saml-app.md) to receive tokens with claims sourced from an external store.
+- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article.
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
Previously updated : 03/13/2023 Last updated : 03/31/2023
The following screenshot demonstrates how to configure the Azure HTTP trigger fu
} ```
- The code starts with reading the incoming JSON object. Azure AD sends the JSON object to your API. In this example, it reads the correlation ID value. Then, the code returns a collection of claims, including the original correlation ID, the version of your Azure Function, date of birth and custom role that is returned to Azure AD.
+ The code starts with reading the incoming JSON object. Azure AD sends the [JSON object](./custom-claims-provider-reference.md) to your API. In this example, it reads the correlation ID value. Then, the code returns to Azure AD a collection of claims, including the original correlation ID, the version of your Azure Function, the date of birth, and a custom role (a sketch of the response shape appears after these steps).
1. From the top menu, select **Get Function Url**, and copy the URL. In the next step, the function URL will be used and referred to as `{Function_Url}`.
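For reference, here's a sketch of the response shape such a function returns to Azure AD. The `@odata.type` values follow the custom claims provider contract, and the claim values are illustrative:

```json
{
  "data": {
    "@odata.type": "microsoft.graph.onTokenIssuanceStartResponseData",
    "actions": [
      {
        "@odata.type": "microsoft.graph.tokenIssuanceStart.provideClaimsForToken",
        "claims": {
          "correlationId": "aaaa0000-bb11-2222-33cc-444444dddddd",
          "apiVersion": "1.0.0",
          "dateOfBirth": "01/01/2000",
          "customRoles": [ "Writer", "Editor" ]
        }
      }
    ]
  }
}
```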
active-directory Migrate Python Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-python-adal-msal.md
Python Previously updated : 11/11/2019 Last updated : 03/30/2023
You can learn more about MSAL and get started with an [overview of the Microsoft
ADAL works with the Azure Active Directory (Azure AD) v1.0 endpoint. The Microsoft Authentication Library (MSAL) works with the Microsoft identity platform--formerly known as the Azure Active Directory v2.0 endpoint. The Microsoft identity platform differs from Azure AD v1.0 in that it:

Supports:
- - Work and school accounts (Azure AD provisioned accounts)
- - Personal accounts (such as Outlook.com or Hotmail.com)
- - Your customers who bring their own email or social identity (such as LinkedIn, Facebook, Google) via the Azure AD B2C offering
+
+- Work and school accounts (Azure AD provisioned accounts)
+- Personal accounts (such as Outlook.com or Hotmail.com)
+- Your customers who bring their own email or social identity (such as LinkedIn, Facebook, Google) via the Azure AD B2C offering
- Is standards compatible with:
  - OAuth v2.0
For more information about MSAL, see [MSAL overview](./msal-overview.md).
### Scopes not resources
-ADAL Python acquires tokens for resources, but MSAL Python acquires tokens for scopes. The API surface in MSAL Python does not have resource parameter anymore. You would need to provide scopes as a list of strings that declare the desired permissions and resources that are requested. To see some example of scopes, see [Microsoft Graph's scopes](/graph/permissions-reference).
+ADAL Python acquires tokens for resources, but MSAL Python acquires tokens for scopes. The API surface in MSAL Python doesn't have a resource parameter anymore. You need to provide scopes as a list of strings that declare the desired permissions and resources that are requested. To see some examples of scopes, see [Microsoft Graph's scopes](/graph/permissions-reference).
-You can add the `/.default` scope suffix to the resource to help migrate your apps from the v1.0 endpoint (ADAL) to the Microsoft identity platform (MSAL). For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource is not in the URL form, but a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value as `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
+You can add the `/.default` scope suffix to the resource to help migrate your apps from the v1.0 endpoint (ADAL) to the Microsoft identity platform (MSAL). For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource isn't in the URL form, but a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value as `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
For more details about the different types of scopes, refer to [Permissions and consent in the Microsoft identity platform](./v2-permissions-and-consent.md) and the [Scopes for a Web API accepting v1.0 tokens](./msal-v1-app-scopes.md) articles.

### Error handling
-Azure Active Directory Authentication Library (ADAL) for Python uses the exception `AdalError` to indicate that there's been a problem. MSAL for Python typically uses error codes, instead. For more information, see [MSAL for Python error handling](msal-error-handling-python.md).
+ADAL for Python uses the exception `AdalError` to indicate that there's been a problem. MSAL for Python typically uses error codes, instead. For more information, see [MSAL for Python error handling](msal-error-handling-python.md).
### API changes

The following table lists an API in ADAL for Python, and the one to use in its place in MSAL for Python:
-| ADAL for Python API | MSAL for Python API |
-| - | - |
-| [AuthenticationContext](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext) | [PublicClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.__init__) or [ConfidentialClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.__init__) |
-| N/A | [PublicClientApplication.acquire_token_interactive()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_interactive) |
-| N/A | [ConfidentialClientApplication.initiate_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.initiate_auth_code_flow) |
-| [acquire_token_with_authorization_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_authorization_code) | [ConfidentialClientApplication.acquire_token_by_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_auth_code_flow) |
-| [acquire_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token) | [PublicClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_silent) or [ConfidentialClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_silent) |
-| [acquire_token_with_refresh_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_refresh_token) | These two helpers are intended to be used during [migration](#migrate-existing-refresh-tokens-for-msal-python) only: [PublicClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_refresh_token) or [ConfidentialClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_refresh_token) |
-| [acquire_user_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_user_code) | [initiate_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.initiate_device_flow) |
-| [acquire_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_device_code) and [cancel_request_to_get_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.cancel_request_to_get_token_with_device_code) | [acquire_token_by_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_device_flow) |
-| [acquire_token_with_username_password()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_username_password) | [acquire_token_by_username_password()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_username_password) |
-| [acquire_token_with_client_credentials()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_credentials) and [acquire_token_with_client_certificate()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_certificate) | [acquire_token_for_client()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client) |
-| N/A | [acquire_token_on_behalf_of()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) |
-| [TokenCache()](https://adal-python.readthedocs.io/en/latest/#adal.TokenCache) | [SerializableTokenCache()](https://msal-python.readthedocs.io/en/latest/#msal.SerializableTokenCache) |
-| N/A | Cache with persistence, available from [MSAL Extensions](https://github.com/marstr/original-microsoft-authentication-extensions-for-python) |
+| ADAL for Python API | MSAL for Python API |
+| -- | - |
+| [AuthenticationContext](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext) | [PublicClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.__init__) or [ConfidentialClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.__init__) |
+| N/A | [PublicClientApplication.acquire_token_interactive()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_interactive) |
+| N/A | [ConfidentialClientApplication.initiate_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.initiate_auth_code_flow) |
+| [acquire_token_with_authorization_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_authorization_code) | [ConfidentialClientApplication.acquire_token_by_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_auth_code_flow) |
+| [acquire_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token) | [PublicClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_silent) or [ConfidentialClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_silent) |
+| [acquire_token_with_refresh_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_refresh_token) | These two helpers are intended to be used during [migration](#migrate-existing-refresh-tokens-for-msal-python) only: [PublicClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_refresh_token) or [ConfidentialClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_refresh_token) |
+| [acquire_user_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_user_code) | [initiate_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.initiate_device_flow) |
+| [acquire_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_device_code) and [cancel_request_to_get_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.cancel_request_to_get_token_with_device_code) | [acquire_token_by_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_device_flow) |
+| [acquire_token_with_username_password()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_username_password) | [acquire_token_by_username_password()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_username_password) |
+| [acquire_token_with_client_credentials()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_credentials) and [acquire_token_with_client_certificate()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_certificate) | [acquire_token_for_client()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client) |
+| N/A | [acquire_token_on_behalf_of()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) |
+| [TokenCache()](https://adal-python.readthedocs.io/en/latest/#adal.TokenCache) | [SerializableTokenCache()](https://msal-python.readthedocs.io/en/latest/#msal.SerializableTokenCache) |
+| N/A | Cache with persistence, available from [MSAL Extensions](https://github.com/marstr/original-microsoft-authentication-extensions-for-python) |
## Migrate existing refresh tokens for MSAL Python
-The Microsoft Authentication Library (MSAL) abstracts the concept of refresh tokens. MSAL Python provides an in-memory token cache by default so that you don't need to store, lookup, or update refresh tokens. Users will also see fewer sign-in prompts because refresh tokens can usually be updated without user intervention. For more information about the token cache, see [Custom token cache serialization in MSAL for Python](msal-python-token-cache-serialization.md).
+MSAL abstracts the concept of refresh tokens. MSAL Python provides an in-memory token cache by default so that you don't need to store, lookup, or update refresh tokens. Users will also see fewer sign-in prompts because refresh tokens can usually be updated without user intervention. For more information about the token cache, see [Custom token cache serialization in MSAL for Python](msal-python-token-cache-serialization.md).
The following code will help you migrate your refresh tokens managed by another OAuth2 library (including but not limited to ADAL Python) to be managed by MSAL for Python. One reason for migrating those refresh tokens is to prevent existing users from needing to sign in again when you migrate your app to MSAL for Python.
active-directory Registration Config Change Token Lifetime How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-change-token-lifetime-how-to.md
This article shows how to use Azure AD PowerShell to set an access token lifetim
To set an access token lifetime policy, download the [Azure AD PowerShell Module](https://www.powershellgallery.com/packages/AzureADPreview). Run the **Connect-AzureAD -Confirm** command.
-HereΓÇÖs an example policy that requires users to authenticate more frequently in your web app. This policy sets the lifetime of the access to the service principal of your web app. Create the policy and assign it to your service principal. You also need to get the ObjectId of your service principal.
+Here's an example policy that requires users to authenticate less frequently in your web app. This policy sets the access token lifetime for the service principal of your web app. Create the policy and assign it to your service principal. You also need to get the ObjectId of your service principal.
```powershell $policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"02:00:00"}}') -DisplayName "WebPolicyScenario" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"
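# A sketch of the assignment step described in the preceding paragraph, assuming the
# AzureADPreview cmdlets Get-AzureADServicePrincipal and Add-AzureADServicePrincipalPolicy;
# the display name is hypothetical.
$sp = Get-AzureADServicePrincipal -Filter "DisplayName eq 'My Web App'"
Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id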
active-directory Scenario Protected Web Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddMicrosoftIdentityWebApi(Configuration); services.Configure<JwtBearerOptions>(JwtBearerDefaults.AuthenticationScheme, options => {
- var existingOnTokenValidatedHandler = options.Events.OnTokenValidated;
- options.Events.OnTokenValidated = async context =>
- {
- await existingOnTokenValidatedHandler(context);
- // Your code to add extra configuration that will be executed after the current event implementation.
- options.TokenValidationParameters.ValidIssuers = new[] { /* list of valid issuers */ };
- options.TokenValidationParameters.ValidAudiences = new[] { /* list of valid audiences */};
- };
+ options.TokenValidationParameters.ValidAudiences = new[] { /* list of valid audiences */};
}); ```
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
As an administrator in Azure Active Directory, open PowerShell, run ``Connect-Az
```PowerShell
Get-AzureADUserRegisteredDevice -ObjectId johndoe@contoso.com | Set-AzureADDevice -AccountEnabled $false
```
+
+>[!NOTE]
+> For information on the specific roles that can perform these steps, review [Azure AD built-in roles](../roles/permissions-reference.md).
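Alongside disabling registered devices, a sketch of revoking the user's refresh tokens with the same AzureAD module (the UPN is illustrative):

```powershell
# Revoke all refresh tokens issued to the user, forcing reauthentication everywhere.
Revoke-AzureADUserAllRefreshToken -ObjectId johndoe@contoso.com
```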
## When access is revoked

Once admins have taken the above steps, the user can't gain new tokens for any application tied to Azure Active Directory. The elapsed time between revocation and the user losing their access depends on how the application is granting access:
Once admins have taken the above steps, the user can't gain new tokens for any a
- Use [Azure AD SaaS App Provisioning](../app-provisioning/user-provisioning.md). Azure AD SaaS App Provisioning typically runs automatically every 20-40 minutes. [Configure Azure AD provisioning](../saas-apps/tutorial-list.md) to deprovision or deactivate disabled users in applications.
- - For applications that don't use Azure AD SaaS App Provisioning, use [Identity Manager (MIM)](/microsoft-identity-manager/mim-how-provision-users-adds) or a 3rd party solution to automate the deprovisioning of users.
+ - For applications that don't use Azure AD SaaS App Provisioning, use [Identity Manager (MIM)](/microsoft-identity-manager/mim-how-provision-users-adds) or a third-party solution to automate the deprovisioning of users.
- Identify and develop a process for applications that require manual deprovisioning. Ensure admins can quickly run the required manual tasks to deprovision the user from these apps when needed.
- [Manage your devices and applications with Microsoft Intune](/mem/intune/remote-actions/device-management). Intune-managed [devices can be reset to factory settings](/mem/intune/remote-actions/devices-wipe). If the device is unmanaged, you can [wipe the corporate data from managed apps](/mem/intune/apps/apps-selective-wipe). These processes are effective for removing potentially sensitive data from end users' devices. However, for either process to be triggered, the device must be connected to the internet. If the device is offline, the device will still have access to any locally stored data.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 03/01/2023 Last updated : 03/31/2023
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## March 2023
+
+### Updated articles
+
+- [Invite internal users to B2B collaboration](invite-internal-users.md)
+- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md)
+- [Add Azure Active Directory (Azure AD) as an identity provider for External Identities](azure-ad-account.md)
+- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)
+- [Billing model for Azure AD External Identities](external-identities-pricing.md)
+- [Tutorial: Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
+
## February 2023

### Updated articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Add Facebook as an identity provider for External Identities](facebook-federation.md)
- [Leave an organization as an external user](leave-the-organization.md)
- [External Identities in Azure Active Directory](external-identities-overview.md)
-- [External Identities documentation](index.yml)
-
-## December 2022
-
-### Updated articles
-
-- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
-- [Azure Active Directory B2B collaboration API and customization](customize-invitation-api.md)
-- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
-- [Auditing and reporting a B2B collaboration user](auditing-and-reporting.md)
+- [External Identities documentation](index.yml)
active-directory Customize Workflow Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md
Emails sent out using Lifecycle workflows can be customized to have your own com
- A verified domain. To add a custom domain, see: [Managing custom domain names in your Azure Active Directory](../enterprise-users/domains-manage.md)
- Custom Branding set within Azure AD if you want to have your custom branding used in emails. To set organizational branding within your Azure tenant, see: [Configure your company branding (preview)](../fundamentals/how-to-customize-branding.md).
+> [!NOTE]
+> We recommend using a domain that has the appropriate DNS records for email validation, such as SPF, DKIM, DMARC, and MX, because this complies with [RFC 2142](https://www.ietf.org/rfc/rfc2142.txt) for sending and receiving email. For more information, see [Learn more about Exchange Online Email Routing](/exchange/mail-flow-best-practices/mail-flow-best-practices).
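For illustration, a typical SPF TXT record for a custom domain that routes mail through Microsoft 365 looks like the following; this is a sketch, and your record depends on your mail routing:

```text
v=spf1 include:spf.protection.outlook.com -all
```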
+
After these prerequisites are satisfied, follow these steps:

1. On the Lifecycle workflows page, select **Workflow settings (Preview)**.
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
To make use of workload identity risk, including the new **Risky workload identi
- Security Administrator
- Security Operator
- Security Reader

Users assigned the Conditional Access administrator role can create policies that use risk as a condition.

## Workload identity risk detections
We detect risk on workload identities across sign-in behavior and offline indica
| Detection name | Detection type | Description |
| --- | --- | --- |
| Azure AD threat intelligence | Offline | This risk detection indicates some activity that is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. |
| Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baseline sign-in behavior for workload identities in your tenant over a period of between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. |
-| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the accountΓÇÖs risk history (via UI or API). |
+| Admin confirmed service principal compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using the riskyServicePrincipals API. To see which admin confirmed this account compromised, check the account's risk history (via UI or API). |
| Leaked Credentials | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks in the credentials in public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
-| Malicious application | Offline | This detection indicates that Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application. Note: These applications will show `DisabledDueToViolationOfServicesAgreement` on the `disabledByMicrosoftStatus` property on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph. To prevent them from being instantiated in your organization again in the future, you cannot delete these objects. |
-| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that may be violating our terms of service, but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
+| Malicious application | Offline | This detection combines alerts from Identity Protection and Microsoft Defender for Cloud Apps to indicate when Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application. Note: These applications will show `DisabledDueToViolationOfServicesAgreement` on the `disabledByMicrosoftStatus` property on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph. To prevent them from being instantiated in your organization again in the future, you cannot delete these objects. |
+| Suspicious application | Offline | This detection indicates that Identity Protection or Microsoft Defender for Cloud Apps have identified an application that may be violating our terms of service but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
| Anomalous service principal activity | Offline | This risk detection baselines normal administrative service principal behavior in Azure AD, and spots anomalous patterns of behavior like suspicious changes to the directory. The detection is triggered against the administrative service principal making the change or the object that was changed. |

## Identify risky workload identities
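For example, a sketch of listing risky workload identities through the `riskyServicePrincipals` API mentioned in the table, currently exposed through the Microsoft Graph `beta` endpoint:

```http
GET https://graph.microsoft.com/beta/identityProtection/riskyServicePrincipals
```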
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
$assignments | ForEach-Object {
1. Get the enterprise application. Filter by DisplayName. ```http
- GET servicePrincipal?$filter=DisplayName eq '{appDisplayName}'
+ GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq '{appDisplayName}'
``` Record the following values from the response body:
$assignments | ForEach-Object {
1. Get the user by filtering by the user's principal name. Record the object ID of the user. ```http
- GET /users/{userPrincipalName}
+ GET https://graph.microsoft.com/v1.0/users/{userPrincipalName}
``` 1. Assign the user to the application. ```http
- POST /servicePrincipals/resource-servicePrincipal-id/appRoleAssignedTo
+ POST https://graph.microsoft.com/v1.0/servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo
{ "principalId": "33ad69f9-da99-4bed-acd0-3f24235cb296",
$assignments | ForEach-Object {
## Unassign users, and groups, from an application To unassign user and groups from the application, run the following query.
-1. Get the enterprise application. Filter by DisplayName.
+1. Get the enterprise application. Filter by displayName.
```http
- GET servicePrincipal?$filter=DisplayName eq '{appDisplayName}'
+ GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq '{appDisplayName}'
``` 1. Get the list of appRoleAssignments for the application.
- ```http
- GET /servicePrincipals/{id}/appRoleAssignedTo
- ```
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}/appRoleAssignedTo
+ ```
1. Remove the appRoleAssignments by specifying the appRoleAssignment ID. ```http
- DELETE /servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
``` :::zone-end
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
To delete an enterprise application, you need:
Delete an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 1. To get the list of service principals in your tenant, run the following query.
-
+ # [HTTP](#tab/http)
```http GET https://graph.microsoft.com/v1.0/servicePrincipals ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/list-serviceprincipal-csharp-snippets.md)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/list-serviceprincipal-javascript-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/list-serviceprincipal-go-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/list-serviceprincipal-powershell-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/list-serviceprincipal-php-snippets.md)]
+
+
+ 1. Record the ID of the enterprise app you want to delete. 1. Delete the enterprise application.-
+
+ # [HTTP](#tab/http)
```http DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id} ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/delete-serviceprincipal-csharp-snippets.md)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/delete-serviceprincipal-javascript-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/delete-serviceprincipal-go-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/delete-serviceprincipal-powershell-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/delete-serviceprincipal-php-snippets.md)]
+
+
:::zone-end ## Next steps - [Restore a deleted enterprise application](restore-application.md)-
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
You need to consent to the following permissions:
Run the following queries to review delegated permissions granted to an application.
-1. Get Service Principal using objectID
+1. Get service principal using the object ID.
```http
- GET /servicePrincipals/{id}
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}
``` Example: ```http
- GET /servicePrincipals/57443554-98f5-4435-9002-852986eea510
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/00063ffc-54e9-405d-b8f3-56124728e051
``` 1. Get all delegated permissions for the service principal ```http
- GET /servicePrincipals/{id}/oauth2PermissionGrants
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}/oauth2PermissionGrants
``` 1. Remove delegated permissions using oAuth2PermissionGrants ID. ```http
- DELETE /oAuth2PermissionGrants/{id}
+ DELETE https://graph.microsoft.com/v1.0/oAuth2PermissionGrants/{id}
``` ### Application permissions
Run the following queries to review application permissions granted to an applic
1. Get all application permissions for the service principal ```http
- GET /servicePrincipals/{servicePrincipal-id}/appRoleAssignments
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}/appRoleAssignments
``` 1. Remove application permissions using appRoleAssignment ID ```http
- DELETE /servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
``` ## Invalidate the refresh tokens
Run the following queries to remove appRoleAssignments of users or groups to the
1. Get Service Principal using objectID. ```http
- GET /servicePrincipals/{id}
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}
```

Example:

```http
- GET /servicePrincipals/57443554-98f5-4435-9002-852986eea510
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/57443554-98f5-4435-9002-852986eea510
```

1. Get the Azure AD app role assignments using the object ID of the service principal.

```http
- GET /servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo
```

1. Revoke the refresh tokens for users and groups assigned to the application using the appRoleAssignment ID.

```http
- DELETE /servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
```

:::zone-end
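For reference, the review-and-remove flow above can also be scripted with Microsoft Graph PowerShell. This is a minimal sketch, not the article's documented procedure; it assumes the Microsoft Graph PowerShell SDK is installed, and the service principal display name is a placeholder:

```powershell
Connect-MgGraph -Scopes "Application.ReadWrite.All", "DelegatedPermissionGrant.ReadWrite.All", "AppRoleAssignment.ReadWrite.All"

# Look up the service principal (the display name is a placeholder).
$sp = Get-MgServicePrincipal -Filter "displayName eq 'My App'"

# Review and remove delegated permissions (OAuth2 permission grants).
Get-MgServicePrincipalOauth2PermissionGrant -ServicePrincipalId $sp.Id |
    ForEach-Object { Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id }

# Review and remove application permissions (app role assignments).
Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $sp.Id |
    ForEach-Object { Remove-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $sp.Id -AppRoleAssignmentId $_.Id }
```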
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
To recover your enterprise application with its previous configurations, first d
Get-AzureADMSDeletedDirectoryObject -Id <id>
```
-Replace id with the object ID of the service principal that you want to restore.
+Replace `<id>` with the object ID of the service principal that you want to restore.
:::zone-end
Replace id with the object ID of the service principal that you want to restore.
```powershell
Get-MgDirectoryDeletedItem -DirectoryObjectId <id>
```
-Replace id with the object ID of the service principal that you want to restore.
+Replace `<id>` with the object ID of the service principal that you want to restore.
:::zone-end
Alternatively, if you want to get the specific enterprise application that was d
Restore-AzureADMSDeletedDirectoryObject -Id <id>
```
-Replace id with the object ID of the service principal that you want to restore.
+Replace `<id>` with the object ID of the service principal that you want to restore.
:::zone-end
Replace id with the object ID of the service principal that you want to restore.
Restore-MgDirectoryObject -DirectoryObjectId <id>
```
-Replace id with the object ID of the service principal that you want to restore.
+Replace `<id>` with the object ID of the service principal that you want to restore.
:::zone-end
Replace id with the object ID of the service principal that you want to restore.
1. To restore the enterprise application, run the following query:
+ # [HTTP](#tab/http)
```http
POST https://graph.microsoft.com/v1.0/directory/deletedItems/{id}/restore
```
-Replace id with the object ID of the service principal that you want to restore.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/restore-directory-deleteditem-csharp-snippets.md)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/restore-directory-deleteditem-javascript-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/java/restore-directory-deleteditem-java-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/restore-directory-deleteditem-go-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/restore-directory-deleteditem-powershell-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/restore-directory-deleteditem-php-snippets.md)]
+
+
+
+Replace `<id>` with the object ID of the service principal that you want to restore.
:::zone-end
Remove-AzureADMSDeletedDirectoryObject -Id <id>
To permanently delete a soft-deleted enterprise application, run the following query in Microsoft Graph Explorer.
+# [HTTP](#tab/http)
```http
DELETE https://graph.microsoft.com/v1.0/directory/deletedItems/{object-id}
```
+# [C#](#tab/csharp)
+
+# [JavaScript](#tab/javascript)
+
+# [Java](#tab/java)
+
+# [Go](#tab/go)
+
+# [PowerShell](#tab/powershell)
+
+# [PHP](#tab/php)
:::zone-end

## Next steps
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Previously updated : 06/24/2022 Last updated : 03/31/2023 ms.tool: azure-cli, azure-powershell
In this article, we set up a virtual machine to use managed identities to connec
## Create a resource group
-Create a resource group called **mi-test**. We'll use this resource group for all resources used in this tutorial.
+Create a resource group called **mi-test**. We use this resource group for all resources used in this tutorial.
- [Create a resource group using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups)
- [Create a resource group using the CLI](../../azure-resource-manager/management/manage-resource-groups-cli.md#create-resource-groups)
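For instance, a minimal Azure PowerShell sketch (the location is an assumption; use whichever region you prefer):

```powershell
# Requires the Az module and an authenticated session (Connect-AzAccount).
New-AzResourceGroup -Name "mi-test" -Location "eastus"
```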
az vm create --resource-group <MyResourceGroup> --name <myVM> --image UbuntuLTS
# [Resource Manager Template](#tab/azure-resource-manager)
-Depending on your API version, you have to take [different steps](qs-configure-template-windows-vm.md#user-assigned-managed-identity). If your apiVersion is 2018-06-01, your user-assigned managed identities are stored in the userAssignedIdentities dictionary format and the ```<identityName>``` value is the name of a variable that you define in the variables section of your template. In the variable, you point to the user assigned managed identity that you want to assign.
+Depending on your API version, you have to take [different steps](qs-configure-template-windows-vm.md#user-assigned-managed-identity). If your apiVersion is 2018-06-01, your user-assigned managed identities are stored in the userAssignedIdentities dictionary format. The ```<identityName>``` value is the name of a variable that you define in the variables section of your template. In the variable, you point to the user-assigned managed identity that you want to assign.
```json
"variables": {
To use the sample below, you need to have the following NuGet packages:
- Microsoft.Azure.Cosmos
- Microsoft.Azure.Management.CosmosDB
-In addition to the NuGet packages above, you also need to enable **Include prerelease** and then add **Azure.ResourceManager.CosmosDB**.
+In addition to the NuGet packages above, you also need to enable **Include prerelease** and then add **Azure.ResourceManager.CosmosDB**.
```csharp
using Azure.Identity;
namespace MITest
{
    static async Task Main(string[] args)
    {
+ // Replace the placeholders with your own values
var subscriptionId = "Your subscription ID";
var resourceGroupName = "Your resource group";
var accountName = "Cosmos DB Account name";
var databaseName = "mi-test";
var containerName = "container01";
+ // Authenticate to Azure using Managed Identity (system-assigned or user-assigned)
var tokenCredential = new DefaultAzureCredential();
- // create the management clientSS
- var managementClient = new CosmosDBManagementClient(subscriptionId, tokenCredential);
+ // Create the Cosmos DB management client using the subscription ID and token credential
+ var managementClient = new CosmosDBManagementClient(tokenCredential)
+ {
+ SubscriptionId = subscriptionId
+ };
- // create the data client
- var dataClient = new CosmosClient("https://[Account].documents.azure.com:443/", tokenCredential);
+ // Create the Cosmos DB data client using the account URL and token credential
+ var dataClient = new CosmosClient($"https://{accountName}.documents.azure.com:443/", tokenCredential);
- // create a new database
- var createDatabaseOperation = await managementClient.SqlResources.StartCreateUpdateSqlDatabaseAsync(resourceGroupName, accountName, databaseName,
+ // Create a new database using the management client
+ var createDatabaseOperation = await managementClient.SqlResources.StartCreateUpdateSqlDatabaseAsync(
+ resourceGroupName,
+ accountName,
+ databaseName,
    new SqlDatabaseCreateUpdateParameters(new SqlDatabaseResource(databaseName), new CreateUpdateOptions()));
await createDatabaseOperation.WaitForCompletionAsync();
- // create a new container
- var createContainerOperation = await managementClient.SqlResources.StartCreateUpdateSqlContainerAsync(resourceGroupName, accountName, databaseName, containerName,
+ // Create a new container using the management client
+ var createContainerOperation = await managementClient.SqlResources.StartCreateUpdateSqlContainerAsync(
+ resourceGroupName,
+ accountName,
+ databaseName,
+ containerName,
    new SqlContainerCreateUpdateParameters(new SqlContainerResource(containerName), new CreateUpdateOptions()));
await createContainerOperation.WaitForCompletionAsync();
- // create a new item
+ // Create a new item in the container using the data client
var partitionKey = "pkey";
var id = Guid.NewGuid().ToString();
await dataClient.GetContainer(databaseName, containerName)
    .CreateItemAsync(new { id = id, _partitionKey = partitionKey }, new PartitionKey(partitionKey));
- // read back the item
+ // Read back the item from the container using the data client
var pointReadResult = await dataClient.GetContainer(databaseName, containerName)
    .ReadItemAsync<dynamic>(id, new PartitionKey(partitionKey));
- // run a query
+ // Run a query to get all items from the container using the data client
await dataClient.GetContainer(databaseName, containerName)
    .GetItemQueryIterator<dynamic>("SELECT * FROM c")
    .ReadNextAsync();
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
Previously updated : 03/08/2023 Last updated : 03/31/2023
Currently, there isn't a way to delete a configuration on the **Configurations**
:::image type="content" source="./media/cross-tenant-synchronization-configure/enterprise-applications-configuration-delete.png" alt-text="Screenshot of the Enterprise applications Properties page showing how to delete a configuration." lightbox="./media/cross-tenant-synchronization-configure/enterprise-applications-configuration-delete.png":::
+#### Symptom - Users are skipped because SMS sign-in is enabled on the user
+
+Users are skipped from synchronization. The scoping step includes the following filter with status false: "Filter external users.alternativeSecurityIds EQUALS 'None'".
+
+**Cause**
+
+If SMS sign-in is enabled for a user, the provisioning service skips them.
+
+**Solution**
+
+Disable SMS sign-in for the users. The following script shows how to disable SMS sign-in by using Microsoft Graph PowerShell.
+
+```powershell
+# Disable SMS sign-in options for the users
+
+# Import modules
+Install-Module Microsoft.Graph.Users.Actions
+Install-Module Microsoft.Graph.Identity.SignIns
+Import-Module Microsoft.Graph.Users.Actions
+
+Connect-MgGraph -Scopes "User.Read.All", "Group.ReadWrite.All", "UserAuthenticationMethod.Read.All","UserAuthenticationMethod.ReadWrite","UserAuthenticationMethod.ReadWrite.All"
+
+# The value for phoneAuthenticationMethodId is 3179e48a-750b-4051-897c-87b9720928f7
+$phoneAuthenticationMethodId = "3179e48a-750b-4051-897c-87b9720928f7"
+
+# Get the user details
+$userId = "objectid_of_the_user_in_Azure_AD"
+
+# Validate the value for SmsSignInState
+$smssignin = Get-MgUserAuthenticationPhoneMethod -UserId $userId
+
+if ($smssignin.SmsSignInState -eq "ready") {
+    # Disable SMS sign-in when the state is set to ready
+    Disable-MgUserAuthenticationPhoneMethodSmSignIn -UserId $userId -PhoneAuthenticationMethodId $phoneAuthenticationMethodId
+    Write-Host "SMS sign-in disabled for the user" -ForegroundColor Green
+}
+else {
+    Write-Host "SMS sign-in status not set or found for the user" -ForegroundColor Yellow
+}
+```
## Next steps

- [Tutorial: Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Previously updated : 03/24/2023 Last updated : 03/31/2023
Use the following table to better understand how to resolve errors that you find
|SystemForCrossDomainIdentity<br>ManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).|
|SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Ensure that you either map a single-valued attribute to the property that is throwing an error, update the value in the source to be single-valued, or remove the attribute from the mappings.|

+## Error codes for cross-tenant synchronization

Use the following table to better understand how to resolve errors that you find in the provisioning logs for [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md). For any error codes that are missing, provide feedback by using the link at the bottom of this page.
Use the following table to better understand how to resolve errors that you find
> | AzureDirectoryB2BManagementPolicyCheckFailure | The cross-tenant synchronization policy allowing automatic redemption failed.<br/><br/>The synchronization engine checks to ensure that the administrator of the target tenant has created an inbound cross-tenant synchronization policy allowing automatic redemption. The synchronization engine also checks if the administrator of the source tenant has enabled an outbound policy for automatic redemption. | Ensure that the automatic redemption setting has been enabled for both the source and target tenants. For more information, see [Automatic redemption setting](../multi-tenant-organizations/cross-tenant-synchronization-overview.md#automatic-redemption-setting). |
> | AzureActiveDirectoryQuotaLimitExceeded | The number of objects in the tenant exceeds the directory limit.<br/><br/>Azure AD has limits for the number of objects that can be created in a tenant. | Check whether the quota can be increased. For information about the directory limits and steps to increase the quota, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). |
> |InvitationCreationFailure| The Azure AD provisioning service attempted to invite the user in the target tenant. That invitation failed.| Navigate to the user settings page in Azure AD > external users > collaboration restrictions and ensure that collaboration with that tenant is enabled.|
-> |AzureActiveDirectoryInsufficientRights|When a B2B user in the target tenant has a role other than User, Helpdesk Admin, or User Account Admin, they cannot be deleted.| Please remove the role(s) on the user in the target tenant in order to successfully delete the user in the target tenant.|
+> |AzureActiveDirectoryInsufficientRights|When a B2B user in the target tenant has a role other than User, Helpdesk Admin, or User Account Admin, they cannot be deleted.| Remove the role(s) on the user in the target tenant in order to successfully delete the user in the target tenant.|
+> |AzureActiveDirectoryForbidden|External collaboration settings have blocked invitations.|Navigate to user settings and ensure that [external collaboration settings](../external-identities/external-collaboration-settings-configure.md) are permitted.|
+> |InvitationCreationFailureInvalidPropertyValue|Potential causes:<br/>* The Primary SMTP Address is an invalid value.<br/>* UserType is neither guest nor member.<br/>* The group email address isn't supported. | Potential solutions:<br/>* The Primary SMTP Address has an invalid value. Resolving this issue will likely require updating the mail property of the source user. For more information, see [Prepare for directory synchronization to Microsoft 365](https://aka.ms/DirectoryAttributeValidations)<br/>* Ensure that the userType property is provisioned as type guest or member. This can be fixed by checking your attribute mappings to understand how the userType attribute is mapped.<br/>* The email address of the user matches the email address of a group in the tenant. Update the email address for one of the two objects.|
+> |InvitationCreationFailureAmbiguousUser| The invited user has a proxy address that matches an internal user in the target tenant. The proxy address must be unique. | To resolve this error, delete the existing internal user in the target tenant or remove this user from sync scope.|
## Next steps
active-directory Easy Metrics Auth0 Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easy-metrics-auth0-connector-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Easy Metrics Auth0 Connector
+description: Learn how to configure single sign-on between Azure Active Directory and Easy Metrics Auth0 Connector.
+ Last updated : 03/31/2023
+# Azure Active Directory SSO integration with Easy Metrics Auth0 Connector
+
+In this article, you learn how to integrate Easy Metrics Auth0 Connector with Azure Active Directory (Azure AD). This application is a bridge between Azure AD and Auth0, federating authentication to Microsoft Azure AD for our customers. When you integrate Easy Metrics Auth0 Connector with Azure AD, you can:
+
+* Control in Azure AD who has access to Easy Metrics Auth0 Connector.
+* Enable your users to be automatically signed-in to Easy Metrics Auth0 Connector with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Easy Metrics Auth0 Connector in a test environment. Easy Metrics Auth0 Connector supports only **SP** initiated single sign-on.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Easy Metrics Auth0 Connector, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Easy Metrics Auth0 Connector single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Easy Metrics Auth0 Connector application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Easy Metrics Auth0 Connector from the Azure AD gallery
+
+Add Easy Metrics Auth0 Connector from the Azure AD application gallery to configure single sign-on with Easy Metrics Auth0 Connector. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Easy Metrics Auth0 Connector** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the value:
+ `urn:auth0:easymetrics:ups-saml-sso`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://easymetrics.auth0.com/login/callback?connection=ups-saml-sso&organization=org_T8ro1Kth3Gleygg5`
+
+ c. In the **Sign on URL** textbox, type the URL:
+ `https://azureapp.gcp-easymetrics.com`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+## Configure Easy Metrics Auth0 Connector SSO
+
+To configure single sign-on on the **Easy Metrics Auth0 Connector** side, you need to send the **Certificate (PEM)** to the [Easy Metrics Auth0 Connector support team](mailto:support@easymetrics.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Easy Metrics Auth0 Connector test user
+
+In this section, you create a user called Britta Simon in Easy Metrics Auth0 Connector. Work with [Easy Metrics Auth0 Connector support team](mailto:support@easymetrics.com) to add the users in the Easy Metrics Auth0 Connector platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Easy Metrics Auth0 Connector Sign-on URL where you can initiate the login flow.
+
+* Go to Easy Metrics Auth0 Connector Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Easy Metrics Auth0 Connector tile in the My Apps, this will redirect to Easy Metrics Auth0 Connector Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Easy Metrics Auth0 Connector, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Mymobilityhq Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mymobilityhq-tutorial.md
+
+ Title: Azure Active Directory SSO integration with myMobilityHQ
+description: Learn how to configure single sign-on between Azure Active Directory and myMobilityHQ.
+ Last updated : 03/31/2023
+# Azure Active Directory SSO integration with myMobilityHQ
+
+In this article, you learn how to integrate myMobilityHQ with Azure Active Directory (Azure AD). myMobilityHQ is the secure portal that allows your company mobility managers to see a real-time dashboard of the status of their expatriate tax program. When you integrate myMobilityHQ with Azure AD, you can:
+
+* Control in Azure AD who has access to myMobilityHQ.
+* Enable your users to be automatically signed-in to myMobilityHQ with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for myMobilityHQ in a test environment. myMobilityHQ supports only **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with myMobilityHQ, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* myMobilityHQ single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the myMobilityHQ application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add myMobilityHQ from the Azure AD gallery
+
+Add myMobilityHQ from the Azure AD application gallery to configure single sign-on with myMobilityHQ. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **myMobilityHQ** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using one of the following patterns:
+
+ | **Identifier** |
+ ||
+ | `urn:auth0:prod:s<COMPANYNAME>` |
+ | `urn:auth0:stage:s<COMPANYNAME>` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://stage.vialto.auth0app.com/login/callback?connection=s<COMPANYNAME>` |
+ | `https://prod.vialto.auth0app.com/login/callback?connection=s<COMPANYNAME>` |
+ | `https://auth-stage.vialto.com/login/callback?connection=s<COMPANYNAME>` |
+ | `https://auth.vialto.com/login/callback?connection=s<COMPANYNAME>` |
+
+ c. In the **Sign on URL** textbox, type one of the following URLs:
+
+ | **Sign on URL** |
+ |-|
+ | `https://mymobilityhq-stage.vialto.com`|
+ | `https://mymobilityhq.vialto.com` |
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [myMobilityHQ support team](mailto:gbl_vialto_iam_engineering_support@vialto.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure myMobilityHQ SSO
+
+To configure single sign-on on the **myMobilityHQ** side, you need to send the **App Federation Metadata Url** to the [myMobilityHQ support team](mailto:gbl_vialto_iam_engineering_support@vialto.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create myMobilityHQ test user
+
+In this section, you create a user called Britta Simon in myMobilityHQ. Work with [myMobilityHQ support team](mailto:gbl_vialto_iam_engineering_support@vialto.com) to add the users in the myMobilityHQ platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to myMobilityHQ Sign-on URL where you can initiate the login flow.
+
+* Go to myMobilityHQ Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the myMobilityHQ tile in the My Apps, this will redirect to myMobilityHQ Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure myMobilityHQ, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Proofpoint Security Awareness Training Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proofpoint-security-awareness-training-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Proofpoint Security Awareness Training
+description: Learn how to configure single sign-on between Azure Active Directory and Proofpoint Security Awareness Training.
+ Last updated : 03/31/2023
+# Azure Active Directory SSO integration with Proofpoint Security Awareness Training
+
+In this article, you learn how to integrate Proofpoint Security Awareness Training with Azure Active Directory (Azure AD). This application allows Azure AD to act as SAML IdP for authenticating users to Proofpoint Security Awareness Training. When you integrate Proofpoint Security Awareness Training with Azure AD, you can:
+
+* Control in Azure AD who has access to Proofpoint Security Awareness Training.
+* Enable your users to be automatically signed-in to Proofpoint Security Awareness Training with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Proofpoint Security Awareness Training in a test environment. Proofpoint Security Awareness Training supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Proofpoint Security Awareness Training, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Proofpoint Security Awareness Training single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Proofpoint Security Awareness Training application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Proofpoint Security Awareness Training from the Azure AD gallery
+
+Add Proofpoint Security Awareness Training from the Azure AD application gallery to configure single sign-on with Proofpoint Security Awareness Training. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Proofpoint Security Awareness Training** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ [ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")](common/edit-urls.png#lightbox)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>/api/auth/saml/metadata`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>/api/auth/saml/SSO`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following steps:
+
+ a. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>`
+
+ b. In the **Relay State** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>`
+
+ c. In the **Logout Url** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>/api/auth/saml/SingleLogout`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign on URL, Relay State and Logout Url. Contact [Proofpoint Security Awareness Training Client support team](mailto:wst-support@proofpoint.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ [ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")](common/copy-metadataurl.png#lightbox)
+
+## Configure Proofpoint Security Awareness Training SSO
+
+To configure single sign-on on the **Proofpoint Security Awareness Training** side, you need to send the **App Federation Metadata Url** to the [Proofpoint Security Awareness Training support team](mailto:wst-support@proofpoint.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Proofpoint Security Awareness Training test user
+
+In this section, a user called B.Simon is created in Proofpoint Security Awareness Training. Proofpoint Security Awareness Training supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Proofpoint Security Awareness Training, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Proofpoint Security Awareness Training Sign-on URL where you can initiate the login flow.
+
+* Go to Proofpoint Security Awareness Training Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Proofpoint Security Awareness Training for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Proofpoint Security Awareness Training tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Proofpoint Security Awareness Training for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Proofpoint Security Awareness Training, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Seattletimessso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/seattletimessso-tutorial.md
+
+ Title: Azure Active Directory SSO integration with SeattleTimesSSO
+description: Learn how to configure single sign-on between Azure Active Directory and SeattleTimesSSO.
+ Last updated : 03/31/2023
+# Azure Active Directory SSO integration with SeattleTimesSSO
+
+In this article, you learn how to integrate SeattleTimesSSO with Azure Active Directory (Azure AD). This is the Institutional Subscription SSO for The Seattle Times. When you integrate SeattleTimesSSO with Azure AD, you can:
+
+* Control in Azure AD who has access to SeattleTimesSSO.
+* Enable your users to be automatically signed-in to SeattleTimesSSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for SeattleTimesSSO in a test environment. SeattleTimesSSO supports **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with SeattleTimesSSO, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SeattleTimesSSO single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the SeattleTimesSSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add SeattleTimesSSO from the Azure AD gallery
+
+Add SeattleTimesSSO from the Azure AD application gallery to configure single sign-on with SeattleTimesSSO. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **SeattleTimesSSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up SeattleTimesSSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure SeattleTimesSSO SSO
+
+To configure single sign-on on the **SeattleTimesSSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from the Azure portal to the [SeattleTimesSSO support team](mailto:it-hostingadmin@seattletimes.com). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create SeattleTimesSSO test user
+
+In this section, you create a user called Britta Simon in SeattleTimesSSO. Work with [SeattleTimesSSO support team](mailto:it-hostingadmin@seattletimes.com) to add the users in the SeattleTimesSSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the SeattleTimesSSO for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the SeattleTimesSSO tile in the My Apps, you should be automatically signed in to the SeattleTimesSSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure SeattleTimesSSO, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Vera Suite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vera-suite-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Vera Suite
+description: Learn how to configure single sign-on between Azure Active Directory and Vera Suite.
+ Last updated : 03/31/2023
+# Azure Active Directory SSO integration with Vera Suite
+
+In this article, you learn how to integrate Vera Suite with Azure Active Directory (Azure AD). Vera Suite helps auto dealers maintain cultures of safety, streamline operations and manage risk. Vera Suite offers dealership workforce and workplace compliance solutions for EHS, HR and F&I managers. When you integrate Vera Suite with Azure AD, you can:
+
+* Control in Azure AD who has access to Vera Suite.
+* Enable your users to be automatically signed-in to Vera Suite with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Vera Suite in a test environment. Vera Suite supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Vera Suite, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Vera Suite single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Vera Suite application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Vera Suite from the Azure AD gallery
+
+Add Vera Suite from the Azure AD application gallery to configure single sign-on with Vera Suite. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Vera Suite** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://logon.mykpa.com/identity/Saml2/`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://logon.mykpa.com/identity/Saml2/Acs`
+
+ c. In the **Sign on URL** textbox, type one of the following URLs:
+
+ | **Sign on URL** |
+ |-|
+ | `https://www.verasuite.com` |
+ | `https://logon.mykpa.com` |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Vera Suite SSO
+
+To configure single sign-on on the **Vera Suite** side, you need to send the **App Federation Metadata Url** to the [Vera Suite support team](mailto:support@kpa.io). They configure this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Vera Suite test user
+
+In this section, a user called B.Simon is created in Vera Suite. Vera Suite supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Vera Suite, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Vera Suite Sign-on URL where you can initiate the login flow.
+
+* Go to Vera Suite Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Vera Suite tile in the My Apps, this will redirect to Vera Suite Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Vera Suite, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS) description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster Previously updated : 01/19/2023 Last updated : 03/30/2023
The CSI storage driver support on AKS allows you to natively use:
- [**Azure Blob storage**](azure-blob-csi.md) can be used to mount Blob storage (or object storage) as a file system into a container or pod. Using Blob storage enables your cluster to support applications that work with large unstructured datasets like log file data, images or documents, HPC, and others. Additionally, if you ingest data into [Azure Data Lake storage](../storage/blobs/data-lake-storage-introduction.md), you can directly mount and use it in AKS without configuring another interim filesystem.

> [!IMPORTANT]
-> Starting with Kubernetes version 1.26, in-tree persistent volume types *kubernetes.io/azure-disk* and *kubernetes.io/azure-file* are deprecated and will no longer be supported. Removing these drivers following their deprecation is not planned, however you should migrate to the corresponding CSI drivers *disks.csi.azure.com* and *file.csi.azure.com*. To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-to-csi-drivers].
+> Starting with Kubernetes version 1.26, in-tree persistent volume types *kubernetes.io/azure-disk* and *kubernetes.io/azure-file* are deprecated and will no longer be supported. Removing these drivers following their deprecation is not planned, however you should migrate to the corresponding CSI drivers *disks.csi.azure.com* and *file.csi.azure.com*. To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-csi-drivers].
>
> *In-tree drivers* refers to the storage drivers that are part of the core Kubernetes code as opposed to the CSI drivers, which are plug-ins.

> [!NOTE]
+> When deleting a CSI volume, it's recommended to delete the corresponding PersistentVolumeClaim object instead of the PersistentVolume object. The external provisioner in the CSI driver reacts to the deletion of the PersistentVolumeClaim and, based on its reclamation policy, issues the DeleteVolume call against the CSI volume driver to delete the volume. The PersistentVolume object is then deleted.
+>
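+> For example, a minimal sketch of the recommended deletion order, assuming a hypothetical claim named `pvc-azuredisk`:
+>
+> ```powershell
+> # Deleting the claim lets the external provisioner issue DeleteVolume and remove the PV.
+> kubectl delete pvc pvc-azuredisk
+> ```
+>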
> Azure Disks CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure Disks CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

## Prerequisites
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
spec:
  runtimeClassName: wasmtime-slight-v1
  containers:
  - name: hello-slight
- image: ghcr.io/deislabs/containerd-wasm-shims/examples/slight-rust-hello:latest
+ image: ghcr.io/deislabs/containerd-wasm-shims/examples/slight-rust-hello:v0.3.3
command: ["/"] resources: requests:
api-management Api Management Debug Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-debug-policies.md
This article describes how to debug API Management policies using the [Azure API
* This feature is only available in the **Developer** tier of API Management. Each API Management instance supports only one concurrent debugging session.
-* This feature uses the built-in (service-level) all-access subscription for debugging. The [**Allow tracing**](api-management-howto-api-inspector.md#verify-allow-tracing-setting) setting must be enabled in this subscription.
+* This feature uses the built-in (service-level) all-access subscription (display name "Built-in all-access subscription") for debugging. The [**Allow tracing**](api-management-howto-api-inspector.md#verify-allow-tracing-setting) setting must be enabled in this subscription.
[!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)]
api-management Api Management Howto Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-autoscale.md
Previously updated : 02/02/2022 Last updated : 03/30/2023

+# Automatically scale an Azure API Management instance
-Azure API Management service instance can scale automatically based on a set of rules. This behavior can be enabled and configured through [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale) and is supported only in **Standard** and **Premium** tiers of the Azure API Management service.
+An Azure API Management service instance can scale automatically based on a set of rules. This behavior can be enabled and configured through [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale) and is currently supported only in the **Standard** and **Premium** tiers of the Azure API Management service.
The article walks through the process of configuring autoscale and suggests optimal configuration of autoscale rules.

> [!NOTE]
-> API Management service in the **Consumption** tier scales automatically based on the traffic - without any additional configuration needed.
+> * In service tiers that support multiple scale units, you can also [manually scale](upgrade-and-scale.md) your API Management instance.
+> * An API Management service in the **Consumption** tier scales automatically based on the traffic - without any additional configuration needed.
## Prerequisites
To follow the steps from this article, you must:
+ Have an active Azure subscription.
+ Have an Azure API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
-+ Understand the concept of [Capacity of an Azure API Management instance](api-management-capacity.md).
-+ Understand [manual scaling process of an Azure API Management instance](upgrade-and-scale.md), including cost consequences.
++ Understand the concept of [capacity](api-management-capacity.md) of an API Management instance.
++ Understand [manual scaling](upgrade-and-scale.md) of an API Management instance, including cost consequences.

[!INCLUDE [premium-standard.md](../../includes/api-management-availability-premium-standard.md)]
To follow the steps from this article, you must:
Certain limitations and consequences of scaling decisions need to be considered before configuring autoscale behavior.
-+ The pricing tier of your API Management instance determines the [maximum number of units](upgrade-and-scale.md#upgrade-and-scale) you may scale to. The **Standard tier** can be scaled to 4 units. You can add any number of units to the **Premium** tier.
-+ The scaling process will take at least 20 minutes.
++ The pricing tier of your API Management instance determines the [maximum number of units](upgrade-and-scale.md#upgrade-and-scale) you may scale to. For example, the **Standard tier** can be scaled to 4 units. You can add any number of units to the **Premium** tier.
++ The scaling process takes at least 20 minutes.
+ If the service is locked by another operation, the scaling request will fail and retry automatically.
+ If your service instance is deployed in multiple regions (locations), only units in the **Primary location** can be autoscaled with Azure Monitor autoscale. Units in other locations can only be scaled manually.
+ If your service instance is configured with [availability zones](zone-redundancy.md) in the **Primary location**, be aware of the number of zones when configuring autoscaling. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones.
-## Enable and configure autoscale for Azure API Management service
+## Enable and configure autoscale for an API Management instance
-Follow the steps below to configure autoscale for an Azure API Management service:
+Follow these steps to configure autoscale for an Azure API Management service:
-1. Navigate to **Monitor** instance in the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your API Management instance.
+1. In the left menu, select **Scale out (auto-scale)**, and then select **Custom autoscale**.
- ![Azure Monitor](media/api-management-howto-autoscale/01.png)
+ :::image type="content" source="media/api-management-howto-autoscale/01.png" alt-text="Screenshot of scale-out options in the portal.":::
-2. Select **Autoscale** from the menu on the left.
+1. In the **Default** scale condition, select **Scale based on a metric**, and then select **Add a rule**.
- ![Azure Monitor autoscale resource](media/api-management-howto-autoscale/02.png)
+ :::image type="content" source="media/api-management-howto-autoscale/04.png" alt-text="Screenshot of configuring the default scale condition in the portal.":::
-3. Locate your Azure API Management service based on the filters in dropdown menus.
-4. Select the desired Azure API Management service instance.
-5. In the newly opened section, click the **Enable autoscale** button.
+1. Define a new scale-out rule.
- ![Azure Monitor autoscale enable](media/api-management-howto-autoscale/03.png)
-
-6. In the **Rules** section, click **+ Add a rule**.
-
- ![Azure Monitor autoscale add rule](media/api-management-howto-autoscale/04.png)
-
-7. Define a new scale out rule.
-
- For example, a scale out rule could trigger an addition of an Azure API Management unit, when the average capacity metric over the last 30 minutes exceeds 80%. The table below provides configuration for such a rule.
+ For example, a scale-out rule could trigger the addition of 1 API Management unit when the average capacity metric over the previous 30 minutes exceeds 80%. The following table provides configuration for such a rule.
| Parameter | Value | Notes |
|--|--|--|
- | Metric source | Current resource | Define the rule based on the current Azure API Management resource metrics. |
+ | Metric source | Current resource | Define the rule based on the current API Management resource metrics. |
| *Criteria* | | |
- | Time aggregation | Average | |
- | Metric name | Capacity | Capacity metric is an Azure API Management metric reflecting usage of resources of an Azure API Management instance. |
- | Time grain statistic | Average | |
+ | Metric name | Capacity | Capacity metric is an API Management metric reflecting usage of resources by an Azure API Management instance. |
+ | Location | Select the primary location of the API Management instance | |
| Operator | Greater than | |
- | Threshold | 80% | The threshold for the averaged capacity metric. |
- | Duration (in minutes) | 30 | The timespan to average the capacity metric over is specific to usage patterns. The longer the time period is, the smoother the reaction will be - intermittent spikes will have less effect on the scale-out decision. However, it will also delay the scale-out trigger. |
- | *Action* | | |
+ | Metric threshold | 80% | The threshold for the averaged capacity metric. |
+ | Duration (in minutes) | 30 | The timespan to average the capacity metric over is specific to usage patterns. The longer the duration, the smoother the reaction will be. Intermittent spikes will have less effect on the scale-out decision. However, it will also delay the scale-out trigger. |
+ | Time grain statistic | Average | |
| *Action* | | |
| Operation | Increase count by | |
| Instance count | 1 | Scale out the Azure API Management instance by 1 unit. |
- | Cool down (minutes) | 60 | It takes at least 20 minutes for the Azure API Management service to scale out. In most cases, the cool down period of 60 minutes prevents from triggering many scale-outs. |
-
-8. Click **Add** to save the rule.
+ | Cool down (minutes) | 60 | It takes at least 20 minutes for the API Management service to scale out. In most cases, a cool down period of 60 minutes prevents triggering too many scale-outs. |
- ![Azure Monitor scale out rule](media/api-management-howto-autoscale/05.png)
+1. Select **Add** to save the rule.
+1. To add another rule, select **Add a rule**.
-9. Click again on **+ Add a rule**.
+ This time, a scale-in rule needs to be defined. It will ensure resources aren't being wasted when the usage of APIs decreases.
- This time, a scale in rule needs to be defined. It will ensure resources are not being wasted, when the usage of APIs decreases.
+1. Define a new scale-in rule.
-10. Define a new scale in rule.
-
- For example, a scale in rule could trigger a removal of an Azure API Management unit, when the average capacity metric over the last 30 minutes has been lower than 35%. The table below provides configuration for such a rule.
+ For example, a scale-in rule could trigger the removal of 1 API Management unit when the average capacity metric over the previous 30 minutes has been lower than 35%. The following table provides configuration for such a rule.
| Parameter | Value | Notes |
|--|--|--|
- | Metric source | Current resource | Define the rule based on the current Azure API Management resource metrics. |
+ | Metric source | Current resource | Define the rule based on the current API Management resource metrics. |
| *Criteria* | | |
| Time aggregation | Average | |
- | Metric name | Capacity | Same metric as the one used for the scale out rule. |
- | Time grain statistic | Average | |
+ | Metric name | Capacity | Same metric as the one used for the scale-out rule. |
+ | Location | Select the primary location of the API Management instance | |
| Operator | Less than | |
- | Threshold | 35% | Similarly to the scale out rule, this value heavily depends on the usage patterns of the Azure API Management. |
- | Duration (in minutes) | 30 | Same value as the one used for the scale out rule. |
+ | Threshold | 35% | As with the scale-out rule, this value heavily depends on the usage patterns of the API Management instance. |
+ | Duration (in minutes) | 30 | Same value as the one used for the scale-out rule. |
+ | Time grain statistic | Average | |
| *Action* | | |
- | Operation | Decrease count by | Opposite to what was used for the scale out rule. |
- | Instance count | 1 | Same value as the one used for the scale out rule. |
- | Cool down (minutes) | 90 | Scale in should be more conservative than a scale out, so the cool down period should be longer. |
-
-11. Click **Add** to save the rule.
-
- ![Azure Monitor scale in rule](media/api-management-howto-autoscale/06.png)
-
-12. Set the **maximum** number of Azure API Management units.
+ | Operation | Decrease count by | Opposite to what was used for the scale-out rule. |
+ | Instance count | 1 | Same value as the one used for the scale-out rule. |
+ | Cool down (minutes) | 90 | Scale-in should be more conservative than a scale-out, so the cool down period should be longer. |
- > [!NOTE]
- > Azure API Management has a limit of units an instance can scale out to. The limit depends on a service tier.
+1. Select **Add** to save the rule.
- ![Screenshot that highlights where to set the maximum number of Azure API Management units.](media/api-management-howto-autoscale/07.png)
+1. In **Instance limits**, select the **Minimum**, **Maximum**, and **Default** number of API Management units.
+ > [!NOTE]
+> API Management limits the number of units that an instance can scale out to. The limit depends on the service tier.
+
+ :::image type="content" source="media/api-management-howto-autoscale/07.png" alt-text="Screenshot showing how to set instance limits in the portal.":::
-13. Click **Save**. Your autoscale has been configured.
+1. Select **Save**. Your autoscale has been configured.
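
If you prefer to script the same configuration, the following is a minimal sketch using the older Az.Monitor PowerShell cmdlets (newer Az.Monitor releases supersede them). The resource group and service names are placeholders, and the thresholds mirror the example rules above.

```powershell
# Minimal sketch; resource names are placeholders. Assumes the Az.ApiManagement
# and (older) Az.Monitor modules are installed.
$apim = Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "apim-hello-world"

# Scale-out rule: add 1 unit when average Capacity over 30 minutes exceeds 80%.
$scaleOut = New-AzAutoscaleRule -MetricName "Capacity" -MetricResourceId $apim.Id `
    -Operator GreaterThan -MetricStatistic Average -Threshold 80 `
    -TimeGrain "00:01:00" -TimeWindow "00:30:00" `
    -ScaleActionDirection Increase -ScaleActionValue "1" -ScaleActionCooldown "01:00:00"

# Scale-in rule: remove 1 unit when average Capacity over 30 minutes drops below 35%.
$scaleIn = New-AzAutoscaleRule -MetricName "Capacity" -MetricResourceId $apim.Id `
    -Operator LessThan -MetricStatistic Average -Threshold 35 `
    -TimeGrain "00:01:00" -TimeWindow "00:30:00" `
    -ScaleActionDirection Decrease -ScaleActionValue "1" -ScaleActionCooldown "01:30:00"

# Profile with instance limits: minimum 1, maximum 4, default 1 unit.
$autoscaleProfile = New-AzAutoscaleProfile -Name "apim-autoscale-profile" -DefaultCapacity 1 `
    -MaximumCapacity 4 -MinimumCapacity 1 -Rule $scaleOut, $scaleIn

# Attach the autoscale setting to the API Management instance.
Add-AzAutoscaleSetting -Location $apim.Location -Name "apim-autoscale" `
    -ResourceGroupName "myResourceGroup" -TargetResourceId $apim.Id -AutoscaleProfile $autoscaleProfile
```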
## Next steps
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md
Title: How to log events to Azure Event Hubs in Azure API Management | Microsoft Docs description: Learn how to log events to Azure Event Hubs in Azure API Management. Event Hubs is a highly scalable data ingress service. - -- Previously updated : 01/29/2018+ Last updated : 03/31/2023 # How to log events to Azure Event Hubs in Azure API Management
-Azure Event Hubs is a highly scalable data ingress service that can ingest millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Event Hubs acts as the "front door" for an event pipeline, and once data is collected into an event hub, it can be transformed and stored using any real-time analytics provider or batching/storage adapters. Event Hubs decouples the production of a stream of events from the consumption of those events, so that event consumers can access the events on their own schedule.
This article describes how to log API Management events using Azure Event Hubs.
-## Create an Azure Event Hub
+Azure Event Hubs is a highly scalable data ingress service that can ingest millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Event Hubs acts as the "front door" for an event pipeline, and once data is collected into an event hub, it can be transformed and stored using any real-time analytics provider or batching/storage adapters. Event Hubs decouples the production of a stream of events from the consumption of those events, so that event consumers can access the events on their own schedule.
+
+## Prerequisites
+
+* An API Management service instance. If you don't have one, see [Create an API Management service instance](get-started-create-service-instance.md).
+* An Azure Event Hubs namespace and event hub. For detailed steps, see [Create an Event Hubs namespace and an event hub using the Azure portal](../event-hubs/event-hubs-create.md).
+ > [!NOTE]
+ > The Event Hubs resource **can be** in a different subscription or even a different tenant than the API Management resource.
+
+## Configure access to the event hub
+
+To log events to the event hub, you need to configure credentials for access from API Management. API Management supports either of the following two access mechanisms:
+
+* An Event Hubs connection string
+* A managed identity for your API Management instance.
+
+### Option 1: Configure Event Hubs connection string
-For detailed steps on how to create an event hub and get connection strings that you need to send and receive events to and from the Event Hub, see [Create an Event Hubs namespace and an event hub using the Azure portal](../event-hubs/event-hubs-create.md).
+To create an Event Hubs connection string, see [Get an Event Hubs connection string](../event-hubs/event-hubs-get-connection-string.md).
+
+* You can use a connection string for the Event Hubs namespace or for the specific event hub you use for logging from API Management.
+* The shared access policy for the connection string must enable at least **Send** permissions.
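
As one hedged possibility, you can read a namespace-level connection string with PowerShell. The resource group, namespace, and authorization rule names below are placeholders, and the rule is assumed to already grant at least **Send**.

```powershell
# Sketch: read the primary connection string for an existing shared access policy.
# Namespace, resource group, and rule names are placeholders.
$keys = Get-AzEventHubKey -ResourceGroupName "myResourceGroup" `
    -NamespaceName "<EventHubsNamespace>" -AuthorizationRuleName "<SendPolicyName>"
$keys.PrimaryConnectionString
```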
+
+### Option 2: Configure API Management managed identity
> [!NOTE]
-> The Event Hub resource **can be** in a different subscription or even a different tenant than the API Management resource
+> Using an API Management managed identity for logging events to an event hub is supported in API Management REST API version `2022-04-01-preview` or later.
+
+1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md) in your API Management instance.
+
+ * If you enable a user-assigned managed identity, take note of the identity's **Client ID**.
+
+1. Assign the identity the **Azure Event Hubs Data sender** role, scoped to the Event Hubs namespace or to the event hub used for logging. To assign the role, use the [Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) or other Azure tools.
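
For a system-assigned identity, the role assignment could be scripted roughly as follows. This is a sketch; the subscription ID, resource group, and namespace name are placeholders.

```powershell
# Sketch: grant the API Management system-assigned identity send rights on the
# Event Hubs namespace. Bracketed values are placeholders.
$apim = Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "apim-hello-world"

New-AzRoleAssignment -ObjectId $apim.Identity.PrincipalId `
    -RoleDefinitionName "Azure Event Hubs Data Sender" `
    -Scope "/subscriptions/<SubscriptionID>/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/<EventHubsNamespace>"
```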
## Create an API Management logger
-Now that you have an Event Hub, the next step is to configure a [Logger](/rest/api/apimanagement/current-ga/logger) in your API Management service so that it can log events to the Event Hub.
-
-API Management loggers are configured using the [API Management REST API](/rest/api/apimanagement/ApiManagementREST/API-Management-REST). For detailed request examples, see [how to create Loggers](/rest/api/apimanagement/current-ga/logger/create-or-update).
-
-## Configure log-to-eventhub policies
-
-Once your logger is configured in API Management, you can configure your log-to-eventhub policy to log the desired events. The log-to-eventhub policy can be used in either the inbound policy section or the outbound policy section.
-
-1. Browse to your APIM instance.
-2. Select the API tab.
-3. Select the API to which you want to add the policy. In this example, we're adding a policy to the **Echo API** in the **Unlimited** product.
-4. Select **All operations**.
-5. On the top of the screen, select the Design tab.
-6. In the Inbound or Outbound processing window, click the triangle (next to the pencil).
-7. Select the Code editor. For more information, see [How to set or edit policies](set-edit-policies.md).
-8. Position your cursor in the `inbound` or `outbound` policy section.
-9. In the window on the right, select **Advanced policies** > **Log to EventHub**. This inserts the `log-to-eventhub` policy statement template.
-
-```xml
-<log-to-eventhub logger-id="logger-id">
- @{
- return new JObject(
- new JProperty("EventTime", DateTime.UtcNow.ToString()),
- new JProperty("ServiceName", context.Deployment.ServiceName),
- new JProperty("RequestId", context.RequestId),
- new JProperty("RequestIp", context.Request.IpAddress),
- new JProperty("OperationName", context.Operation.Name)
- ).ToString();
+The next step is to configure a [logger](/rest/api/apimanagement/current-ga/logger) in your API Management service so that it can log events to the event hub.
+
+Create and manage API Management loggers by using the [API Management REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) directly or by using tools including [Azure PowerShell](/powershell/module/az.apimanagement/new-azapimanagementlogger), a Bicep template, or an Azure Resource Manager template.
+
+### Logger with connection string credentials
+
+For prerequisites, see [Configure Event Hubs connection string](#option-1-configure-event-hubs-connection-string).
+
+#### [PowerShell](#tab/PowerShell)
+
+The following example uses the [New-AzApiManagementLogger](/powershell/module/az.apimanagement/new-azapimanagementlogger) cmdlet to create a logger to an event hub by configuring a connection string.
+
+```powershell
+# API Management service-specific details
+$apimServiceName = "apim-hello-world"
+$resourceGroupName = "myResourceGroup"
+
+# Create logger
+$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
+New-AzApiManagementLogger -Context $context -LoggerId "ContosoLogger1" -Name "ApimEventHub" -ConnectionString "Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>" -Description "Event hub logger with connection string"
+```
+
+#### [Bicep](#tab/bicep)
+
+Include a snippet similar to the following in your Bicep template.
+
+```Bicep
+resource ehLoggerWithConnectionString 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+ name: 'ContosoLogger1'
+ parent: apimService // assumes apimService is an existing Microsoft.ApiManagement/service resource declared elsewhere in the template; Bicep requires a symbolic reference here, not a string
+ properties: {
+ loggerType: 'azureEventHub'
+ description: 'Event hub logger with connection string'
+ credentials: {
+ connectionString: 'Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>'
+ name: 'ApimEventHub'
}
-</log-to-eventhub>
+ }
+}
```
-Replace `logger-id` with the value you used for `{loggerId}` in the request URL to create the logger in the previous step.
-You can use any expression that returns a string as the value for the `log-to-eventhub` element. In this example, a string in JSON format containing the date and time, service name, request ID, request IP address, and operation name is logged.
+#### [ARM](#tab/arm)
+
+Include a JSON snippet similar to the following in your Azure Resource Manager template.
+
+```JSON
+{
+ "type": "Microsoft.ApiManagement/service/loggers",
+ "apiVersion": "2022-04-01-preview",
+ "name": "ContosoLogger1",
+ "properties": {
+ "loggerType": "azureEventHub",
+ "description": "Event hub logger with connection string",
+ "resourceId": "<EventHubsResourceID>"
+ "credentials": {
+ "connectionString": "Endpoint=sb://<EventHubsNamespace>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>",
+ "name": "ApimEventHub"
+ }
+ }
+}
+```
++
+### Logger with system-assigned managed identity credentials
+
+For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity).
+
+#### [PowerShell](#tab/PowerShell)
-Click **Save** to save the updated policy configuration. As soon as it is saved the policy is active and events are logged to the designated Event Hub.
+Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with system-assigned managed identity credentials.
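
One way to call the REST API from PowerShell is `Invoke-AzRestMethod`, as in this hedged sketch; the subscription, resource group, service, and Event Hubs values are placeholders, and the payload mirrors the ARM properties shown in the other tabs.

```powershell
# Sketch: create the logger by calling the REST API directly.
# All bracketed values are placeholders.
$payload = @{
    properties = @{
        loggerType  = "azureEventHub"
        description = "Event hub logger with system-assigned managed identity"
        credentials = @{
            endpointAddress  = "https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>"
            identityClientId = "systemAssigned"
            name             = "ApimEventHub"
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PUT -Payload $payload `
    -Path "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.ApiManagement/service/<ServiceName>/loggers/ContosoLogger1?api-version=2022-04-01-preview"
```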
+
+#### [Bicep](#tab/bicep)
+
+Include a snippet similar to the following in your Bicep template.
+
+```Bicep
+resource ehLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+ name: 'ContosoLogger1'
+ parent: apimService // assumes apimService is an existing Microsoft.ApiManagement/service resource declared elsewhere in the template; Bicep requires a symbolic reference here, not a string
+ properties: {
+ loggerType: 'azureEventHub'
+ description: 'Event hub logger with system-assigned managed identity'
+ credentials: {
+ endpointAddress: 'https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
+ identityClientId: 'systemAssigned'
+ name: 'ApimEventHub'
+ }
+ }
+}
+```
+
+#### [ARM](#tab/arm)
+
+Include a JSON snippet similar to the following in your Azure Resource Manager template.
+
+```JSON
+{
+ "type": "Microsoft.ApiManagement/service/loggers",
+ "apiVersion": "2022-04-01-preview",
+ "name": "ContosoLogger1",
+ "properties": {
+ "loggerType": "azureEventHub",
+ "description": "Event hub logger with system-assigned managed identity",
+ "resourceId": "<EventHubsResourceID>",
+ "credentials": {
+ "endpointAddress": "https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "identityClientId": "SystemAssigned",
+ "name": "ApimEventHub"
+ }
+ }
+}
+```
+
+### Logger with user-assigned managed identity credentials
+
+For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity).
+
+#### [PowerShell](#tab/PowerShell)
+
+Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with user-assigned managed identity credentials.
+
+#### [Bicep](#tab/bicep)
+
+Include a snippet similar to the following in your Bicep template.
+
+```Bicep
+resource ehLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+ name: 'ContosoLogger1'
+ parent: apimService // assumes apimService is an existing Microsoft.ApiManagement/service resource declared elsewhere in the template; Bicep requires a symbolic reference here, not a string
+ properties: {
+ loggerType: 'azureEventHub'
+ description: 'Event hub logger with user-assigned managed identity'
+ credentials: {
+ endpointAddress: 'https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
+ identityClientId: '<ClientID>'
+ name: 'ApimEventHub'
+ }
+ }
+}
+```
+
+#### [ARM](#tab/arm)
+
+Include a JSON snippet similar to the following in your Azure Resource Manager template.
+
+```JSON
+{
+ "type": "Microsoft.ApiManagement/service/loggers",
+ "apiVersion": "2022-04-01-preview",
+ "name": "ContosoLogger1",
+ "properties": {
+ "loggerType": "azureEventHub",
+ "description": "Event hub logger with user-assigned managed identity",
+ "resourceId": "<EventHubsResourceID>",
+ "credentials": {
+ "endpointAddress": "https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "identityClientId": "<ClientID>",
+ "name": "ApimEventHub"
+ }
+ }
+}
+```
++
+## Configure log-to-eventhub policy
+
+Once your logger is configured in API Management, you can configure your [log-to-eventhub](log-to-eventhub-policy.md) policy to log the desired events. For example, use the `log-to-eventhub` policy in the inbound policy section to log requests, or in the outbound policy section to log responses.
+
+1. Browse to your API Management instance.
+1. Select **APIs**, and then select the API to which you want to add the policy. In this example, we're adding a policy to the **Echo API** in the **Unlimited** product.
+1. Select **All operations**.
+1. On the top of the screen, select the **Design** tab.
+1. In the Inbound processing or Outbound processing window, select the `</>` (code editor) icon. For more information, see [How to set or edit policies](set-edit-policies.md).
+1. Position your cursor in the `inbound` or `outbound` policy section.
+1. In the window on the right, select **Advanced policies** > **Log to EventHub**. This inserts the `log-to-eventhub` policy statement template.
+
+ ```xml
+ <log-to-eventhub logger-id="logger-id">
+ @{
+ return new JObject(
+ new JProperty("EventTime", DateTime.UtcNow.ToString()),
+ new JProperty("ServiceName", context.Deployment.ServiceName),
+ new JProperty("RequestId", context.RequestId),
+ new JProperty("RequestIp", context.Request.IpAddress),
+ new JProperty("OperationName", context.Operation.Name)
+ ).ToString();
+ }
+ </log-to-eventhub>
+ ```
+
+ 1. Replace `logger-id` with the name of the logger that you created in the previous step.
+ 1. You can use any expression that returns a string as the value for the `log-to-eventhub` element. In this example, a string in JSON format containing the date and time, service name, request ID, request IP address, and operation name is logged.
+
+1. Select **Save** to save the updated policy configuration. As soon as it's saved, the policy is active and events are logged to the designated event hub.
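
If you script policy deployment instead of using the portal editor, a minimal sketch with `Set-AzApiManagementPolicy` might look like the following. The context values and API ID are placeholders, and the policy body is an abbreviated variant of the template above.

```powershell
# Sketch: apply an API-scope policy containing log-to-eventhub from PowerShell.
# "echo-api" and the service names are placeholders.
$context = New-AzApiManagementContext -ResourceGroupName "myResourceGroup" -ServiceName "apim-hello-world"

$policy = @'
<policies>
  <inbound>
    <base />
    <log-to-eventhub logger-id="ContosoLogger1">
      @{
          return new JObject(
              new JProperty("EventTime", DateTime.UtcNow.ToString()),
              new JProperty("OperationName", context.Operation.Name)
          ).ToString();
      }
    </log-to-eventhub>
  </inbound>
  <backend><base /></backend>
  <outbound><base /></outbound>
  <on-error><base /></on-error>
</policies>
'@

# The rawxml format avoids having to XML-escape the policy expression.
Set-AzApiManagementPolicy -Context $context -ApiId "echo-api" -Format "rawxml" -Policy $policy
```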
> [!NOTE]
-> The maximum supported message size that can be sent to an event hub from this API Management policy is 200 kilobytes (KB). If a message that is sent to an event hub is larger than 200 KB, it will be automatically truncated, and the truncated message will be transferred to event hubs.
+> The maximum supported message size that can be sent to an event hub from this API Management policy is 200 kilobytes (KB). If a message that is sent to an event hub is larger than 200 KB, it will be automatically truncated, and the truncated message will be transferred to the event hub.
## Preview the log in Event Hubs by using Azure Stream Analytics
You can preview the log in Event Hubs by using [Azure Stream Analytics queries](
1. In the Azure portal, browse to the event hub that the logger sends events to.
2. Under **Features**, select the **Process data** tab.
-3. On the **Enable real time insights from events** card, select **Explore**.
+3. On the **Enable real time insights from events** card, select **Start**.
4. You should be able to preview the log on the **Input preview** tab. If the data shown isn't current, select **Refresh** to see the latest events.

## Next steps
You can preview the log in Event Hubs by using [Azure Stream Analytics queries](
* [Receive messages with EventProcessorHost](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
* [Event Hubs programming guide](../event-hubs/event-hubs-programming-guide.md)
* Learn more about API Management and Event Hubs integration
- * [Logger entity reference](/rest/api/apimanagement/current-ga/logger)
- * [log-to-eventhub policy reference](log-to-eventhub-policy.md)
- * [Monitor your APIs with Azure API Management, Event Hubs, and Moesif](api-management-log-to-eventhub-sample.md)
+ * [Logger entity reference](/rest/api/apimanagement/current-preview/logger)
+ * [log-to-eventhub](log-to-eventhub-policy.md) policy reference
* Learn more about [integration with Azure Application Insights](api-management-howto-app-insights.md)-
-[publisher-portal]: ./media/api-management-howto-log-event-hubs/publisher-portal.png
-[create-event-hub]: ./media/api-management-howto-log-event-hubs/create-event-hub.png
-[event-hub-connection-string]: ./media/api-management-howto-log-event-hubs/event-hub-connection-string.png
-[event-hub-dashboard]: ./media/api-management-howto-log-event-hubs/event-hub-dashboard.png
-[receiving-policy]: ./media/api-management-howto-log-event-hubs/receiving-policy.png
-[sending-policy]: ./media/api-management-howto-log-event-hubs/sending-policy.png
-[event-hub-policy]: ./media/api-management-howto-log-event-hubs/event-hub-policy.png
-[add-policy]: ./media/api-management-howto-log-event-hubs/add-policy.png
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
Title: Use managed identities in Azure API Management | Microsoft Docs
-description: Learn how to create system-assigned and user-assigned identities in API Management by using the Azure portal, PowerShell, and a Resource Manager template.
+description: Learn how to create system-assigned and user-assigned identities in API Management by using the Azure portal, PowerShell, and a Resource Manager template. Learn about supported scenarios with managed identities.
documentationcenter: '' Previously updated : 04/05/2022 Last updated : 03/31/2023
API Management is a trusted Microsoft service to the following resources. This a
|Azure Service Bus | [Trusted-access-to-azure-service-bus](../service-bus-messaging/service-bus-ip-filtering.md#trusted-microsoft-services)|
|Azure Event Hubs | [Trusted-access-to-azure-event-hub](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)|
+### Log events to an event hub
+
+You can configure and use a system-assigned managed identity to access an event hub for logging events from an API Management instance. For more information, see [How to log events to Azure Event Hubs in Azure API Management](api-management-howto-log-event-hubs.md).
+ ## Create a user-assigned managed identity > [!NOTE]
You can use a user-assigned managed identity to access Azure Key Vault to store
You can use the user-assigned identity to authenticate to a backend service through the [authentication-managed-identity](authentication-managed-identity-policy.md) policy.
+### Log events to an event hub
+
+You can configure and use a user-assigned managed identity to access an event hub for logging events from an API Management instance. For more information, see [How to log events to Azure Event Hubs in Azure API Management](api-management-howto-log-event-hubs.md).
+
## <a name="remove"></a>Remove an identity

You can remove a system-assigned identity by disabling the feature through the portal or the Azure Resource Manager template in the same way that it was created. User-assigned identities can be removed individually. To remove all identities, set the identity type to `"None"`.
api-management Upgrade And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md
Previously updated : 09/14/2022 Last updated : 03/30/2023 + # Upgrade and scale an Azure API Management instance
Customers can scale an Azure API Management instance in a dedicated service tier
[!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]

> [!NOTE]
-> API Management instances in the **Consumption** tier scale automatically based on the traffic. Currently, you cannot upgrade from or downgrade to the Consumption tier.
+> * In the **Standard** and **Premium** tiers of the API Management service, you can configure an instance to [scale automatically](api-management-howto-autoscale.md) based on a set of rules.
+> * API Management instances in the **Consumption** tier scale automatically based on the traffic. Currently, you cannot upgrade from or downgrade to the Consumption tier.
The throughput and price of each unit depend on the [service tier](api-management-features.md) in which the unit exists. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your API Management instance doesn't allow adding more units, you need to upgrade to a higher-level tier.
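
If you scale programmatically rather than in the portal, one hedged possibility with the Az.ApiManagement module is shown below; the names and the unit count are placeholders.

```powershell
# Sketch: set an API Management instance to 2 units in its primary location.
$apim = Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "apim-hello-world"
$apim.Capacity = 2
Set-AzApiManagement -InputObject $apim -PassThru
```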
You can choose between four dedicated tiers: **Developer**, **Basic**, **Standa
1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/). 1. Select **Locations** from the menu. 1. Select the row with the location you want to scale.
-1. Specify the new number of **Units** - use the slider if available, or type the number.
+1. Specify the new number of **Units** - use the slider if available, or select or type the number.
1. Select **Apply**.

> [!NOTE]
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
> [!NOTE]
> To validate a JWT that was provided by another identity provider, API Management also provides the generic [`validate-jwt`](validate-jwt-policy.md) policy.

[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
| Attribute | Description | Required | Default |
| - | - | -- | - |
-| tenant-id | Tenant ID or URL of the Azure Active Directory service. | Yes | N/A |
-| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | `Authorization` |
-| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| token-value | Expression returning a string containing the token. You must not return `Bearer` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. | No | 401 |
-| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
-| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
+| tenant-id | Tenant ID or URL of the Azure Active Directory service. Policy expressions are allowed. | Yes | N/A |
+| header-name | The name of the HTTP header holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | `Authorization` |
+| query-parameter-name | The name of the query parameter holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer` as part of the token value. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. Policy expressions are allowed. | No | 401 |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. Policy expressions are allowed. | No | Default error message depends on validation issue, for example "JWT not present." |
+| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation. Policy expressions aren't allowed. | No | N/A |
++
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
| Attribute | Description | Required | Default |
| - | - | -- | - |
-| name | Name of the claim as it is expected to appear in the token. | Yes | N/A |
-| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
-| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
+| name | Name of the claim as it is expected to appear in the token. Policy expressions are allowed. | Yes | N/A |
+| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed.<br/><br/>Policy expressions are allowed. | No | all |
+| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. Policy expressions are allowed. | No | N/A |
## Usage
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
### Usage notes
-* This policy can only be used with an Azure Active Directory tenant in the global Azure cloud. It doesn't support tenants configured in regional clouds or Azure clouds with restricted access.
-* Currently, this policy can only validate "v1" tokens from Azure Active Directory. Support for "v2" tokens will be added in a future release.
* You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Azure AD authentication by applying the `validate-azure-ad-token` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.

## Examples
For more details on optional claims, read [Provide optional claims to your app](
```

## Related policies

* [API Management access restriction policies](api-management-access-restriction-policies.md)

[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
app-service Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-disaster-recovery.md
Title: Recover from region-wide failure
description: Learn how Azure App Service helps you maintain business continuity and disaster recovery (BCDR) capabilities. Recover your app from a region-wide failure in Azure. Previously updated : 06/09/2020 Last updated : 03/31/2023 #Customer intent: As an Azure service administrator, I want to recover my App Service app from a region-wide failure in Azure. - # Move an App Service app to another region
+> [!IMPORTANT]
+> **Beginning 31 March 2025, we'll no longer place Azure App Service web applications in disaster recovery mode in the event of a disaster in an Azure region.** We strongly encourage you to implement [commonly used disaster recovery techniques](./overview-disaster-recovery.md) to prevent loss of functionality or data for your web apps if there's a regional disaster.
This article describes how to bring App Service resources back online in a different Azure region during a disaster that impacts an entire Azure region. When a disaster brings an entire Azure region offline, all App Service apps hosted in that region are placed in disaster recovery mode. Features are available to help you restore the app to a different region or recover files from the impacted app. App Service resources are region-specific and can't be moved across regions. You must restore the app to a new app in a different region, and then create mirroring configurations or resources for the new app.

## Prerequisites

- None. [Restoring an automatic backup](manage-backup.md#restore-a-backup) usually requires **Standard** or **Premium** tier, but in disaster recovery mode, it's automatically enabled for your impacted app, regardless of which tier the impacted app is in.
If you only want to recover the files from the impacted app without restoring it
![Screenshot of a FileZilla file hierarchy. The wwwroot folder is highlighted, and its shortcut menu is visible. In that menu, Download is highlighted.](media/manage-disaster-recovery/download-content.png) ## Next steps
-[Backup and restore](manage-backup.md)
+[Backup and restore](manage-backup.md)
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Learn more about migrating from Bing Maps to Azure Maps.
<!--End Links-->
[road tiles]: /rest/api/maps/render/getmaptile
[satellite tiles]: /rest/api/maps/render/getmapimagerytile
-[Cesium]: https://www.cesium.com/?azure-portal=true
-<!--[Cesium code samples]: https://samples.azuremaps.com/?search=Cesium&azure-portal=true-->
+[Cesium]: https://www.cesium.com/
+<!--[Cesium code samples]: https://samples.azuremaps.com/?search=Cesium-->
[Cesium plugin]: /samples/azure-samples/azure-maps-cesium/azure-maps-cesium-js-plugin
-[Leaflet]: https://leafletjs.com/?azure-portal=true
-[Leaflet code samples]: https://samples.azuremaps.com/?search=leaflet&azure-portal=true
+[Leaflet]: https://leafletjs.com/
+[Leaflet code samples]: https://samples.azuremaps.com/?search=leaflet
[Leaflet plugin]: /samples/azure-samples/azure-maps-leaflet/azure-maps-leaflet-plugin
-[OpenLayers]: https://openlayers.org/?azure-portal=true
-<!--[OpenLayers code samples]: https://samples.azuremaps.com/?search=openlayers&azure-portal=true-->
-[OpenLayers plugin]: /samples/azure-samples/azure-maps-OpenLayers/azure-maps-OpenLayers-plugin?azure-portal=true
+[OpenLayers]: https://openlayers.org/
+<!--[OpenLayers code samples]: https://samples.azuremaps.com/?search=openlayers-->
+[OpenLayers plugin]: /samples/azure-samples/azure-maps-OpenLayers/azure-maps-OpenLayers-plugin
<!-- If developing using a JavaScript framework, one of the following open-source projects may be useful -->
-[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps?azure-portal=true
-[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components?azure-portal=true
-[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps?azure-portal=true
-[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps?azure-portal=true
+[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
+[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components
+[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
+[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
<!-- Key features support -->
-[Contour layer code samples]: https://samples.azuremaps.com/?search=contour&azure-portal=true
-[Gridded Data Source module]: https://github.com/Azure-Samples/azure-maps-gridded-data-source?azure-portal=true
-[Animation module]: https://github.com/Azure-Samples/azure-maps-animations?azure-portal=true
+[Contour layer code samples]: https://samples.azuremaps.com/?search=contour
+[Gridded Data Source module]: https://github.com/Azure-Samples/azure-maps-gridded-data-source
+[Animation module]: https://github.com/Azure-Samples/azure-maps-animations
[Spatial IO module]: how-to-use-spatial-io-module.md
[open-source modules for the web SDK]: open-source-projects.md#open-web-sdk-modules
Learn more about migrating from Bing Maps to Azure Maps.
[Polygon layer options]: /javascript/api/azure-maps-control/atlas.polygonlayeroptions
[Add a popup]: map-add-popup.md
-[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content&azure-portal=true
-[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes&azure-portal=true
-[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins&azure-portal=true
+[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content
+[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes
+[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
[Popup class]: /javascript/api/azure-maps-control/atlas.popup
[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
Learn more about migrating from Bing Maps to Azure Maps.
[Tile layer options]: /javascript/api/azure-maps-control/atlas.tilelayeroptions
[Show traffic on the map]: map-show-traffic.md
-[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options&azure-portal=true
-[Traffic control]: https://samples.azuremaps.com/?sample=traffic-controls&azure-portal=true
+[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options
+[Traffic control]: https://samples.azuremaps.com/?sample=traffic-controls
[Overlay an image]: map-add-image-layer.md
[Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
Learn more about migrating from Bing Maps to Azure Maps.
[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions
[Use the drawing tools module]: set-drawing-options.md
-[Drawing tools module code samples]: https://samples.azuremaps.com?azure-portal=true#drawing-tools-module
+[Drawing tools module code samples]: https://samples.azuremaps.com#drawing-tools-module
<!>
-[free account]: https://azure.microsoft.com/free/?azure-portal=true
+[free account]: https://azure.microsoft.com/free/
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
Learn more about migrating from Bing Maps to Azure Maps.
[atlas.data namespace]: /javascript/api/azure-maps-control/atlas.data
[atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
[atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
-[turf js]: https://turfjs.org?azure-portal=true
+[turf js]: https://turfjs.org
[Azure Maps Glossary]: glossary.md
[Add controls to a map]: map-add-controls.md
[Localization support in Azure Maps]: supported-languages.md
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
The following table provides the Azure Maps service APIs that provide similar fu
| Traffic Incidents | [Traffic Incident Details] |
| Elevations | <sup>1</sup> |
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
The following service APIs aren't currently available in Azure Maps:

* Optimized Itinerary Routes - Planned. Azure Maps Route API does support traveling salesman optimization for a single vehicle.
* Imagery Metadata – Primarily used for getting tile URLs in Bing Maps. Azure Maps has a standalone service for directly accessing map tiles.
-* Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md)
+* Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md)
Azure Maps also has these REST web
Learn more about the Azure Maps REST services.
[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md
[Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md
[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md [Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md
-[free account]: https://azure.microsoft.com/free/?azure-portal=true
+[free account]: https://azure.microsoft.com/free/
[manage authentication in Azure Maps]: how-to-manage-authentication.md
[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
Learn more about the Azure Maps REST services.
[Calculate route]: /rest/api/maps/route/getroutedirections
[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
-[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path?azure-portal=true
-[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic?azure-portal=true
+[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path
+[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
[quadtree tile pyramid math]: zoom-levels-and-tile-grid.md
-[turf js]: https://turfjs.org?azure-portal=true
-[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite?azure-portal=true
+[turf js]: https://turfjs.org
+[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite
[Map image render]: /rest/api/maps/render/getmapimagerytile
[Supported map styles]: supported-map-styles.md
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
The following table provides a high-level list of Bing Maps features and the rel
| Traffic Incidents | ✓ |
| Configuration driven maps | N/A |
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
Bing Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and highly secure, Azure Active Directory authentication.
Learn the details of how to migrate your Bing Maps application with these articl
[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
[azure.com]: https://azure.com
[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
- [Azure Maps Q&A]: /answers/topics/azure-maps.html
+[Azure Maps Q&A]: /answers/topics/azure-maps.html
[Azure support options]: https://azure.microsoft.com/support/options/
[Azure Maps product page]: https://azure.com/maps
[Azure Maps product documentation]: https://aka.ms/AzureMapsDocs
[Azure Maps code samples]: https://aka.ms/AzureMapsSamples
[Azure Maps developer forums]: https://aka.ms/AzureMapsForums
[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
-[Azure Maps Blog]: https://aka.ms/AzureMapsBlog
+[Azure Maps Blog]: https://aka.ms/AzureMapsTechBlog
[Azure Maps Feedback (UserVoice)]: https://aka.ms/AzureMapsFeedback
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
# Tutorial: Migrate a web app from Google Maps
-Most web apps, which use Google Maps, are using the Google Maps V3 JavaScript SDK. The Azure Maps Web SDK is the suitable Azure-based SDK to migrate to. The Azure Maps Web SDK lets you customize interactive maps with your own content and imagery. You can run your app on both web or mobile applications. This control makes use of WebGL, allowing you to render large data sets with high performance. Develop with this SDK using JavaScript or TypeScript. In this tutorial, you will learn how to:
+Most web apps that use Google Maps use the Google Maps V3 JavaScript SDK. The Azure Maps Web SDK is the suitable Azure-based SDK to migrate to. The Azure Maps Web SDK lets you customize interactive maps with your own content and imagery. You can run your app in both web and mobile applications. This control makes use of WebGL, allowing you to render large data sets with high performance. Develop with this SDK using JavaScript or TypeScript. This tutorial demonstrates:
> [!div class="checklist"]
> * Load a map
Most web apps, which use Google Maps, are using the Google Maps V3 JavaScript SD
> * Show traffic data
> * Add a ground overlay
-You will also learn:
+Also:
> [!div class="checklist"]
> * How to accomplish common mapping tasks using the Azure Maps Web SDK.
> * Best practices to improve performance and user experience.
-> * Tips on how to make your application using more advance features available in Azure Maps.
+> * Tips on using more advanced Azure Maps features in your application.
-If migrating an existing web application, check to see if it is using an open-source map control library. Examples of open-source map control library are: Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library, and you do not want to use the Azure Maps Web SDK. In such case, connect your application to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile)
-\| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The following points detail on how to use Azure Maps in some commonly used open-source map control libraries.
+If migrating an existing web application, check to see if it's using an open-source map control library. Examples of open-source map control libraries are: Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library, and you don't want to use the Azure Maps Web SDK. In that case, connect your application to the Azure Maps tile services ([road tiles]
+\| [satellite tiles]). The following points detail how to use Azure Maps in some commonly used open-source map control libraries.
-* Cesium - A 3D map control for the web. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-cesium) \| [Documentation](https://www.cesium.com/)
-* Leaflet – Lightweight 2D map control for the web. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-leaflet) \| [Documentation](https://leafletjs.com/)
-* OpenLayers - A 2D map control for the web that supports projections. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-openlayers) \| [Documentation](https://openlayers.org/)
+* Cesium - A 3D map control for the web. [Cesium documentation].
+* Leaflet – Lightweight 2D map control for the web. [Leaflet code sample] \| [Leaflet documentation].
+* OpenLayers - A 2D map control for the web that supports projections. [OpenLayers documentation].
If developing using a JavaScript framework, one of the following open-source projects may be useful:
-* [ng-azure-maps](https://github.com/arnaudleclerc/ng-azure-maps) - Angular 10 wrapper around Azure maps.
-* [AzureMapsControl.Components](https://github.com/arnaudleclerc/AzureMapsControl.Components) - An Azure Maps Blazor component.
-* [Azure Maps React Component](https://github.com/WiredSolutions/react-azure-maps) - A react wrapper for the Azure Maps control.
-* [Vue Azure Maps](https://github.com/rickyruiz/vue-azure-maps) - An Azure Maps component for Vue application.
+* [ng-azure-maps] - Angular 10 wrapper around Azure maps.
+* [AzureMapsControl.Components] - An Azure Maps Blazor component.
+* [Azure Maps React Component] - A react wrapper for the Azure Maps control.
+* [Vue Azure Maps] - An Azure Maps component for Vue application.
## Prerequisites
The table lists key API features in the Google Maps V3 JavaScript SDK and the su
| Distance Matrix service | ✓ |
| Elevation service | <sup>1</sup> |
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
## Notable differences in the web SDKs

The following are some key differences between the Google Maps and Azure Maps Web SDKs, to be aware of:

-- In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available. Embed the Web SDK package into apps. For more information, see this [documentation](how-to-use-map-control.md). This package also includes TypeScript definitions.
-- You first need to create an instance of the Map class in Azure Maps. Wait for the maps `ready` or `load` event to fire before programmatically interacting with the map. This order will ensure that all the map resources have been loaded and are ready to be accessed.
-- Both platforms use a similar tiling system for the base maps. The tiles in Google Maps are 256 pixels in dimension; however, the tiles in Azure Maps are 512 pixels in dimension. To get the same map view in Azure Maps as Google Maps, subtract Google Maps zoom level by the number one in Azure Maps.
-- Coordinates in Google Maps are referred to as `latitude,longitude`, while Azure Maps uses `longitude,latitude`. The Azure Maps format is aligned with the standard `[x, y]`, which is followed by most GIS platforms.
-- Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [*atlas.data* namespace](/javascript/api/azure-maps-control/atlas.data). There's also the [*atlas.Shape*](/javascript/api/azure-maps-control/atlas.shape) class. Use this class to wrap GeoJSON objects, to make it easy to update and maintain the data bindable way.
-- Coordinates in Azure Maps are defined as Position objects. A coordinate is specified as a number array in the format `[longitude,latitude]`. Or, it's specified using new atlas.data.Position(longitude, latitude).
+* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available. For more information on how to embed the Web SDK package into apps, see [Use the Azure Maps map control]. This package also includes TypeScript definitions.
+* You first need to create an instance of the Map class in Azure Maps. Wait for the maps `ready` or `load` event to fire before programmatically interacting with the map. This order ensures that all the map resources have been loaded and are ready to be accessed.
+* Both platforms use a similar tiling system for the base maps. The tiles in Google Maps are 256 pixels in dimension; however, the tiles in Azure Maps are 512 pixels in dimension. To get the same map view in Azure Maps as in Google Maps, use a zoom level one less than the zoom level used in Google Maps.
+* Coordinates in Google Maps are referred to as `latitude,longitude`, while Azure Maps uses `longitude,latitude`. The Azure Maps format is aligned with the standard `[x, y]`, which is followed by most GIS platforms.
+* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [*atlas.data* namespace]. There's also the [*atlas.Shape*] class. Use this class to wrap GeoJSON objects, making them easy to update and maintain in a data-bindable way.
+* Coordinates in Azure Maps are defined as Position objects. A coordinate is specified as a number array in the format `[longitude,latitude]`. Or, it's specified using `new atlas.data.Position(longitude, latitude)`.
> [!TIP]
- > The Position class has a static helper method for importing coordinates that are in "latitude, longitude" format. The [atlas.data.Position.fromLatLng](/javascript/api/azure-maps-control/atlas.data.position) method can often be replaced with the `new google.maps.LatLng` method in Google Maps code.
-- Rather than specifying styling information on each shape that is added to the map, Azure Maps separates styles from the data. Data is stored in a data source, and is connected to rendering layers. Azure Maps code uses data sources to render the data. This approach provides enhanced performance benefit. Additionally, many layers support data-driven styling where business logic can be added to layer style options. This support changes how individual shapes are rendered within a layer based on properties defined in the shape.
+ > The Position class has a static helper method for importing coordinates that are in "latitude, longitude" format. The [atlas.data.Position.fromLatLng] method can often be replaced with the `new google.maps.LatLng` method in Google Maps code.
+* Rather than specifying styling information on each shape that is added to the map, Azure Maps separates styles from the data. Data is stored in a data source, and is connected to rendering layers. Azure Maps code uses data sources to render the data. This approach provides enhanced performance benefits. Additionally, many layers support data-driven styling where business logic can be added to layer style options, changing how individual shapes are rendered within a layer based on properties defined in the shape, as in the sketch following this list.
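As a rough illustration of this data source and layer pattern, here's a minimal sketch, assuming a map instance named `map` whose `ready` event has fired; the `category` property and its values are hypothetical:

```javascript
//Create a data source and add it to the map.
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);

//Add a GeoJSON point that carries a business-logic property.
datasource.add(new atlas.data.Feature(new atlas.data.Point([-73.985, 40.747]), {
    category: 'office'
}));

//Render the data with a data-driven color expression instead of styling each shape.
map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
    color: ['match', ['get', 'category'], 'office', 'blue', 'gray']
}));
```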
## Web SDK side-by-side examples
-This collection has code samples for each platform, and each sample covers a common use case. It's intended to help you migrate your web application from Google Maps V3 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript. However, Azure Maps also provides TypeScript definitions as an additional option through an [npm module](how-to-use-map-control.md).
+This collection has code samples for each platform, and each sample covers a common use case. It's intended to help you migrate your web application from Google Maps V3 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript. However, Azure Maps also provides TypeScript definitions as another option through an [npm module].
**Topics**
-* [Load a map](#load-a-map)
-* [Localizing the map](#localizing-the-map)
-* [Setting the map view](#setting-the-map-view)
-* [Adding a marker](#adding-a-marker)
-* [Adding a custom marker](#adding-a-custom-marker)
-* [Adding a polyline](#adding-a-polyline)
-* [Adding a polygon](#adding-a-polygon)
-* [Display an info window](#display-an-info-window)
-* [Import a GeoJSON file](#import-a-geojson-file)*
-* [Marker clustering](#marker-clustering)
-* [Add a heat map](#add-a-heat-map)
-* [Overlay a tile layer](#overlay-a-tile-layer)
-* [Show traffic data](#show-traffic-data)
-* [Add a ground overlay](#add-a-ground-overlay)
-* [Add KML data to the map](#add-kml-data-to-the-map)
+* [Load a map]
+* [Localizing the map]
+* [Setting the map view]
+* [Adding a marker]
+* [Adding a custom marker]
+* [Adding a polyline]
+* [Adding a polygon]
+* [Display an info window]
+* [Import a GeoJSON file]
+* [Marker clustering]
+* [Add a heat map]
+* [Overlay a tile layer]
+* [Show traffic data]
+* [Add a ground overlay]
+* [Add KML data to the map]
### Load a map
Both SDKs have the same steps to load a map:
* Add a reference to the Map SDK.
-* Add a `div` tag to the body of the page, which will act as a placeholder for the map.
+* Add a `div` tag to the body of the page, which acts as a placeholder for the map.
* Create a JavaScript function that gets called when the page has loaded.
* Create an instance of the respective map class.
Both SDKs have the same steps to load a map:
* Google Maps requires an account key to be specified in the script reference of the API. Authentication credentials for Azure Maps are specified as options of the map class. This credential can be a subscription key or Azure Active Directory information.
* Google Maps accepts a callback function in the script reference of the API, which is used to call an initialization function to load the map. With Azure Maps, the onload event of the page should be used.
-* When referencing the `div` element in which the map will be rendered, the `Map` class in Azure Maps only requires the `id` value while Google Maps requires a `HTMLElement` object.
+* When referencing the `div` element in which the map renders, the `Map` class in Azure Maps only requires the `id` value while Google Maps requires an `HTMLElement` object.
* Coordinates in Azure Maps are defined as Position objects, which can be specified as a simple number array in the format `[longitude, latitude]`.
* The zoom level in Azure Maps is one level lower than the zoom level in Google Maps. This discrepancy is because of the difference in tile sizes between the two platforms' tiling systems.
* Azure Maps doesn't add any navigation controls to the map canvas. So, by default, a map doesn't have zoom buttons and map style buttons. But, there are control options for adding a map style picker, zoom buttons, compass or rotation control, and a pitch control.
-* An event handler is added in Azure Maps to monitor the `ready` event of the map instance. This event will fire when the map has finished loading the WebGL context and all the needed resources. Add any code you want to run after the map completes loading, to this event handler.
+* An event handler is added in Azure Maps to monitor the `ready` event of the map instance. This event fires when the map has finished loading the WebGL context and all the needed resources. Add any code you want to run after the map finishes loading to this event handler, as in the sketch below.
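Here's a minimal sketch of that Azure Maps pattern; the element ID, coordinates, and key placeholder are illustrative:

```javascript
var map = new atlas.Map('myMap', {
    center: [-73.985, 40.747],
    zoom: 12,
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>'
    }
});

//Wait until the map resources are ready before interacting with the map.
map.events.add('ready', function () {
    //It's now safe to add sources, layers, and controls.
    map.controls.add(new atlas.control.ZoomControl(), { position: 'top-right' });
});
```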
The following basic examples use Google Maps to load a map centered over New York at longitude -73.985, latitude 40.747, with the map at zoom level 12.
Display a Google Map centered and zoomed over a location.
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Simple Google Maps](media/migrate-google-maps-web-app/simple-google-map.png)
Load a map with the same view in Azure Maps along with a map style control and z
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Simple Azure Maps](media/migrate-google-maps-web-app/simple-azure-maps.png)
-Find detailed documentation on how to set up and use the Azure Maps map control in a web app, by clicking [here](how-to-use-map-control.md).
+For more information on how to set up and use the Azure Maps map control in a web app, see [Use the Azure Maps map control].
> [!NOTE]
> Unlike Google Maps, Azure Maps does not require an initial center and a zoom level to load the map. If this information is not provided when loading the map, Azure Maps will try to determine the city of the user, and will center and zoom the map there.
-**Additional resources:**
+**More resources:**
-* Azure Maps also provides navigation controls for rotating and pitching the map view, as documented [here](map-add-controls.md).
+* For more information on navigation controls for rotating and pitching the map view, see [Add controls to a map].
### Localizing the map
To localize Google Maps, add language and region parameters.
<script type="text/javascript" src=" https://maps.googleapis.com/maps/api/js?callback=initMap&key={api-Key}& language={language-code}&region={region-code}" async defer></script> ```
-Here is an example of Google Maps with the language set to "fr-FR".
+Here's an example of Google Maps with the language set to "fr-FR".
![Google Maps localization](media/migrate-google-maps-web-app/google-maps-localization.png)
#### After: Azure Maps
-Azure Maps provides two different ways of setting the language and regional view of the map. The first option is to add this information to the global *atlas* namespace. It will result in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to "Auto":
+Azure Maps provides two different ways of setting the language and regional view of the map. The first option is to add this information to the global *atlas* namespace. It results in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to "Auto":
```javascript atlas.setLanguage('fr-FR');
map = new atlas.Map('myMap', {
> [!NOTE]
> With Azure Maps, it is possible to load multiple map instances on the same page with different language and region settings. It is also possible to update these settings in the map after it has loaded.
-Find a detailed list of [supported languages](supported-languages.md) in Azure Maps.
+For more information on supported languages, see [Localization support in Azure Maps].
-Here is an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
+Here's an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
![Azure Maps localization](media/migrate-google-maps-web-app/azure-maps-localization.png)
map.setStyle({
![Azure Maps set view](media/migrate-google-maps-web-app/azure-maps-set-view.jpeg)
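For reference, the camera and base map style can be set independently after the map loads; a short sketch, with illustrative values:

```javascript
//Move the camera to a new center and zoom level.
map.setCamera({
    center: [-122.33, 47.6],
    zoom: 12
});

//Switch the base map to satellite imagery.
map.setStyle({
    style: 'satellite'
});
```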
-**Additional resources:**
+**More resources:**
-* [Choose a map style](choose-map-style.md)
-* [Supported map styles](supported-map-styles.md)
+* [Choose a map style]
+* [Supported map styles]
### Adding a marker
var marker = new google.maps.Marker({
**After: Azure Maps using HTML Markers**
-In Azure Maps, use HTML markers to display a point on the map. HTML markers are recommended for apps that only need to display a small number of points on the map. To use an HTML marker, create an instance of the `atlas.HtmlMarker` class. Set the text and position options, and add the marker to the map using the `map.markers.add` method.
+In Azure Maps, use HTML markers to display a point on the map. HTML markers are recommended for apps that only need to display a few points on the map. To use an HTML marker, create an instance of the `atlas.HtmlMarker` class. Set the text and position options, and add the marker to the map using the `map.markers.add` method.
```javascript //Create a HTML marker and add it to the map.
For a Symbol layer, add the data to a data source. Attach the data source to the
![Azure Maps symbol layer](media/migrate-google-maps-web-app/azure-maps-symbol-layer.png)
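A minimal sketch of the symbol layer approach just described, assuming the map's `ready` event has already fired:

```javascript
//Create a data source, add a point to it, and render it with a symbol layer.
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);

datasource.add(new atlas.data.Point([-73.985, 40.747]));

map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
    textOptions: {
        //Display a text label above the default icon.
        textField: 'Hello',
        offset: [0, -1.5]
    }
}));
```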
-**Additional resources:**
+**More resources:**
-- [Create a data source](create-data-source-web-sdk.md)
-- [Add a Symbol layer](map-add-pin.md)
-- [Add a Bubble layer](map-add-bubble-layer.md)
-- [Cluster point data](clustering-point-data-web-sdk.md)
-- [Add HTML Markers](map-add-custom-html.md)
-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
-- [Symbol layer icon options](/javascript/api/azure-maps-control/atlas.iconoptions)
-- [Symbol layer text option](/javascript/api/azure-maps-control/atlas.textoptions)
-- [HTML marker class](/javascript/api/azure-maps-control/atlas.htmlmarker)
-- [HTML marker options](/javascript/api/azure-maps-control/atlas.htmlmarkeroptions)
+* [Create a data source]
+* [Add a Symbol layer]
+* [Add a Bubble layer]
+* [Clustering point data in the Web SDK]
+* [Add HTML Markers]
+* [Use data-driven style expressions]
+* [Symbol layer icon options]
+* [Symbol layer text option]
+* [HTML marker class]
+* [HTML marker options]
### Adding a custom marker
-You may use Custom images to represent points on a map. The map below uses a custom image to display a point on the map. The point is displayed at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
+You may use custom images to represent points on a map. The following map uses a custom image to display a point on the map. The point is displayed at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
Symbol layers in Azure Maps support custom images as well. First, load the image
> [!TIP]
> To render advanced custom points, use multiple rendering layers together. For example, let's say you want to have multiple pushpins that have the same icon on different colored circles. Instead of creating a bunch of images for each color overlay, add a symbol layer on top of a bubble layer. Have the pushpins reference the same data source. This approach will be more efficient than creating and maintaining a bunch of different images.
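Here's a rough sketch of loading a custom image into the map's image sprite and referencing it from a symbol layer; the icon ID and image URL are placeholders, and `datasource` is assumed to already exist:

```javascript
//Load a custom image into the map's image sprite, then reference it by ID.
map.imageSprite.add('my-pushpin', '/images/pushpin.png').then(function () {
    map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
        iconOptions: {
            image: 'my-pushpin',
            //Anchor the bottom of the icon to the coordinate.
            anchor: 'bottom'
        }
    }));
});
```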
-**Additional resources:**
+**More resources:**
-- [Create a data source](create-data-source-web-sdk.md)
-- [Add a Symbol layer](map-add-pin.md)
-- [Add HTML Markers](map-add-custom-html.md)
-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
-- [Symbol layer icon options](/javascript/api/azure-maps-control/atlas.iconoptions)
-- [Symbol layer text option](/javascript/api/azure-maps-control/atlas.textoptions)
-- [HTML marker class](/javascript/api/azure-maps-control/atlas.htmlmarker)
-- [HTML marker options](/javascript/api/azure-maps-control/atlas.htmlmarkeroptions)
+* [Create a data source]
+* [Add a Symbol layer]
+* [Add HTML Markers]
+* [Use data-driven style expressions]
+* [Symbol layer icon options]
+* [Symbol layer text option]
+* [HTML marker class]
+* [HTML marker options]
### Adding a polyline
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
![Azure Maps polyline](media/migrate-google-maps-web-app/azure-maps-polyline.png)
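A condensed sketch of the pattern shown above, assuming `datasource` has already been created and added to the map:

```javascript
//Add a line to the data source and render it with a line layer.
datasource.add(new atlas.data.LineString([
    [-73.972, 40.789],
    [-73.949, 40.796]
]));

map.layers.add(new atlas.layer.LineLayer(datasource, null, {
    strokeColor: 'red',
    strokeWidth: 4
}));
```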
-**Additional resources:**
+**More resources:**
-- [Add lines to the map](map-add-line-layer.md)
-- [Line layer options](/javascript/api/azure-maps-control/atlas.linelayeroptions)
-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add lines to the map]
+* [Line layer options]
+* [Use data-driven style expressions]
### Adding a polygon
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
![Azure Maps polygon](media/migrate-google-maps-web-app/azure-maps-polygon.png)
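Similarly, a rough sketch for a polygon, again assuming an existing `datasource`:

```javascript
//Add a polygon to the data source and render it with a polygon layer.
datasource.add(new atlas.data.Polygon([[
    [-73.98, 40.76],
    [-73.97, 40.80],
    [-73.95, 40.79],
    [-73.98, 40.76]
]]));

map.layers.add(new atlas.layer.PolygonLayer(datasource, null, {
    fillColor: 'rgba(0, 170, 0, 0.4)'
}));
```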
-**Additional resources:**
+**More resources:**
-- [Add a polygon to the map](map-add-shape.md)
-- [Add a circle to the map](map-add-shape.md#add-a-circle-to-the-map)
-- [Polygon layer options](/javascript/api/azure-maps-control/atlas.polygonlayeroptions)
-- [Line layer options](/javascript/api/azure-maps-control/atlas.linelayeroptions)
-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add a polygon to the map]
+* [Add a circle to the map]
+* [Polygon layer options]
+* [Line layer options]
+* [Use data-driven style expressions]
### Display an info window
map.events.add('click', marker, function () {
> [!NOTE]
> You may do the same thing with a symbol, bubble, line or polygon layer by passing the chosen layer to the maps event code instead of a marker.
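A minimal sketch of the popup pattern, assuming an existing HTML marker named `marker`:

```javascript
//Create a popup, but don't open it yet.
var popup = new atlas.Popup({
    content: '<div style="padding:10px">Hello World</div>',
    pixelOffset: [0, -30]
});

//Open the popup at the marker's position when the marker is clicked.
map.events.add('click', marker, function () {
    popup.setOptions({ position: marker.getOptions().position });
    popup.open(map);
});
```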
-**Additional resources:**
+**More resources:**
-- [Add a popup](map-add-popup.md)
-- [Popup with Media Content](https://samples.azuremaps.com/?sample=popup-with-media-content)
-- [Popups on Shapes](https://samples.azuremaps.com/?sample=popups-on-shapes)
-- [Reusing Popup with Multiple Pins](https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins)
-- [Popup class](/javascript/api/azure-maps-control/atlas.popup)
-- [Popup options](/javascript/api/azure-maps-control/atlas.popupoptions)
+* [Add a popup]
+* [Popup with Media Content]
+* [Popups on Shapes]
+* [Reusing Popup with Multiple Pins]
+* [Popup class]
+* [Popup options]
### Import a GeoJSON file
-Google Maps supports loading and dynamically styling GeoJSON data via the `google.maps.Data` class. The functionality of this class aligns much more with the data-driven styling of Azure Maps. But, there's a key difference. With Google Maps, you specify a callback function. The business logic for styling each feature it processed individually in the UI thread. But in Azure Maps, layers support specifying data-driven expressions as styling options. These expressions are processed at render time on a separate thread. The Azure Maps approach improves rendering performance. This advantage is noticed when larger data sets need to be rendered quickly.
+Google Maps supports loading and dynamically styling GeoJSON data via the `google.maps.Data` class. The functionality of this class aligns more with the data-driven styling of Azure Maps. But, there's a key difference. With Google Maps, you specify a callback function, and the business logic for styling each feature is processed individually in the UI thread. But in Azure Maps, layers support specifying data-driven expressions as styling options. These expressions are processed at render time on a separate thread. The Azure Maps approach improves rendering performance. This advantage is noticed when larger data sets need to be rendered quickly.
-The following examples load a GeoJSON feed of all earthquakes over the last seven days from the USGS. Earthquakes data renders as scaled circles on the map. The color and scale of each circle is based on the magnitude of each earthquake, which is stored in the `"mag"` property of each feature in the data set. If the magnitude is greater than or equal to five, the circle will be red. If it's greater or equal to three, but less than five, the circle will be orange. If it's less than three, the circle will be green. The radius of each circle will be the exponential of the magnitude multiplied by 0.1.
+The following examples load a GeoJSON feed of all earthquakes over the last seven days from the USGS. Earthquakes data renders as scaled circles on the map. The color and scale of each circle is based on the magnitude of each earthquake, which is stored in the `"mag"` property of each feature in the data set. If the magnitude is greater than or equal to five, the circle is red. If it's greater than or equal to three, but less than five, the circle is orange. If it's less than three, the circle is green. The radius of each circle is the exponential of the magnitude multiplied by 0.1.
#### Before: Google Maps
GeoJSON is the base data type in Azure Maps. Import it into a data source using
![Azure Maps GeoJSON](media/migrate-google-maps-web-app/azure-maps-geojson.png)
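A condensed sketch of the bubble layer styling logic described above, assuming a `DataSource` named `datasource` that has already imported the USGS feed:

```javascript
map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
    //Color circles by magnitude: green below 3, orange from 3 to 5, red at 5 and above.
    color: [
        'step', ['get', 'mag'],
        'green',
        3, 'orange',
        5, 'red'
    ],
    //Scale the radius as e^magnitude multiplied by 0.1.
    radius: ['*', ['^', 2.71828, ['get', 'mag']], 0.1]
}));
```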
-**Additional resources:**
+**More resources:**
-* [Add a Symbol layer](map-add-pin.md)
-* [Add a Bubble layer](map-add-bubble-layer.md)
-* [Cluster point data](clustering-point-data-web-sdk.md)
-* [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add a Symbol layer]
+* [Add a Bubble layer]
+* [Clustering point data in the Web SDK]
+* [Use data-driven style expressions]
### Marker clustering
Add and manage data in a data source. Connect data sources and layers, then rend
* `clusterMaxZoom` - The maximum zoom level in which clustering occurs. If you zoom in more than this level, all points are rendered as symbols.
* `clusterProperties` - Defines custom properties that are calculated using expressions against all the points within each cluster and added to the properties of each cluster point.
-When clustering is enabled, the data source will send clustered and unclustered data points to layers for rendering. The data source is capable of clustering hundreds of thousands of data points. A clustered data point has the following properties:
+When clustering is enabled, the data source sends clustered and unclustered data points to layers for rendering. The data source is capable of clustering hundreds of thousands of data points. A clustered data point has the following properties:
| Property name | Type | Description |
|---------------|------|-------------|
| `cluster` | boolean | Indicates if the feature represents a cluster. |
| `cluster_id` | string | A unique ID for the cluster that can be used with the DataSource `getClusterExpansionZoom`, `getClusterChildren`, and `getClusterLeaves` methods. |
| `point_count` | number | The number of points the cluster contains. |
-| `point_count_abbreviated` | string | A string that abbreviates the `point_count` value if it is long. (for example, 4,000 becomes 4K) |
+| `point_count_abbreviated` | string | A string that abbreviates the `point_count` value if it's long (for example, 4,000 becomes 4K). |
The `DataSource` class has the following helper functions for accessing additional information about a cluster using the `cluster_id`.

| Method | Return type | Description |
|--------|-------------|-------------|
-| `getClusterChildren(clusterId: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters will be features with properties matching ClusteredProperties. |
-| `getClusterExpansionZoom(clusterId: number)` | Promise&lt;number&gt; | Calculates a zoom level at which the cluster will start expanding or break apart. |
+| `getClusterChildren(clusterId: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters are features with properties matching ClusteredProperties. |
+| `getClusterExpansionZoom(clusterId: number)` | Promise&lt;number&gt; | Calculates a zoom level at which the cluster starts expanding or breaking apart. |
| `getClusterLeaves(clusterId: number, limit: number, offset: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves all points in a cluster. Set the `limit` to return a subset of the points, and use the `offset` to page through the points. |
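For example, a common pattern, sketched here assuming a clustered `DataSource` named `datasource` and a bubble layer named `clusterLayer`, is to zoom into a cluster when it's selected:

```javascript
map.events.add('click', clusterLayer, function (e) {
    //Only act on clustered points.
    if (e.shapes && e.shapes[0].properties.cluster) {
        //Get the zoom level at which the cluster breaks apart, then ease the camera there.
        datasource.getClusterExpansionZoom(e.shapes[0].properties.cluster_id).then(function (zoom) {
            map.setCamera({
                center: e.position,
                zoom: zoom,
                type: 'ease'
            });
        });
    }
});
```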
-When rendering clustered data on the map, it's often best to use two or more layers. The following example uses three layers. A bubble layer for drawing scaled colored circles based on the size of the clusters. A symbol layer to render the cluster size as text. And, it uses a second symbol layer for rendering the unclustered points. There are many other ways to render clustered data. For more information, see the [Cluster point data](clustering-point-data-web-sdk.md) documentation.
+When rendering clustered data on the map, it's often best to use two or more layers. The following example uses three layers. A bubble layer for drawing scaled colored circles based on the size of the clusters. A symbol layer to render the cluster size as text. And, it uses a second symbol layer for rendering the unclustered points. For more information on other ways to render clustered data, see [Clustering point data in the Web SDK].
Directly import GeoJSON data using the `importDataFromUrl` function on the `DataSource` class, inside Azure Maps map.
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
![Azure Maps clustering](media/migrate-google-maps-web-app/azure-maps-clustering.png)
-**Additional resources:**
+**More resources:**
-* [Add a Symbol layer](map-add-pin.md)
-* [Add a Bubble layer](map-add-bubble-layer.md)
-* [Cluster point data](clustering-point-data-web-sdk.md)
-* [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add a Symbol layer]
+* [Add a Bubble layer]
+* [Clustering point data in the Web SDK]
+* [Use data-driven style expressions]
### Add a heat map
To create a heat map, load the "visualization" library by adding `&libraries=vis
#### After: Azure Maps
-Load the GeoJSON data into a data source and connect the data source to a heat map layer. The property that will be used for the weight can be passed into the `weight` option using an expression. Directly import GeoJSON data to Azure Maps using the `importDataFromUrl` function on the `DataSource` class.
+Load the GeoJSON data into a data source and connect the data source to a heat map layer. The property that is used for the weight can be passed into the `weight` option using an expression. Directly import GeoJSON data to Azure Maps using the `importDataFromUrl` function on the `DataSource` class.
```html <!DOCTYPE html>
Load the GeoJSON data into a data source and connect the data source to a heat m
![Azure Maps heat map](media/migrate-google-maps-web-app/azure-maps-heatmap.png)
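A condensed sketch of that setup, assuming a `DataSource` named `datasource` that has already imported point data with a `mag` property:

```javascript
map.layers.add(new atlas.layer.HeatMapLayer(datasource, null, {
    //Use the 'mag' property of each point as the heat map weight.
    weight: ['get', 'mag'],
    radius: 10,
    opacity: 0.8
}));
```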
-**Additional resources:**
+**More resources:**
-- [Add a heat map layer](map-add-heat-map-layer.md)
-- [Heat map layer class](/javascript/api/azure-maps-control/atlas.layer.heatmaplayer)
-- [Heat map layer options](/javascript/api/azure-maps-control/atlas.heatmaplayeroptions)
-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add a heat map layer]
+* [Heat map layer class]
+* [Heat map layer options]
+* [Use data-driven style expressions]
### Overlay a tile layer
map.layers.add(new atlas.layer.TileLayer({
> [!TIP]
> Tile requests can be captured using the `transformRequest` option of the map. This will allow you to modify or add headers to the request if desired.
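Here's a rough sketch of overlaying an external tile service; the tile URL is a placeholder, and `{z}`, `{x}`, and `{y}` are replaced by the map at render time:

```javascript
//Insert the tile layer below the map's label layers.
map.layers.add(new atlas.layer.TileLayer({
    tileUrl: 'https://example.com/tiles/{z}/{x}/{y}.png',
    opacity: 0.8,
    tileSize: 256
}), 'labels');
```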
-**Additional resources:**
+**More resources:**
-- [Add tile layers](map-add-tile-layer.md)
-- [Tile layer class](/javascript/api/azure-maps-control/atlas.layer.tilelayer)
-- [Tile layer options](/javascript/api/azure-maps-control/atlas.tilelayeroptions)
+* [Add tile layers]
+* [Tile layer class]
+* [Tile layer options]
### Show traffic data
map.setTraffic({
![Azure Maps traffic](media/migrate-google-maps-web-app/azure-maps-traffic.png)
-If you click on one of the traffic icons in Azure Maps, additional information is displayed in a popup.
+If you select one of the traffic icons in Azure Maps, more information is displayed in a popup.
![Azure Maps traffic incident](media/migrate-google-maps-web-app/azure-maps-traffic-incident.png)
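For reference, a minimal call to enable both traffic flow and incidents looks like the following sketch; 'relative' colors roads based on speed relative to free-flow, and other flow values include 'absolute' and 'relative-delay':

```javascript
map.setTraffic({
    //Show traffic incidents such as construction and accidents.
    incidents: true,
    //Color roads based on speed relative to free-flow conditions.
    flow: 'relative'
});
```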
-**Additional resources:**
+**More resources:**
-* [Show traffic on the map](map-show-traffic.md)
-* [Traffic overlay options](https://samples.azuremaps.com/?sample=traffic-overlay-options)
+* [Show traffic on the map]
+* [Traffic overlay options]
### Add a ground overlay
-Both Azure and Google maps support overlaying georeferenced images on the map. Georeferenced images move and scale as you pan and zoom the map. In Google Maps, georeferenced images are known as ground overlays while in Azure Maps they're referred to as image layers. They are great for building floor plans, overlaying old maps, or imagery from a drone.
+Both Azure and Google maps support overlaying georeferenced images on the map. Georeferenced images move and scale as you pan and zoom the map. In Google Maps, georeferenced images are known as ground overlays while in Azure Maps they're referred to as image layers. They're great for building floor plans, overlaying old maps, or imagery from a drone.
#### Before: Google Maps
Specify the URL to the image you want to overlay and a bounding box to bind the
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Google Maps image overlay](media/migrate-google-maps-web-app/google-maps-image-overlay.png)
Running this code in a browser will display a map that looks like the following
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This class requires a URL to an image and a set of coordinates for the four corners of the image. The image must be hosted either on the same domain or have CORS enabled.
> [!TIP]
-> If you only have north, south, east, west and rotation information, and you do not have coordinates for each corner of the image, you can use the static [`atlas.layer.ImageLayer.getCoordinatesFromEdges`](/javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-) method.
+> If you only have north, south, east, west and rotation information, and you do not have coordinates for each corner of the image, you can use the static [`atlas.layer.ImageLayer.getCoordinatesFromEdges`] method.
```html <!DOCTYPE html>
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
![Azure Maps image overlay](media/migrate-google-maps-web-app/azure-maps-image-overlay.png)
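A minimal sketch of the image layer setup; the URL and corner coordinates are placeholders:

```javascript
map.layers.add(new atlas.layer.ImageLayer({
    //The image must be on the same domain or served from a CORS-enabled endpoint.
    url: 'https://example.com/floorplan.png',
    //Corner coordinates in the order: top left, top right, bottom right, bottom left.
    coordinates: [
        [-0.21, 51.51],
        [-0.19, 51.51],
        [-0.19, 51.49],
        [-0.21, 51.49]
    ]
}));
```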
-**Additional resources:**
+**More resources:**
-- [Overlay an image](map-add-image-layer.md)
-- [Image layer class](/javascript/api/azure-maps-control/atlas.layer.imagelayer)
+* [Overlay an image]
+* [Image layer class]
### Add KML data to the map
-Both Azure and Google maps can import and render KML, KMZ and GeoRSS data on the map. Azure Maps also supports GPX, GML, spatial CSV files, GeoJSON, Well Known Text (WKT), Web-Mapping Services (WMS), Web-Mapping Tile Services (WMTS), and Web Feature Services (WFS). Azure Maps reads the files locally into memory and in most cases can handle much larger KML files.
+Both Azure and Google maps can import and render KML, KMZ and GeoRSS data on the map. Azure Maps also supports GPX, GML, spatial CSV files, GeoJSON, Well Known Text (WKT), Web-Mapping Services (WMS), Web-Mapping Tile Services (WMTS), and Web Feature Services (WFS). Azure Maps reads the files locally into memory and in most cases can handle larger KML files.
#### Before: Google Maps
Both Azure and Google maps can import and render KML, KMZ and GeoRSS data on the
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Google Maps KML](media/migrate-google-maps-web-app/google-maps-kml.png)
#### After: Azure Maps
-In Azure Maps, GeoJSON is the main data format used in the web SDK, additional spatial data formats can be easily integrated in using the [spatial IO module](/javascript/api/azure-maps-spatial-io/). This module has functions for both reading and writing spatial data and also includes a simple data layer which can easily render data from any of these spatial data formats. To read the data in a spatial data file, pass in a URL, or raw data as string or blob into the `atlas.io.read` function. This will return all the parsed data from the file that can then be added to the map. KML is a bit more complex than most spatial data format as it includes a lot more styling information. The `SpatialDataLayer` class supports rendering majority of these styles, however icons images have to be loaded into the map before loading the feature data, and ground overlays have to be added as layers to the map separately. When loading data via a URL, it should be hosted on a CORs enabled endpoint, or a proxy service should be passed in as an option into the read function.
+In Azure Maps, GeoJSON is the main data format used in the web SDK; more spatial data formats can be easily integrated using the [spatial IO module]. This module has functions for both reading and writing spatial data and also includes a simple data layer that can easily render data from any of these spatial data formats. To read the data in a spatial data file, pass a URL, or raw data as a string or blob, into the `atlas.io.read` function. This returns all the parsed data from the file, which can then be added to the map. KML is a bit more complex than most spatial data formats as it includes a lot more styling information. The `SpatialDataLayer` class supports most of these styles, however icon images have to be loaded into the map before loading the feature data, and ground overlays have to be added as layers to the map separately. When loading data via a URL, it should be hosted on a CORS-enabled endpoint, or a proxy service should be passed in as an option into the read function.
```javascript <!DOCTYPE html>
In Azure Maps, GeoJSON is the main data format used in the web SDK, additional s
</html> ```
-![Azure Maps KML](media/migrate-google-maps-web-app/azure-maps-kml.png)</center>
+![Azure Maps KML](media/migrate-google-maps-web-app/azure-maps-kml.png)
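A condensed sketch of the read-and-render flow described above, assuming the spatial IO module is loaded and a `DataSource` named `datasource` is already attached to a `SimpleDataLayer`; the KML URL is a placeholder:

```javascript
//Read a KML file from a CORS-enabled endpoint.
atlas.io.read('https://example.com/data.kml').then(function (r) {
    if (r) {
        //Load any icons parsed from the KML styles into the map's image sprite first.
        var iconPromises = [];
        if (r.icons) {
            for (var name in r.icons) {
                iconPromises.push(map.imageSprite.add(name, r.icons[name]));
            }
        }
        Promise.all(iconPromises).then(function () {
            //Load the parsed features into the data source for rendering.
            datasource.add(r);
        });
    }
});
```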
-**Additional resources:**
+**More resources:**
-- [atlas.io.read function](/javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-)
-- [SimpleDataLayer](/javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer)
-- [SimpleDataLayerOptions](/javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions)
+* [atlas.io.read function]
+* [SimpleDataLayer]
+* [SimpleDataLayerOptions]
-## Additional code samples
+## More code samples
-The following are some additional code samples related to Google Maps migration:
+The following are some more code samples related to Google Maps migration:
-* [Drawing tools](map-add-drawing-toolbar.md)
-* [Limit Map to Two Finger Panning](https://samples.azuremaps.com/?sample=limit-map-to-two-finger-panning)
-* [Limit Scroll Wheel Zoom](https://samples.azuremaps.com/?sample=limit-scroll-wheel-zoom)
-* [Create a Fullscreen Control](https://samples.azuremaps.com/?sample=fullscreen-control)
+* [Drawing tools]
+* [Limit Map to Two Finger Panning]
+* [Limit Scroll Wheel Zoom]
+* [Create a Fullscreen Control]
-* [Using the Azure Maps services module](how-to-use-services-module.md)
-* [Search for points of interest](map-search-location.md)
-* [Get information from a coordinate (reverse geocode)](map-get-information-from-coordinate.md)
-* [Show directions from A to B](map-route.md)
-* [Search Autosuggest with JQuery UI](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui)
+* [Using the Azure Maps services module]
+* [Search for points of interest]
+* [Get information from a coordinate (reverse geocode)]
+* [Show directions from A to B]
+* [Search Autosuggest with JQuery UI]
## Google Maps V3 to Azure Maps Web SDK class mapping
The following appendix provides a cross reference of the commonly used classes i
| `google.maps.PolygonOptions` | [atlas.layer.PolygonLayer](/javascript/api/azure-maps-control/atlas.layer.polygonlayer)<br/>[atlas.PolygonLayerOptions](/javascript/api/azure-maps-control/atlas.polygonlayeroptions)<br/>[atlas.layer.LineLayer](/javascript/api/azure-maps-control/atlas.layer.linelayer)<br/>[atlas.LineLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions) |
| `google.maps.Polyline` | [atlas.data.LineString](/javascript/api/azure-maps-control/atlas.data.linestring) |
| `google.maps.PolylineOptions` | [atlas.layer.LineLayer](/javascript/api/azure-maps-control/atlas.layer.linelayer)<br/>[atlas.LineLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions) |
-| `google.maps.Circle` | See [Add a circle to the map](map-add-shape.md#add-a-circle-to-the-map) |
+| `google.maps.Circle` | See [Add a circle to the map] |
| `google.maps.ImageMapType` | [atlas.TileLayer](/javascript/api/azure-maps-control/atlas.layer.tilelayer) |
| `google.maps.ImageMapTypeOptions` | [atlas.TileLayerOptions](/javascript/api/azure-maps-control/atlas.tilelayeroptions) |
| `google.maps.GroundOverlay` | [atlas.layer.ImageLayer](/javascript/api/azure-maps-control/atlas.layer.imagelayer)<br/>[atlas.ImageLayerOptions](/javascript/api/azure-maps-control/atlas.imagelayeroptions) |
The Azure Maps Web SDK includes a services module, which can be loaded separatel
| `google.maps.GeocoderRequest` | [atlas.SearchAddressOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressoptions)<br/>[atlas.SearchAddressReverseOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressreverseoptions)<br/>[atlas.SearchAddressReverseCrossStreetOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressreversecrossstreetoptions)<br/>[atlas.SearchAddressStructuredOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressstructuredoptions)<br/>[atlas.SearchAlongRouteOptions](/javascript/api/azure-maps-rest/atlas.service.searchalongrouteoptions)<br/>[atlas.SearchFuzzyOptions](/javascript/api/azure-maps-rest/atlas.service.searchfuzzyoptions)<br/>[atlas.SearchInsideGeometryOptions](/javascript/api/azure-maps-rest/atlas.service.searchinsidegeometryoptions)<br/>[atlas.SearchNearbyOptions](/javascript/api/azure-maps-rest/atlas.service.searchnearbyoptions)<br/>[atlas.SearchPOIOptions](/javascript/api/azure-maps-rest/atlas.service.searchpoioptions)<br/>[atlas.SearchPOICategoryOptions](/javascript/api/azure-maps-rest/atlas.service.searchpoicategoryoptions) |
| `google.maps.DirectionsService` | [atlas.service.RouteUrl](/javascript/api/azure-maps-rest/atlas.service.routeurl) |
| `google.maps.DirectionsRequest` | [atlas.CalculateRouteDirectionsOptions](/javascript/api/azure-maps-rest/atlas.service.calculateroutedirectionsoptions) |
-| `google.maps.places.PlacesService` | [atlas.service.SearchUrl](/javascript/api/azure-maps-rest/atlas.service.searchurl) |
+| `google.maps.places.PlacesService` | [atlas.service.SearchUrl](/javascript/api/azure-maps-rest/atlas.service.searchurl) |
## Libraries
-Libraries add additional functionality to the map. Many of these libraries are in
+Libraries add more functionality to the map. Many of these libraries are in
the core SDK of Azure Maps. Here are some equivalent classes to use in place of these Google Maps libraries
Learn more about migrating to Azure Maps:
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[free account]: https://azure.microsoft.com/free/
[manage authentication in Azure Maps]: how-to-manage-authentication.md
+
+[road tiles]: /rest/api/maps/render/getmaptile
+[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+
+[Cesium documentation]: https://www.cesium.com/
+[Leaflet code sample]: https://samples.azuremaps.com/?sample=render-azure-maps-in-leaflet
+[Leaflet documentation]: https://leafletjs.com/
+[OpenLayers documentation]: https://openlayers.org/
+
+[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
+[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components
+[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
+[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
+
+[*atlas.data* namespace]: /javascript/api/azure-maps-control/atlas.data
+[*atlas.Shape*]: /javascript/api/azure-maps-control/atlas.shape
+[atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
+
+[npm module]: how-to-use-map-control.md
+
+[Load a map]: #load-a-map
+[Localizing the map]: #localizing-the-map
+[Setting the map view]: #setting-the-map-view
+[Adding a marker]: #adding-a-marker
+[Adding a custom marker]: #adding-a-custom-marker
+[Adding a polyline]: #adding-a-polyline
+[Adding a polygon]: #adding-a-polygon
+[Display an info window]: #display-an-info-window
+[Import a GeoJSON file]: #import-a-geojson-file
+[Marker clustering]: #marker-clustering
+[Add a heat map]: #add-a-heat-map
+[Overlay a tile layer]: #overlay-a-tile-layer
+[Show traffic data]: #show-traffic-data
+[Add a ground overlay]: #add-a-ground-overlay
+[Add KML data to the map]: #add-kml-data-to-the-map
+
+[Use the Azure Maps map control]: how-to-use-map-control.md
+[Add controls to a map]: map-add-controls.md
+[Localization support in Azure Maps]: supported-languages.md
+
+[Choose a map style]: choose-map-style.md
+[Supported map styles]: supported-map-styles.md
+
+[Create a data source]: create-data-source-web-sdk.md
+[Add a Symbol layer]: map-add-pin.md
+[Add a Bubble layer]: map-add-bubble-layer.md
+[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md
+[Add HTML Markers]: map-add-custom-html.md
+[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[Symbol layer icon options]: /javascript/api/azure-maps-control/atlas.iconoptions
+[Symbol layer text option]: /javascript/api/azure-maps-control/atlas.textoptions
+[HTML marker class]: /javascript/api/azure-maps-control/atlas.htmlmarker
+[HTML marker options]: /javascript/api/azure-maps-control/atlas.htmlmarkeroptions
+
+[Add lines to the map]: map-add-line-layer.md
+[Line layer options]: /javascript/api/azure-maps-control/atlas.linelayeroptions
+
+[Add a polygon to the map]: map-add-shape.md
+[Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
+[Polygon layer options]: /javascript/api/azure-maps-control/atlas.polygonlayeroptions
+
+[Add a popup]: map-add-popup.md
+[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content
+[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes
+[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
+[Popup class]: /javascript/api/azure-maps-control/atlas.popup
+[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
+[spatial IO module]: /javascript/api/azure-maps-spatial-io/
+
+[Add a heat map layer]: map-add-heat-map-layer.md
+[Heat map layer class]: /javascript/api/azure-maps-control/atlas.layer.heatmaplayer
+[Heat map layer options]: /javascript/api/azure-maps-control/atlas.heatmaplayeroptions
+
+[Add tile layers]: map-add-tile-layer.md
+[Tile layer class]: /javascript/api/azure-maps-control/atlas.layer.tilelayer
+[Tile layer options]: /javascript/api/azure-maps-control/atlas.tilelayeroptions
+
+[Show traffic on the map]: map-show-traffic.md
+[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options
+
+[`atlas.layer.ImageLayer.getCoordinatesFromEdges`]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
+[Overlay an image]: map-add-image-layer.md
+[Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
+
+[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-
+[SimpleDataLayer]: /javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer
+[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions
+[Drawing tools]: map-add-drawing-toolbar.md
+[Limit Map to Two Finger Panning]: https://samples.azuremaps.com/?sample=limit-map-to-two-finger-panning
+[Limit Scroll Wheel Zoom]: https://samples.azuremaps.com/?sample=limit-scroll-wheel-zoom
+[Create a Fullscreen Control]: https://samples.azuremaps.com/?sample=fullscreen-control
+[Using the Azure Maps services module]: how-to-use-services-module.md
+[Search for points of interest]: map-search-location.md
+[Get information from a coordinate (reverse geocode)]: map-get-information-from-coordinate.md
+[Show directions from A to B]: map-route.md
+[Search Autosuggest with JQuery UI]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Both Azure and Google Maps provide access to spatial APIs through REST web services. The API interfaces of these platforms perform similar functionalities. But, they each use different naming conventions and response objects.
-In this tutorial, you will learn how to:
+This tutorial demonstrates how to:
> [!div class="checklist"]
> * Forward and reverse geocoding
In this tutorial, you will learn how to:
> * Calculate a distance matrix
> * Get time zone details
-You will also learn:
+You'll also learn:
> [!div class="checklist"]
> * Which Azure Maps REST service to use when migrating from a Google Maps Web Service
You will also learn:
The table shows the Azure Maps service APIs, which have a similar functionality to the listed Google Maps service APIs.
-| Google Maps service API | Azure Maps service API |
-|-|--|
-| Directions | [Route](/rest/api/maps/route) |
-| Distance Matrix | [Route Matrix](/rest/api/maps/route/postroutematrixpreview) |
-| Geocoding | [Search](/rest/api/maps/search) |
-| Places Search | [Search](/rest/api/maps/search) |
-| Place Autocomplete | [Search](/rest/api/maps/search) |
-| Snap to Road | See [Calculate routes and directions](#calculate-routes-and-directions) section. |
-| Speed Limits | See [Reverse geocode a coordinate](#reverse-geocode-a-coordinate) section. |
-| Static Map | [Render](/rest/api/maps/render/getmapimage) |
-| Time Zone | [Time Zone](/rest/api/maps/timezone) |
-| Elevation | [Elevation](/rest/api/maps/elevation)<sup>1</sup> |
-
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+| Google Maps service API | Azure Maps service API |
+|-||
+| Directions | [Route] |
+| Distance Matrix | [Route Matrix] |
+| Geocoding | [Search] |
+| Places Search | [Search] |
+| Place Autocomplete | [Search] |
+| Snap to Road | See [Calculate routes and directions] section. |
+| Speed Limits | See [Reverse geocode a coordinate] section. |
+| Static Map | [Render] |
+| Time Zone | [Time Zone] |
+| Elevation | [Elevation]<sup>1</sup> |
+
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps application, see [Create elevation data & services](elevation-data-services.md).
The following service APIs aren't currently available in Azure Maps:
-- Geolocation - Azure Maps does have a service called Geolocation, but it provides IP Address to location information, but does not currently support cell tower or WiFi triangulation.
-- Places details and photos - Phone numbers and website URL are available in the Azure Maps search API.
-- Map URLs
-- Nearest Roads - This is achievable using the Web SDK as shown [here](https://samples.azuremaps.com/?sample=basic-snap-to-road-logic), but not available as a service currently.
-- Static street view
+* Geolocation - Azure Maps does have a service called Geolocation, but it only provides IP address to location information and doesn't currently support cell tower or WiFi triangulation.
+* Places details and photos - Phone numbers and website URLs are available in the Azure Maps search API.
+* Map URLs
+* Nearest Roads - This is achievable using the Web SDK as demonstrated in the [Basic snap to road logic] sample, but is not currently available as a service.
+* Static street view
Azure Maps has several other REST web services that may be of interest:
-- [Spatial operations](/rest/api/maps/spatial): Offload complex spatial calculations and operations, such as geofencing, to a service.
-- [Traffic](/rest/api/maps/traffic): Access real-time traffic flow and incident data.
+* [Spatial operations]: Offload complex spatial calculations and operations, such as geofencing, to a service.
+* [Traffic]: Access real-time traffic flow and incident data.
## Prerequisites
Geocoding is the process of converting an address into a coordinate. For example
Azure Maps provides several methods for geocoding addresses:
-- [**Free-form address geocoding**](/rest/api/maps/search/getsearchaddress): Specify a single address string and process the request immediately. "1 Microsoft way, Redmond, WA" is an example of a single address string. This API is recommended if you need to geocode individual addresses quickly.
-- [**Structured address geocoding**](/rest/api/maps/search/getsearchaddressstructured): Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This API is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-- [**Batch address geocoding**](/rest/api/maps/search/postsearchaddressbatchpreview): Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses will be geocoded in parallel on the server and when completed the full result set can be downloaded. This is recommended for geocoding large data sets.
-- [**Fuzzy search**](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string. This string can be an address, place, landmark, point of interest, or point of interest category. This API process the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
-- [**Fuzzy batch search**](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* **[Free-form address geocoding]**: Specify a single address string and process the request immediately. "1 Microsoft way, Redmond, WA" is an example of a single address string. This API is recommended if you need to geocode individual addresses quickly.
+* **[Structured address geocoding]**: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This API is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* **[Batch address geocoding]**: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This is recommended for geocoding large data sets.
+* **[Fuzzy search]**: This API combines address geocoding with point of interest search. This API takes in a free-form string. This string can be an address, place, landmark, point of interest, or point of interest category. This API processes the request in near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
+* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
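As a rough illustration, a free-form geocoding request is a single HTTP GET against the Search service; the subscription key is a placeholder:

```javascript
//Free-form address geocoding with the Azure Maps Search service.
const url = 'https://atlas.microsoft.com/search/address/json' +
    '?api-version=1.0' +
    '&subscription-key=<Your Azure Maps Key>' +
    '&query=' + encodeURIComponent('1 Microsoft way, Redmond, WA');

fetch(url)
    .then(response => response.json())
    .then(data => {
        //The first result's position holds the geocoded coordinate.
        const pos = data.results[0].position;
        console.log(pos.lat, pos.lon);
    });
```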
The following table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross-references the Google Maps API parameters with the com
| `address` | `query` |
| `bounds` | `topLeft` and `btmRight` |
| `components` | `streetNumber`<br/>`streetName`<br/>`crossStreet`<br/>`postalCode`<br/>`municipality` - city / town<br/>`municipalitySubdivision` – neighborhood, sub / super city<br/>`countrySubdivision` - state or province<br/>`countrySecondarySubdivision` - county<br/>`countryTertiarySubdivision` - district<br/>`countryCode` - two letter country/region code |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `region` | `countrySet` |
-An example of how to use the search service is documented [here](how-to-search-for-address.md). Be sure to review [best practices for search](how-to-use-best-practices-for-search.md).
+For more information on using the search service, see [Search for a location using Azure Maps Search services]. Be sure to review [best practices for search].
> [!TIP]
> The free-form address geocoding and fuzzy search APIs can be used in autocomplete mode by adding `&typeahead=true` to the request URL. This will tell the server that the input text is likely partial, and the search will go into predictive mode.
Reverse geocoding is the process of converting geographic coordinates into an ap
Azure Maps provides several reverse geocoding methods:
-- [**Address reverse geocoder**](/rest/api/maps/search/getsearchaddressreverse): Specify a single geographic coordinate to get the approximate address corresponding to this coordinate. Processes the request near real time.
-- [**Cross street reverse geocoder**](/rest/api/maps/search/getsearchaddressreversecrossstreet): Specify a single geographic coordinate to get nearby cross street information and process the request immediately. For example, you may receive the following cross streets 1st Ave and Main St.
-- [**Batch address reverse geocoder**](/rest/api/maps/search/postsearchaddressreversebatchpreview): Create a request containing up to 10,000 coordinates and have them processed over a period of time. All data will be processed in parallel on the server. When the request completes, you can download the full set of results.
+* **[Address reverse geocoder]**: Specify a single geographic coordinate to get the approximate address corresponding to this coordinate. Processes the request near real time.
+* **[Cross street reverse geocoder]**: Specify a single geographic coordinate to get nearby cross street information and process the request immediately. For example, you may receive the following cross streets: 1st Ave and Main St.
+* **[Batch address reverse geocoder]**: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All data is processed in parallel on the server. When the request completes, you can download the full set of results.
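For example, a single reverse geocode request passes the coordinate as `{lat},{lon}` in the `query` parameter; the key is a placeholder:

```javascript
//Reverse geocode a coordinate with the Azure Maps Search service.
const url = 'https://atlas.microsoft.com/search/address/reverse/json' +
    '?api-version=1.0' +
    '&subscription-key=<Your Azure Maps Key>' +
    '&query=47.59118,-122.3327';

fetch(url)
    .then(response => response.json())
    .then(data => {
        //Each result contains a formatted freeform address.
        console.log(data.addresses[0].address.freeformAddress);
    });
```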
This table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.

| Google Maps API parameter | Comparable Azure Maps API parameter |
|---------------------------|-------------------------------------|
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `latlng` | `query` |
| `location_type` | *N/A* |
| `result_type` | `entityType` |
-Review [best practices for search](how-to-use-best-practices-for-search.md).
+For more information, see [best practices for search].
The Azure Maps reverse geocoding API has some other features, which aren't available in Google Maps. These features might be useful to integrate with your application, as you migrate your app:
Point of interest data can be searched in Google Maps using the Places Search AP
Azure Maps provides several search APIs for points of interest:
-- [**POI search**](/rest/api/maps/search/getsearchpoi): Search for points of interests by name. For example, "Starbucks".
-- [**POI category search**](/rest/api/maps/search/getsearchpoicategory): Search for points of interests by category. For example, "restaurant".
-- [**Nearby search**](/rest/api/maps/search/getsearchnearby): Searches for points of interests that are within a certain distance of a location.
-- [**Fuzzy search**](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category. It processes the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
-- [**Search within geometry**](/rest/api/maps/search/postsearchinsidegeometry): Search for points of interests that are within a specified geometry. For example, search a point of interest within a polygon.
-- [**Search along route**](/rest/api/maps/search/postsearchalongroute): Search for points of interests that are along a specified route path.
-- [**Fuzzy batch search**](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests. Processed the request over a period of time. All data will be processed in parallel on the server. When the request completes processing, you can download the full set of result.
+* **[POI search]**: Search for points of interest by name. For example, "Starbucks" (see the example requests after this list).
+* **[POI category search]**: Search for points of interest by category. For example, "restaurant".
+* **[Nearby search]**: Searches for points of interest that are within a certain distance of a location.
+* **[Fuzzy search]**: This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category. It processes the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same text box.
+* **[Search within geometry]**: Search for points of interest that are within a specified geometry. For example, search for a point of interest within a polygon.
+* **[Search along route]**: Search for points of interest that are along a specified route path.
+* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or points of interest, and have them processed over a period of time. All data is processed in parallel on the server. When the request completes processing, you can download the full set of results.
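
As a sketch, a POI search by name and a fuzzy search in typeahead mode are both simple `GET` requests; the query values, coordinates, and key placeholder are illustrative:

```text
GET https://atlas.microsoft.com/search/poi/json?api-version=1.0&query=starbucks&lat=47.6062&lon=-122.3321&subscription-key={Your-Azure-Maps-Subscription-key}

GET https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&query=pizza&typeahead=true&subscription-key={Your-Azure-Maps-Subscription-key}
```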
Currently Azure Maps doesn't have a comparable API to the Text Search API in Google Maps.

> [!TIP]
> The POI search, POI category search, and fuzzy search APIs can be used in autocomplete mode by adding `&typeahead=true` to the request URL. This tells the server that the input text is likely partial. The API then conducts the search in predictive mode.
-Review the [best practices for search](how-to-use-best-practices-for-search.md) documentation.
+For more information, see [best practices for search].
### Find place from text
-Use the Azure Maps [POI search](/rest/api/maps/search/getsearchpoi) and [Fuzzy search](/rest/api/maps/search/getsearchfuzzy) to search for points of interests by name or address.
+Use the Azure Maps [POI search] and [Fuzzy search] to search for points of interest by name or address.
The table cross-references the Google Maps API parameters with the comparable Azure Maps API parameters.
The table cross-references the Google Maps API parameters with the comparable Az
| `fields` | *N/A* |
| `input` | `query` |
| `inputtype` | *N/A* |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `locationbias` | `lat`, `lon` and `radius`<br/>`topLeft` and `btmRight`<br/>`countrySet` |

### Nearby search
-Use the [Nearby search](/rest/api/maps/search/getsearchnearby) API to retrieve nearby points of interests, in Azure Maps.
+Use the [Nearby search] API to retrieve nearby points of interest in Azure Maps, as shown in the following example.
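
A representative request, with illustrative coordinates and a placeholder key, searches within a 1,000-meter radius:

```text
GET https://atlas.microsoft.com/search/nearby/json?api-version=1.0&lat=47.6062&lon=-122.3321&radius=1000&subscription-key={Your-Azure-Maps-Subscription-key}
```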
The table shows the Google Maps API parameters with the comparable Azure Maps API parameters.

| Google Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
| `keyword` | `categorySet` and `brandSet` |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `location` | `lat` and `lon` |
| `maxprice` | *N/A* |
| `minprice` | *N/A* |
The table shows the Google Maps API parameters with the comparable Azure Maps AP
| `pagetoken` | `ofs` and `limit` |
| `radius` | `radius` |
| `rankby` | *N/A* |
-| `type` | `categorySet` – See [supported search categories](supported-search-categories.md) documentation. |
+| `type` | `categorySet` – For more information, see [supported search categories]. |
## Calculate routes and directions
Calculate routes and directions using Azure Maps. Azure Maps has many of the sam
The Azure Maps routing service provides the following APIs for calculating routes:

-- [**Calculate route**](/rest/api/maps/route/getroutedirections): Calculate a route and have the request processed immediately. This API supports both GET and POST requests. POST requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues. The POST Route Direction in Azure Maps has an option can that take in thousands of [supporting points](/rest/api/maps/route/postroutedirections#supportingpoints) and will use them to recreate a logical route path between them (snap to road).
-- [**Batch route**](/rest/api/maps/route/postroutedirectionsbatchpreview): Create a request containing up to 1,000 route request and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* **[Calculate route]**: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using many of the route options, to ensure that the URL request doesn't become too long and cause issues. The `POST` Route Direction in Azure Maps has an option that can take in thousands of [supporting points] and use them to recreate a logical route path between them (snap to road). See the example request after this list.
+* **[Batch route]**: Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data is processed in parallel on the server, and when completed, the full result set can be downloaded.
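
As a sketch, a simple two-point car route uses colon-separated latitude,longitude pairs in the `query` parameter; the coordinates and key placeholder are illustrative:

```text
GET https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=47.6450,-122.1350:47.6062,-122.3321&travelMode=car&subscription-key={Your-Azure-Maps-Subscription-key}
```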
The table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
The table cross-references the Google Maps API parameters with the comparable AP
| `avoid` | `avoid` |
| `departure_time` | `departAt` |
| `destination` | `query` – coordinates in the format `"lat0,lon0:lat1,lon1…."` |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `mode` | `travelMode` |
| `optimize` | `computeBestOrder` |
| `origin` | `query` |
Azure Maps routing API has other features that aren't available in Google Maps.
* Support commercial vehicle route parameters, such as vehicle dimensions, weight, number of axles, and cargo type.
* Specify maximum vehicle speed.
-In addition, the route service in Azure Maps supports [calculating routable ranges](/rest/api/maps/route/getrouterange). Calculating routable ranges is also known as isochrones. It entails generating a polygon covering an area that can be traveled to in any direction from an origin point. All under a specified amount of time or amount of fuel or charge.
+In addition, the route service in Azure Maps supports [calculating routable ranges], also known as isochrones. This entails generating a polygon covering an area that can be traveled to in any direction from an origin point, all within a specified amount of time, fuel, or charge.
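
For example, a 30-minute drive-time isochrone from an origin point might be requested as follows; `timeBudgetInSec` is one of several budget options, and the coordinates and key placeholder are illustrative:

```text
GET https://atlas.microsoft.com/route/range/json?api-version=1.0&query=47.6062,-122.3321&timeBudgetInSec=1800&subscription-key={Your-Azure-Maps-Subscription-key}
```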
-Review the [best practices for routing](how-to-use-best-practices-for-routing.md) documentation.
+For more information, see [best practices for routing].
## Retrieve a map image
-Azure Maps provides an API for rendering the static map images with data overlaid. The [Map image render](/rest/api/maps/render/getmapimagerytile) API in Azure Maps is comparable to the static map API in Google Maps.
+Azure Maps provides an API for rendering static map images with data overlaid. The [Map image render] API in Azure Maps is comparable to the static map API in Google Maps.
> [!NOTE]
> Azure Maps requires the center, marker, and path locations to be coordinates in "longitude,latitude" format, whereas Google Maps uses the "latitude,longitude" format. Addresses need to be geocoded first.
The table cross-references the Google Maps API parameters with the comparable AP
| Google Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
| `center` | `center` |
| `format` | `format` – specified as part of the URL path. Currently only PNG is supported. |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `maptype` | `layer` and `style` – For more information, see [Supported map styles](supported-map-styles.md). |
| `markers` | `pins` |
| `path` | `path` |
The table cross-references the Google Maps API parameters with the comparable AP
> [!NOTE]
> In the Azure Maps tile system, tiles are twice the size of map tiles used in Google Maps. As such, the zoom level value in Azure Maps will appear one zoom level closer compared to Google Maps. To compensate for this difference, decrement the zoom level in the requests you're migrating.
-For more information, see the [How-to guide on the map image render API](how-to-render-custom-data.md).
+For more information, see [Render custom data on a raster map].
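
As a sketch, a basic static map image request centers the map with a "longitude,latitude" coordinate; the values and key placeholder are illustrative:

```text
GET https://atlas.microsoft.com/map/static/png?api-version=1.0&layer=basic&style=main&zoom=12&center=-122.3321,47.6062&subscription-key={Your-Azure-Maps-Subscription-key}
```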
In addition to being able to generate a static map image, the Azure Maps render service provides the ability to directly access map tiles in raster (PNG) and vector format:

-- [**Map tile**](/rest/api/maps/render/getmaptile): Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).
-- [**Map imagery tile**](/rest/api/maps/render/getmapimagerytile): Retrieve aerial and satellite imagery tiles.
+* **[Map tile]**: Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background). See the example request after this list.
+* **[Map imagery tile]**: Retrieve aerial and satellite imagery tiles.
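
For example, an individual base map tile is addressed by zoom level and x/y tile indices; the indices shown here are illustrative values:

```text
GET https://atlas.microsoft.com/map/tile/png?api-version=1.0&layer=basic&style=main&zoom=10&x=163&y=357&subscription-key={Your-Azure-Maps-Subscription-key}
```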
> [!TIP]
-> Many Google Maps applications where switched from interactive map experiences to static map images a few years ago. This was done as a cost saving method. In Azure Maps, it is usually more cost effective to use the interactive map control in the Web SDK. The interactive map control charges based the number of tile loads. Map tiles in Azure Maps are large. Often, it takes only a few tiles to recreate the same map view as a static map. Map tiles are cached automatically by the browser. As such, the interactive map control often generates a fraction of a transaction when reproducing a static map view. Panning and zooming will load more tiles; however, there are options in the map control to disable this behavior. The interactive map control also provides a lot more visualization options than the static map services.
+> Many Google Maps applications were switched from interactive map experiences to static map images a few years ago as a cost-saving measure. In Azure Maps, it's usually more cost effective to use the interactive map control in the Web SDK. The interactive map control charges based on the number of tile loads. Map tiles in Azure Maps are large, so it often takes only a few tiles to recreate the same map view as a static map. Map tiles are cached automatically by the browser. As such, the interactive map control often generates a fraction of a transaction when reproducing a static map view. Panning and zooming will load more tiles; however, there are options in the map control to disable this behavior. The interactive map control also provides a lot more visualization options than the static map services.
### Marker URL parameter format comparison
In Azure Maps, the pin location needs to be in the "longitude latitude" format.
The `iconType` specifies the type of pin to create. It can have the following values:

* `default` – The default pin icon.
-* `none` – No icon is displayed, only labels will be rendered.
+* `none` – No icon is displayed, only labels are rendered.
* `custom` – Specifies that a custom icon is to be used. A URL pointing to the icon image can be added to the end of the `pins` parameter after the pin location information.
* `{udid}` – A Unique Data ID (UDID) for an icon stored in the Azure Maps Data Storage platform.
Add path styles with the `optionName:value` format, separate multiple styles by
* `geodesic` – Indicates if the path should be a line that follows the curvature of the earth.
* `weight` – The thickness of the path line in pixels.
-Add a red line opacity and pixel thickness to the map between the coordinates, in the URL parameter. For the example below, the line has a 50% opacity and a thickness of four pixels. The coordinates are longitude: -110, latitude: 45 and longitude: -100, latitude: 50.
+Use the URL parameter to add a red line between the coordinates, with a given opacity and pixel thickness. In the following example, the line has 50% opacity and a thickness of four pixels. The coordinates are longitude: -110, latitude: 45 and longitude: -100, latitude: 50.
```text
&path=color:0xFF000088|weight:4|45,-110|50,-100
```
Add lines and polygons to a static map image by specifying the `path` parameter
```text
&path=pathStyles||pathLocation1|pathLocation2|...
```
-When it comes to path locations, Azure Maps requires the coordinates to be in "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format. Azure Maps doesn't support encoded paths or addresses for points. Upload larger data sets as a GeoJSON file into the Azure Maps Data Storage API as documented [here](how-to-render-custom-data.md#upload-pins-and-path-data).
+When it comes to path locations, Azure Maps requires the coordinates to be in "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format. Azure Maps doesn't support encoded paths or addresses for points. For more information on how to upload larger data sets as a GeoJSON file into the Azure Maps Data Storage API, see [Upload pins and path data].
Add path styles with the `optionNameValue` format. Separate multiple styles by pipe (\|) characters, like this `optionName1Value1|optionName2Value2`. The option names and values aren't separated. Use the following style option names to style paths in Azure Maps:
Add path styles with the `optionNameValue` format. Separate multiple styles by p
* `lw` – The width of the line in pixels.
* `ra` – Specifies a circle's radius in meters.
-Add a red line opacity and pixel thickness between the coordinates, in the URL parameter. For the example below, the line has 50% opacity and a thickness of four pixels. The coordinates have the following values: longitude: -110, latitude: 45 and longitude: -100, latitude: 50.
+Use the URL parameter to add a red line between the coordinates, with a given opacity and pixel thickness. In the following example, the line has 50% opacity and a thickness of four pixels. The coordinates have the following values: longitude: -110, latitude: 45 and longitude: -100, latitude: 50.
```text
&path=lcFF0000|la.5|lw4||-110 45|-100 50
```
Add a red line opacity and pixel thickness between the coordinates, in the URL p
Azure Maps provides the distance matrix API. Use this API to calculate the travel times and the distances between a set of locations, with a distance matrix. It's comparable to the distance matrix API in Google Maps.

-- [**Route matrix**](/rest/api/maps/route/postroutematrixpreview): Asynchronously calculates travel times and distances for a set of origins and destinations. Supports up to 700 cells per request. That's the number of origins multiplied by the number of destinations. With that constraint in mind, examples of possible matrix dimensions are: 700x1, 50x10, 10x10, 28x25, 10x70.
+* **[Route matrix]**: Asynchronously calculates travel times and distances for a set of origins and destinations. Supports up to 700 cells per request. That's the number of origins multiplied by the number of destinations. With that constraint in mind, examples of possible matrix dimensions are: 700x1, 50x10, 10x10, 28x25, 10x70.
> [!NOTE]
-> A request to the distance matrix API can only be made using a POST request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
+> A request to the distance matrix API can only be made using a `POST` request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
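
As a sketch, a minimal synchronous matrix request posts the origins and destinations as GeoJSON `MultiPoint` geometries, with coordinates in longitude,latitude order; the endpoint shown is the synchronous variant and the coordinates are illustrative:

```text
POST https://atlas.microsoft.com/route/matrix/sync/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}

{
  "origins": { "type": "MultiPoint", "coordinates": [[-122.3321, 47.6062]] },
  "destinations": { "type": "MultiPoint", "coordinates": [[-122.1350, 47.6450]] }
}
```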
This table cross-references the Google Maps API parameters with the comparable Azure Maps API parameters.
This table cross-references the Google Maps API parameters with the comparable A
| `arrival_time` | `arriveAt` |
| `avoid` | `avoid` |
| `departure_time` | `departAt` |
-| `destinations` | `destination` – specify in the POST request body as GeoJSON. |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `destinations` | `destination` – specify in the `POST` request body as GeoJSON. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `mode` | `travelMode` |
-| `origins` | `origins` – specify in the POST request body as GeoJSON. |
+| `origins` | `origins` – specify in the `POST` request body as GeoJSON. |
| `region` | *N/A* – This feature is geocoding related. Use the `countrySet` parameter when using the Azure Maps geocoding API. |
| `traffic_model` | *N/A* – Can only specify if traffic data should be used with the `traffic` parameter. |
| `transit_mode` | *N/A* – Transit-based distance matrices aren't currently supported. |
This table cross-references the Google Maps API parameters with the comparable A
> [!TIP]
> All the advanced routing options available in the Azure Maps routing API are supported in the Azure Maps distance matrix API. Advanced routing options include: truck routing, engine specifications, and so on.
-Review the [best practices for routing](how-to-use-best-practices-for-routing.md) documentation.
+For more information, see [best practices for routing].
## Get a time zone

Azure Maps provides an API for retrieving the time zone of a coordinate. The Azure Maps time zone API is comparable to the time zone API in Google Maps:

-- [**Time zone by coordinate**](/rest/api/maps/timezone/gettimezonebycoordinates): Specify a coordinate and receive the time zone details of the coordinate.
+* **[Time zone by coordinate](/rest/api/maps/timezone/gettimezonebycoordinates)**: Specify a coordinate and receive the time zone details of the coordinate (a sample request follows).
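
A representative request, with an illustrative coordinate and a placeholder key:

```text
GET https://atlas.microsoft.com/timezone/byCoordinates/json?api-version=1.0&query=47.6062,-122.3321&subscription-key={Your-Azure-Maps-Subscription-key}
```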
This table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.

| Google Maps API parameter | Comparable Azure Maps API parameter |
|--|--|
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `location` | `query` |
| `timestamp` | `timeStamp` |

In addition to this API, Azure Maps provides many time zone APIs. These APIs convert the time based on the names or the IDs of the time zone:

-- [**Time zone by ID**](/rest/api/maps/timezone/gettimezonebyid): Returns current, historical, and future time zone information for the specified IANA time zone ID.
-- [**Time zone Enum IANA**](/rest/api/maps/timezone/gettimezoneenumiana): Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
-- [**Time zone Enum Windows**](/rest/api/maps/timezone/gettimezoneenumwindows): Returns a full list of Windows Time Zone IDs.
-- [**Time zone IANA version**](/rest/api/maps/timezone/gettimezoneianaversion): Returns the current IANA version number used by Azure Maps.
-- [**Time zone Windows to IANA**](/rest/api/maps/timezone/gettimezonewindowstoiana): Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
+* **[Time zone by ID]**: Returns current, historical, and future time zone information for the specified IANA time zone ID (see the example request after this list).
+* **[Time zone Enum IANA]**: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
+* **[Time zone Enum Windows]**: Returns a full list of Windows Time Zone IDs.
+* **[Time zone IANA version]**: Returns the current IANA version number used by Azure Maps.
+* **[Time zone Windows to IANA]**: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
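
For example, looking up a time zone by IANA ID is a single `GET` request; the ID shown is illustrative:

```text
GET https://atlas.microsoft.com/timezone/byId/json?api-version=1.0&query=America/Los_Angeles&subscription-key={Your-Azure-Maps-Subscription-key}
```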
## Client libraries

Azure Maps provides client libraries for the following programming languages:
-* JavaScript, TypeScript, Node.js – [documentation](how-to-use-services-module.md) \| [npm package](https://www.npmjs.com/package/azure-maps-rest)
+* JavaScript, TypeScript, Node.js – [documentation] \| [npm package]
These open-source client libraries are for other programming languages:
-* .NET Standard 2.0 – [GitHub project](https://github.com/perfahlen/AzureMapsRestServices) \| [NuGet package](https://www.nuget.org/packages/AzureMapsRestToolkit/)
+* .NET Standard 2.0 – [GitHub project] \| [NuGet package]
## Clean up resources
Learn more about Azure Maps REST
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[free account]: https://azure.microsoft.com/free/
[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Route]: /rest/api/maps/route
+[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
+[Search]: /rest/api/maps/search
+[Calculate routes and directions]: #calculate-routes-and-directions
+[Reverse geocode a coordinate]: #reverse-geocode-a-coordinate
+[Render]: /rest/api/maps/render/getmapimage
+[Time Zone]: /rest/api/maps/timezone
+[Elevation]: /rest/api/maps/elevation
+[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
+[Spatial operations]: /rest/api/maps/spatial
+[Traffic]: /rest/api/maps/traffic
+[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
+[best practices for search]: how-to-use-best-practices-for-search.md
+
+[Localization support in Azure Maps]: supported-languages.md
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[supported search categories]: supported-search-categories.md
+
+[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
+[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
+[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
+[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
+
+[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
+[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
+[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
+
+[POI search]: /rest/api/maps/search/getsearchpoi
+[POI category search]: /rest/api/maps/search/getsearchpoicategory
+[Nearby search]: /rest/api/maps/search/getsearchnearby
+[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[Search along route]: /rest/api/maps/search/postsearchalongroute
+
+[supporting points]: /rest/api/maps/route/postroutedirections#supportingpoints
+[Calculate route]: /rest/api/maps/route/getroutedirections
+[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
+
+[calculating routable ranges]: /rest/api/maps/route/getrouterange
+[best practices for routing]: how-to-use-best-practices-for-routing.md
+[Map image render]: /rest/api/maps/render/getmapimagerytile
+[Render custom data on a raster map]: how-to-render-custom-data.md
+
+[Map tile]: /rest/api/maps/render/getmaptile
+[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
+[Upload pins and path data]: how-to-render-custom-data.md#upload-pins-and-path-data
+[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
+[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
+[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
+[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
+[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
+
+[documentation]: how-to-use-services-module.md
+[npm package]: https://www.npmjs.com/package/azure-maps-rest
+[GitHub project]: https://github.com/perfahlen/AzureMapsRestServices
+[NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
# Tutorial: Migrate from Google Maps to Azure Maps
-This article provides insights on how to migrate web, mobile and server-based applications from Google Maps to the Microsoft Azure Maps platform. This tutorial includes comparative code samples, migration suggestions, and best practices for migrating to Azure Maps. In this tutorial, you'll learn:
+This article provides insights on how to migrate web, mobile, and server-based applications from Google Maps to the Microsoft Azure Maps platform. This tutorial includes comparative code samples, migration suggestions, and best practices for migrating to Azure Maps. This tutorial demonstrates:
> [!div class="checklist"]
> * High-level comparison for equivalent Google Maps features available in Azure Maps.
If you don't have an Azure subscription, create a [free account] before you begi
* A [subscription key]

> [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
+> For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps].
## Azure Maps platform overview
Azure Maps provides developers from all industries powerful geospatial capabilit
## High-level platform comparison
-The table provides a high-level list of Azure Maps features, which correspond to Google Maps features. This list doesn't show all Azure Maps features. Additional Azure Maps features include: accessibility, geofencing, isochrones, spatial operations, direct map tile access, batch services, and data coverage comparisons (that is, imagery coverage).
+The table provides a high-level list of Azure Maps features, which correspond to Google Maps features. This list doesn't show all Azure Maps features. Other Azure Maps features include: accessibility, geofencing, isochrones, spatial operations, direct map tile access, batch services, and data coverage comparisons (that is, imagery coverage).
| Google Maps feature | Azure Maps support |
|--|:--:|
The table provides a high-level list of Azure Maps features, which correspond to
| Maps Embedded API | N/A |
| Map URLs | N/A |
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+<sup>1</sup> Azure Maps [Elevation services] have been [deprecated]. For more information on how to include this functionality in Azure Maps, see [Create elevation data & services].
Google Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and Azure Active Directory authentication. Azure Active Directory authentication provides more security features, compared to the basic key-based authentication.
When migrating to Azure Maps from Google Maps, consider the following points abo
* Azure Maps charges for the usage of interactive maps, which is based on the number of loaded map tiles. On the other hand, Google Maps charges for loading the map control. In the interactive Azure Maps SDKs, map tiles are automatically cached to reduce the development cost. One Azure Maps transaction is generated for every 15 map tiles that are loaded. The interactive Azure Maps SDKs use 512-pixel tiles, and on average, generate one or fewer transactions per page view.
* Often, it's more cost effective to replace static map images from Google Maps web services with the Azure Maps Web SDK. The Azure Maps Web SDK uses map tiles. Unless the user pans and zooms the map, the service often generates only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming, if desired. Additionally, the Azure Maps web SDK provides a lot more visualization options than the static map web service.
-* Azure Maps allows data from its platform to be stored in Azure. Also, data can be cached elsewhere for up to six months as per the [terms of use](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46).
+* Azure Maps allows data from its platform to be stored in Azure. Also, data can be cached elsewhere for up to six months as per the [terms of use].
Here are some related resources for Azure Maps:
-* [Azure Maps pricing page](https://azure.microsoft.com/pricing/details/azure-maps/)
-* [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=azure-maps)
-* [Azure Maps term of use](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46)
- (included in the Microsoft Online Services Terms)
-* [Choose the right pricing tier in Azure Maps](./choose-pricing-tier.md)
+* [Azure Maps pricing page]
+* [Azure pricing calculator]
+* [Choose the right pricing tier in Azure Maps]
+* [Azure Maps term of use] - included in the Microsoft Online Services Terms.
## Suggested migration plan
-The following is a high-level migration plan.
+A high-level migration plan includes:
1. Take inventory of the Google Maps SDKs and services that your application uses. Verify that Azure Maps provides alternative SDKs and services.
-2. If you don't already have one, create an Azure subscription at [https://azure.com](https://azure.com).
-3. Create an Azure Maps account ([documentation](./how-to-manage-account-keys.md)) and authentication key or Azure Active Directory ([documentation](./how-to-manage-authentication.md)).
+2. If you don't already have one, create an [Azure subscription].
+3. Create an [Azure Maps account] and [subscription key] or [Azure Active Directory authentication].
4. Migrate your application code.
5. Test your migrated application.
6. Deploy your migrated application to production.
The following is a high-level migration plan.
To create an Azure Maps account and get access to the Azure Maps platform, follow these steps:
-1. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. Sign in to the [Azure portal](https://portal.azure.com/).
-3. Create an [Azure Maps account](./how-to-manage-account-keys.md).
-4. [Get your Azure Maps subscription key](./how-to-manage-authentication.md#view-authentication-details) or setup Azure Active Directory authentication for enhanced security.
+1. If you don't have an Azure subscription, create a [free account] before you begin.
+2. Sign in to the [Azure portal].
+3. Create an [Azure Maps account].
+4. Get your Azure Maps [subscription key] or [Azure Active Directory authentication] for enhanced security.
## Azure Maps technical resources

Here's a list of useful technical resources for Azure Maps.

-- Overview: [https://azure.com/maps](https://azure.com/maps)
-- Documentation: [https://aka.ms/AzureMapsDocs](./index.yml)
-- Web SDK Code Samples: [https://aka.ms/AzureMapsSamples](https://aka.ms/AzureMapsSamples)
-- Developer Forums: [https://aka.ms/AzureMapsForums](/answers/topics/azure-maps.html)
-- Videos: [https://aka.ms/AzureMapsVideos](/shows/)
-- Blog: [https://aka.ms/AzureMapsBlog](https://aka.ms/AzureMapsBlog)
-- Tech Blog: [https://aka.ms/AzureMapsTechBlog](https://aka.ms/AzureMapsTechBlog)
-- Azure Maps Feedback (UserVoice): [https://aka.ms/AzureMapsFeedback](/answers/topics/25319/azure-maps.html)
-- [Azure Maps Jupyter Notebook](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook)
+* [Azure Maps product page]
+* [Azure Maps product documentation]
+* [Azure Maps Web SDK code samples]
+* [Azure Maps developer forums]
+* [Microsoft learning center shows]
+* [Azure Maps Blog]
+* [Azure Maps Q&A]
## Migration support
-Developers can seek migration support through the [forums](/answers/topics/azure-maps.html) or through one of the many Azure support options: [https://azure.microsoft.com/support/options](https://azure.microsoft.com/support/options)
+Developers can seek migration support through the [Azure Maps developer forums] or through one of the many [Azure support options].
## Clean up resources
Learn the details of how to migrate your Google Maps application with these arti
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[free account]: https://azure.microsoft.com/free/
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Azure subscription]: https://azure.com
+[Azure portal]: https://portal.azure.com/
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
+[terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
+[Azure Maps pricing page]: https://azure.microsoft.com/pricing/details/azure-maps/
+[Azure pricing calculator]: https://azure.microsoft.com/pricing/calculator/?service=azure-maps
+[Azure Maps term of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+
+[Azure Maps product page]: https://azure.com/maps
+[Azure Maps product documentation]: https://aka.ms/AzureMapsDocs
+[Azure Maps Web SDK code samples]: https://aka.ms/AzureMapsSamples
+[Azure Maps developer forums]: https://aka.ms/AzureMapsForums
+[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
+[Azure Maps Blog]: https://aka.ms/AzureMapsBlog
+[Azure Maps Q&A]: https://aka.ms/AzureMapsFeedback
+
+[Azure support options]: https://azure.microsoft.com/support/options
+
+[Elevation services]: /rest/api/maps/elevation
+[deprecated]: https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023
+[Create elevation data & services]: elevation-data-services.md
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (preview)
+### [3.0.0-preview.6] (March 31, 2023)
+
+#### Installation (3.0.0-preview.6)
+
+The preview is available on [npm][3.0.0-preview.6] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.6][3.0.0-preview.6]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.6/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.6/atlas.min.js"></script>
+ ```
+
+#### New features (3.0.0-preview.6)
+
+- Optimized the internal style transform performance.
+
+#### Bug fixes (3.0.0-preview.6)
+
+- Resolved an issue where the first style set request was unauthenticated for `AAD` authentication.
+
+- Eliminated redundant requests during map initialization and on style changed events.
+
### [3.0.0-preview.5] (March 15, 2023)

#### Installation (3.0.0-preview.5)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2 (latest)
+### [2.2.6]
+
+#### Bug fixes (2.2.6)
+
+- Resolved an issue where the first style set request was unauthenticated for `AAD` authentication.
+
+- Eliminated redundant requests during map initialization and on style changed events.
+
### [2.2.5]

#### New features (2.2.5)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"]
> [Azure Maps Blog]
+[3.0.0-preview.6]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.6
[3.0.0-preview.5]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.5
[3.0.0-preview.4]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.4
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3
[3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2
[3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.2.6]: https://www.npmjs.com/package/azure-maps-control/v/2.2.6
[2.2.5]: https://www.npmjs.com/package/azure-maps-control/v/2.2.5
[2.2.4]: https://www.npmjs.com/package/azure-maps-control/v/2.2.4
[2.2.3]: https://www.npmjs.com/package/azure-maps-control/v/2.2.3
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
The following table summarizes the Azure Maps services that generate transaction
| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2)<br>[Data registry](/rest/api/maps/data-registry) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>|
| [Elevation (DEM)](/rest/api/maps/elevation)([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023))| Yes| One request = 2 transactions<br><ul><li>If requesting elevation for a single point, then one request = 1 transaction</li></ul>| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>|
| [Geolocation](/rest/api/maps/geolocation)| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
-| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
+| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
| [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
| [Search v1](/rest/api/maps/search)<br>[Search v2](/rest/api/maps/search-v2) | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction. Note, the billable Search transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> |
| [Spatial](/rest/api/maps/spatial) | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch`, which are non-billable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> |
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
For more information, review the [Azure SDK Lifecycle and Support Policy](https:
> [!NOTE]
> Diagnostic tools often provide better insight into the root cause of a problem when the latest stable SDK version is used.
+## SDK update guidance
+
Support engineers are expected to provide SDK update guidance according to the following table, referencing the current SDK version in use and any alternatives.

|Current SDK version in use |Alternative version available |Update policy for support |
Support engineers are expected to provide SDK update guidance according to the f
> [!WARNING]
> Only commercially reasonable support is provided for Preview versions of the SDK. If a support incident requires escalation to development for further guidance, customers will be asked to use a fully supported SDK version to continue support. Commercially reasonable support does not include an option to engage Microsoft product development resources; technical workarounds may be limited or not possible.
-To see the current version of Application Insights SDKs and previous versions release dates, reference the [release notes](release-notes.md).
+## Release notes
+
+Reference the release notes to see the current version of Application Insights SDKs and previous versions release dates.
+
+- [.NET SDKs (Including ASP.NET, ASP.NET Core, and Logging Adapters)](https://github.com/Microsoft/ApplicationInsights-dotnet/releases)
+- [Python](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/CHANGELOG.md)
+- [Node.js](https://github.com/Microsoft/ApplicationInsights-node.js/releases)
+- [JavaScript](https://github.com/microsoft/ApplicationInsights-JS/releases)
+
+Our [Service Updates](https://azure.microsoft.com/updates/?service=application-insights) also summarize major Application Insights improvements.
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
The new fields are:
LogSource: string,
TimeGenerated: datetime
```
+
+>[!NOTE]
+> [Export](../logs/logs-data-export.md) to Event Hub and Storage Account is not supported if the incoming LogMessage is not a valid JSON. For best performance, we recommend emitting container logs in JSON format.
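
For instance, a container log line emitted as a single JSON object (an illustrative example, with hypothetical field names) parses cleanly and exports without issue:

```text
{"level":"info","msg":"request processed","status":200,"durationMs":42}
```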
+
## Enable the ContainerLogV2 schema

Customers can enable the ContainerLogV2 schema at the cluster level. To enable the ContainerLogV2 schema, configure the cluster's ConfigMap. Learn more about ConfigMap in [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and in [Azure Monitor documentation](./container-insights-agent-config.md#configmap-file-settings-overview). Follow the instructions to configure an existing ConfigMap or to use a new one.
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
na Previously updated : 03/07/2023 Last updated : 03/31/2023
The following diagram demonstrates how customer-managed keys work with Azure Net
> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week from submitting waitlist request. * Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
-* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volume configured using Basic network features. Follow instructions in to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page.
-* Switching from user-assigned identity to the system-assigned identity isn't currently supported.
+* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured using Basic network features. Follow instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) to create a volume.
* MSI automatic certificate renewal isn't currently supported.
* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.**
* To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility.
Before creating your first customer-managed key volume, you must have set up:
* The key vault must have soft delete and purge protection enabled.
* The key must be of type RSA.
* The key vault must have an [Azure Private Endpoint](../private-link/private-endpoint-overview.md).
+  * You need a private endpoint in each VNet you intend to use for Azure NetApp Files volumes.
* The private endpoint must reside in a different subnet than the one delegated to Azure NetApp Files. The subnet must be in the same VNet as the one delegated to Azure NetApp.
+ * The network security group on the Azure NetApp Files delegated subnet must allow incoming traffic from the subnet where the VM mounting Azure NetApp Files volumes is located.
+ * The network security group on the Azure NetApp Files delegated subnet must also allow outgoing traffic to the subnet where the private endpoint is located.
For more information about Azure Key Vault and Azure Private Endpoint, refer to:

* [Quickstart: Create a key vault](../key-vault/general/quick-create-portal.md)
For more information about Azure Key Vault and Azure Private Endpoint, refer to:
* `Microsoft.KeyVault/vaults/keys/decrypt/action`

The user-assigned identity you select is added to your NetApp account. Due to the customizable nature of role-based access control (RBAC), the Azure portal doesn't configure access to the key vault. See [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../key-vault/general/rbac-guide.md) for details on configuring Azure Key Vault.
-1. After selecting **Save** button, you'll receive a notification communicating the status of the operation. If the operation was not successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
+1. After selecting the **Save** button, you'll receive a notification communicating the status of the operation. If the operation was not successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
## Use role-based access control
azure-resource-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/cli-samples.md
- Title: Azure CLI samples
-description: Provides Azure CLI sample scripts to use when working with Azure Managed Applications.
-- Previously updated : 10/25/2017---
-# Azure CLI Samples for Azure Managed Applications
-
-The following table includes links to a sample CLI script for Azure Managed Applications.
-
-| Create managed application | Description |
-| -- | -- |
-| [Define and create a managed application](scripts/managed-application-define-create-cli-sample.md) | Creates a managed application definition in the service catalog and then deploys the managed application from the service catalog. |
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
Last updated 03/21/2023+ # Quickstart: Deploy a service catalog managed application
azure-resource-manager Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/powershell-samples.md
- Title: Azure PowerShell samples
-description: Provides Azure PowerShell sample scripts to use when working with Azure Managed Applications.
--- Previously updated : 10/27/2017--
-# Azure PowerShell samples
-
-The following table includes links to scripts for Azure Managed Applications that use the Azure PowerShell.
-
-| Create managed application | Description |
-| -- | -- |
-| [Create managed application definition](scripts/managed-application-powershell-sample-create-definition.md) | Creates a managed application definition in the service catalog. |
-| [Deploy managed application](scripts/managed-application-poweshell-sample-create-application.md) | Deploys a managed application from the service catalog. |
-|**Update managed resource group**| **Description** |
-| [Get resources in managed resource group and resize VMs](scripts/managed-application-powershell-sample-get-managed-group-resize-vm.md) | Gets resources from the managed resource group, and resizes the VMs. |
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
description: Describes how to create and publish an Azure Managed Application in
-+ Last updated 03/21/2023
azure-resource-manager Publish Service Catalog Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-bring-your-own-storage.md
description: Describes how to bring your own storage to create and publish an Az
-+ Last updated 03/21/2023
azure-resource-manager Managed Application Define Create Cli Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-define-create-cli-sample.md
- Title: Create managed application definition - Azure CLI
-description: Provides an Azure CLI script sample that publishes a managed application definition to a service catalog and then deploys a managed application definition from the service catalog.
-- Previously updated : 03/07/2022----
-# Create a managed application definition to service catalog and deploy managed application from service catalog with Azure CLI
-
-This script publishes a managed application definition to a service catalog and then deploys a managed application definition from the service catalog.
---
-## Sample script
--
-### Run the script
--
-## Clean up resources
--
-```azurecli
-az group delete --name $appResourceGroup -y
-az group delete --name $appDefinitionResourceGroup -y
-```
-
-## Sample reference
-
-This script uses the following command to create the managed application definition. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az managedapp definition create](/cli/azure/managedapp/definition#az-managedapp-definition-create) | Create a managed application definition. Provide the package that contains the required files. |
-
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-resource-manager Managed Application Powershell Sample Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-powershell-sample-create-definition.md
- Title: Create managed application definition - Azure PowerShell
-description: Provides an Azure PowerShell script sample that creates a managed application definition in the Azure subscription.
--- Previously updated : 10/27/2017---
-# Create a managed application definition with PowerShell
-
-This script publishes a managed application definition to a service catalog.
--
-## Sample script
-
-[!code-powershell[main](../../../../powershell_scripts/managed-applications/create-definition/create-definition.ps1 "Create definition")]
--
-## Script explanation
-
-This script uses the following command to create the managed application definition. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzManagedApplicationDefinition](/powershell/module/az.resources/new-azmanagedapplicationdefinition) | Create a managed application definition. Provide the package that contains the required files. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on PowerShell, see [Azure PowerShell documentation](/powershell/azure/get-started-azureps).
azure-resource-manager Managed Application Powershell Sample Get Managed Group Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-powershell-sample-get-managed-group-resize-vm.md
- Title: Get managed resource group & resize VMs - Azure PowerShell
-description: Provides Azure PowerShell sample script that gets a managed resource group for an Azure Managed Application. The script resizes VMs.
--- Previously updated : 10/27/2017---
-# Get resources in a managed resource group and resize VMs with PowerShell
-
-This script retrieves resources from a managed resource group, and resizes the VMs in that resource group.
--
-## Sample script
-
-[!code-powershell[main](../../../../powershell_scripts/managed-applications/get-application/get-application.ps1 "Get application")]
--
-## Script explanation
-
-This script uses the following commands to deploy the managed application. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [Get-AzManagedApplication](/powershell/module/az.resources/get-azmanagedapplication) | List managed applications. Provide resource group name to focus the results. |
-| [Get-AzResource](/powershell/module/az.resources/get-azresource) | List resources. Provide a resource group and resource type to focus the result. |
-| [Update-AzVM](/powershell/module/az.compute/update-azvm) | Update a virtual machine's size. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on PowerShell, see [Azure PowerShell documentation](/powershell/azure/get-started-azureps).
azure-resource-manager Managed Application Poweshell Sample Create Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-poweshell-sample-create-application.md
- Title: Azure PowerShell script sample - Deploy a managed application
-description: Provides Azure PowerShell sample script sample that deploys a managed application definition to the subscription.
--- Previously updated : 10/27/2017---
-# Deploy a managed application for a service catalog with PowerShell
-
-This script deploys a managed application definition from the service catalog.
---
-## Sample script
-
-[!code-powershell[main](../../../../powershell_scripts/managed-applications/create-application/create-application.ps1 "Create application")]
--
-## Script explanation
-
-This script uses the following command to deploy the managed application. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzManagedApplication](/powershell/module/az.resources/new-azmanagedapplication) | Create a managed application. Provide the definition ID and parameters for the template. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on PowerShell, see [Azure PowerShell documentation](/powershell/azure/get-started-azureps).
azure-resource-manager Manage Resource Groups Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-cli.md
Title: Manage resource groups - Azure CLI
description: Use Azure CLI to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. Previously updated : 09/10/2021- Last updated : 03/31/2023
Learn how to use Azure CLI with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using Azure CLI](manage-resources-cli.md).
+## Prerequisites
+
+* Azure CLI. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+* After installing, sign in for the first time. For more information, see [How to sign into the Azure CLI](/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli).
+ ## What is a resource group A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to add resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
For more information about how Azure Resource Manager orders the deletion of res
You can deploy Azure resources by using Azure CLI, or by deploying an Azure Resource Manager (ARM) template or Bicep file.
+### Deploy resources by using storage operations
+ The following example creates a storage account. The name you provide for the storage account must be unique across Azure. ```azurecli-interactive az storage account create --resource-group exampleGroup --name examplestore --location westus --sku Standard_LRS --kind StorageV2 ```
+### Deploy resources by using an ARM template or Bicep file
+ To deploy an ARM template or Bicep file, use [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create). ```azurecli-interactive az deployment group create --resource-group exampleGroup --template-file storage.bicep ```
+The following example shows the Bicep file named `storage.bicep` that you're deploying:
+
+```bicep
+@minLength(3)
+@maxLength(11)
+param storagePrefix string
+
+var uniqueStorageName = concat(storagePrefix, uniqueString(resourceGroup().id))
+
+resource uniqueStorage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: uniqueStorageName
+ location: 'eastus'
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ supportsHttpsTrafficOnly: true
+ }
+}
+```
+ For more information about deploying an ARM template, see [Deploy resources with Resource Manager templates and Azure CLI](../templates/deploy-cli.md). For more information about deploying a Bicep file, see [Deploy resources with Bicep and Azure CLI](../bicep/deploy-cli.md).
azure-resource-manager Manage Resource Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-powershell.md
Title: Manage resource groups - Azure PowerShell
description: Use Azure PowerShell to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. Previously updated : 09/10/2021- Last updated : 03/31/2023
Learn how to use Azure PowerShell with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using Azure PowerShell](manage-resources-powershell.md).
+## Prerequisites
+
+* Azure PowerShell. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+
+* After installing, sign in for the first time. For more information, see [Sign in](/powershell/azure/install-az-ps#sign-in).
+ ## What is a resource group A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to add resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
For more information about how Azure Resource Manager orders the deletion of res
You can deploy Azure resources by using Azure PowerShell, or by deploying an Azure Resource Manager (ARM) template or Bicep file.
+### Deploy resources by using storage operations
+ The following example creates a storage account. The name you provide for the storage account must be unique across Azure. ```azurepowershell-interactive New-AzStorageAccount -ResourceGroupName exampleGroup -Name examplestore -Location westus -SkuName "Standard_LRS" ```
+### Deploy resources by using an ARM template or Bicep file
+ To deploy an ARM template or Bicep file, use [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment). ```azurepowershell-interactive New-AzResourceGroupDeployment -ResourceGroupName exampleGroup -TemplateFile storage.bicep ```
+The following example shows the Bicep file named `storage.bicep` that you're deploying:
+
+```bicep
+@minLength(3)
+@maxLength(11)
+param storagePrefix string
+
+var uniqueStorageName = concat(storagePrefix, uniqueString(resourceGroup().id))
+
+resource uniqueStorage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: uniqueStorageName
+ location: 'eastus'
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ supportsHttpsTrafficOnly: true
+ }
+}
+```
+ For more information about deploying an ARM template, see [Deploy resources with ARM templates and Azure PowerShell](../templates/deploy-powershell.md). For more information about deploying a Bicep file, see [Deploy resources with Bicep and Azure PowerShell](../bicep/deploy-powershell.md).
To get the locks for a resource group, use [Get-AzResourceLock](/powershell/modu
Get-AzResourceLock -ResourceGroupName exampleGroup ```
+To delete a lock, use [Remove-AzResourceLock](/powershell/module/az.resources/remove-azresourcelock).
+
+```azurepowershell-interactive
+$lockId = (Get-AzResourceLock -ResourceGroupName exampleGroup).LockId
+Remove-AzResourceLock -LockId $lockId
+```
+ For more information, see [Lock resources with Azure Resource Manager](lock-resources.md). ## Tag resource groups
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
description: An overview of connection string in Azure SignalR Service, how to g
Previously updated : 03/25/2022 Last updated : 03/29/2023 # Connection string in Azure SignalR Service
-Connection string is an important concept that contains information about how to connect to SignalR service. In this article, you'll learn the basics of connection string and how to configure it in your application.
+A connection string contains information about how to connect to Azure SignalR Service (ASRS). In this article, you learn the basics of connection strings and how to configure one in your application.
-## What is connection string
+## What is a connection string
-When an application needs to connect to Azure SignalR Service, it will need the following information:
+When an application needs to connect to Azure SignalR Service, it needs the following information:
-- The HTTP endpoint of the SignalR service instance-- How to authenticate with the service endpoint
+- The HTTP endpoint of the SignalR service instance.
+- The way to authenticate with the service endpoint.
-Connection string contains such information.
+A connection string contains such information.
-## What connection string looks like
+## What a connection string looks like
-A connection string consists of a series of key/value pairs separated by semicolons(;) and we use an equal sign(=) to connect each key and its value. Keys aren't case sensitive.
+A connection string consists of a series of key/value pairs separated by semicolons (;). An equal sign (=) connects each key and its value. Keys aren't case sensitive.
For example, a typical connection string may look like this:
-```
-Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
-```
+> Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
-You can see in the connection string, there are two main information:
+The connection string contains:
-- `Endpoint=https://<resource_name>.service.signalr.net` is the endpoint URL of the resource-- `AccessKey=<access_key>` is the key to authenticate with the service. When access key is specified in connection string, SignalR service SDK will use it to generate a token that can be validated by the service.
+- `Endpoint=https://<resource_name>.service.signalr.net`: The endpoint URL of the resource.
+- `AccessKey=<access_key>`: The key to authenticate with the service. When an access key is specified in the connection string, the SignalR Service SDK uses it to generate a token that is validated by the service.
+- `Version`: The version of the connection string. The default value is `1.0`.
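Parsing the format is straightforward. The following is a minimal C# sketch (a hypothetical `Parse` helper for illustration, not part of any SDK) that splits such a string into case-insensitive key/value pairs:

```cs
// Hypothetical helper, not part of the SignalR SDK: split a connection string
// into case-insensitive key/value pairs. Pairs are separated by ';' and each
// key is separated from its value by the first '=' (values may contain '=').
// Assumes a well-formed string where every pair contains an equal sign.
using System;
using System.Collections.Generic;
using System.Linq;

static Dictionary<string, string> Parse(string connectionString) =>
    connectionString
        .Split(';', StringSplitOptions.RemoveEmptyEntries)
        .Select(pair => pair.Split('=', 2))
        .ToDictionary(kv => kv[0].Trim(), kv => kv[1].Trim(),
            StringComparer.OrdinalIgnoreCase);

// Example: Parse("Endpoint=https://foo.service.signalr.net;AccessKey=abc=;Version=1.0;")
// yields ["endpoint"] == "https://foo.service.signalr.net" and ["accesskey"] == "abc=".
```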
The following table lists all the valid names for key/value pairs in the connection string.
-| key | Description | Required | Default value | Example value |
-| -- | -- | -- | -- | |
-| Endpoint | The URI of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` |
-| Port | The port that your ASRS instance is listening on. | N | 80/443, depends on endpoint uri schema | 8080 |
-| Version | The version of given connection string. | N | 1.0 | 1.0 |
-| ClientEndpoint | The URI of your reverse proxy, like App Gateway or API Management | N | null | `https://foo.bar` |
-| AuthType | The auth type, we'll use AccessKey to authorize requests by default. **Case insensitive** | N | null | azure, azure.msi, azure.app |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| Endpoint | The URL of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` |
+| Port | The port that your ASRS instance is listening on. | N | 80/443, depends on the endpoint URI schema | 8080 |
+| Version | The version of the given connection string. | N | 1.0 | 1.0 |
+| ClientEndpoint | The URI of your reverse proxy, such as Application Gateway or API Management. | N | null | `https://foo.bar` |
+| AuthType | The auth type. By default, the service uses the AccessKey to authorize requests. **Case insensitive**. | N | null | azure, azure.msi, azure.app |
### Use AccessKey
-Local auth method will be used when `AuthType` is set to null.
+The local auth method is used when `AuthType` is set to null.
-| key | Description | Required | Default value | Example value |
-| | - | -- | - | - |
-| AccessKey | The key string in base64 format for building access token usage. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| AccessKey | The key string, in base64 format, used to build access tokens. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
### Use Azure Active Directory
-Azure AD auth method will be used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`.
+The Azure AD auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`.
-| key | Description | Required | Default value | Example value |
+| Key| Description| Required | Default value | Example value|
| -- | | -- | - | |
-| ClientId | A guid represents an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` |
-| TenantId | A guid represents an organization in Azure Active Directory. | N | null | `00000000-0000-0000-0000-000000000000` |
-| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` |
-| ClientCertPath | The absolute path of a cert file to an Azure application instance. | N | null | `/usr/local/cert/app.cert` |
+| ClientId | A GUID of an Azure application or an Azure identity. | N| null| `00000000-0000-0000-0000-000000000000` |
+| TenantId | A GUID of an organization in Azure Active Directory. | N| null| `00000000-0000-0000-0000-000000000000` |
+| ClientSecret | The password of an Azure application instance. | N| null| `***********************.****************` |
+| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N| null| `/usr/local/cert/app.cert` |
-Different `TokenCredential` will be used to generate Azure AD tokens with the respect of params you have given.
+A different `TokenCredential` is used to generate Azure AD tokens, depending on the parameters you provide. The rules are listed below and summarized in a sketch after the list.
- `type=azure`
- [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) will be used.
+ [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) is used.
- ```
+ ```text
Endpoint=xxx;AuthType=azure ``` - `type=azure.msi`
- 1. User-assigned managed identity will be used if `clientId` has been given in connection string.
+ 1. A user-assigned managed identity is used if `clientId` has been given in the connection string.
```
- Endpoint=xxx;AuthType=azure.msi;ClientId=00000000-0000-0000-0000-000000000000
+ Endpoint=xxx;AuthType=azure.msi;ClientId=<client_id>
```
- - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) will be used.
+ - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) is used.
- 2. Otherwise system-assigned managed identity will be used.
+ 1. Otherwise, a system-assigned managed identity is used.
- ```
+ ```text
Endpoint=xxx;AuthType=azure.msi; ```
- - [ManagedIdentityCredential()](/dotnet/api/azure.identity.managedidentitycredential) will be used.
-
+ - [ManagedIdentityCredential()](/dotnet/api/azure.identity.managedidentitycredential) is used.
- `type=azure.app` `clientId` and `tenantId` are required to use [Azure AD application with service principal](../active-directory/develop/howto-create-service-principal-portal.md).
- 1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) will be used if `clientSecret` is given.
- ```
- Endpoint=xxx;AuthType=azure.msi;ClientId=00000000-0000-0000-0000-000000000000;TenantId=00000000-0000-0000-0000-000000000000;clientScret=******
- ```
+ 1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) is used if `clientSecret` is given.
- 2. [ClientCertificateCredential(clientId, tenantId, clientCertPath)](/dotnet/api/azure.identity.clientcertificatecredential) will be used if `clientCertPath` is given.
+ ```text
+ Endpoint=xxx;AuthType=azure.app;ClientId=<client_id>;TenantId=<tenant_id>;ClientSecret=<client_secret>
```
- Endpoint=xxx;AuthType=azure.msi;ClientId=00000000-0000-0000-0000-000000000000;TenantId=00000000-0000-0000-0000-000000000000;clientCertPath=/path/to/cert
+
+ 1. [ClientCertificateCredential(clientId, tenantId, clientCertPath)](/dotnet/api/azure.identity.clientcertificatecredential) is used if `clientCertPath` is given.
+
+ ```text
+ Endpoint=xxx;AuthType=azure.app;ClientId=<client_id>;TenantId=<tenant_id>;ClientCertPath=</path/to/cert>
```
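As a hedged illustration of these rules (not the SDK's actual implementation), the credential choice can be sketched in C# with the Azure.Identity types named above:

```cs
// Illustrative sketch only; the SignalR Service SDK's real logic may differ.
// Parameter names mirror the connection string keys described above.
using System;
using Azure.Core;
using Azure.Identity;

static TokenCredential SelectCredential(
    string authType, string clientId, string tenantId,
    string clientSecret, string clientCertPath) => authType switch
{
    "azure" => new DefaultAzureCredential(),
    "azure.msi" when clientId != null => new ManagedIdentityCredential(clientId),
    "azure.msi" => new ManagedIdentityCredential(),
    "azure.app" when clientSecret != null =>
        new ClientSecretCredential(tenantId, clientId, clientSecret),
    "azure.app" when clientCertPath != null =>
        new ClientCertificateCredential(tenantId, clientId, clientCertPath),
    _ => throw new ArgumentException($"Unsupported AuthType: {authType}")
};
```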
-## How to get my connection strings
+## How to get connection strings
### From Azure portal Open your SignalR service resource in Azure portal and go to `Keys` tab.
-You'll see two connection strings (primary and secondary) in the following format:
+You see two connection strings (primary and secondary) in the following format:
> Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
You can also use Azure CLI to get the connection string:
az signalr key list -g <resource_group> -n <resource_name> ```
-### For using Azure AD application
+## Connect with an Azure AD application
-You can use [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
+You can use an [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
-To use Azure AD authentication, you need to remove `AccessKey` from connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including client ID, client secret and tenant ID. The connection string will look as follows:
+To use Azure AD authentication, you need to remove `AccessKey` from the connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including the client ID, client secret, and tenant ID. The connection string looks as follows:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.app;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0; ```
-For more information about how to authenticate using Azure AD application, see this [article](signalr-howto-authorize-application.md).
+For more information about how to authenticate using Azure AD application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md).
-### For using Managed identity
+## Authenticate with a managed identity
-You can also use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
+You can also use a system-assigned or user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
-There are two types of managed identities, to use system assigned identity, you just need to add `AuthType=azure.msi` to the connection string:
+To use a system-assigned identity, add `AuthType=azure.msi` to the connection string:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;Version=1.0; ```
-SignalR service SDK will automatically use the identity of your app server.
+The SignalR service SDK automatically uses the identity of your app server.
-To use user assigned identity, you also need to specify the client ID of the managed identity:
+To use a user-assigned identity, include the client ID of the managed identity in the connection string:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;ClientId=<client_id>;Version=1.0; ```
-For more information about how to configure managed identity, see this [article](signalr-howto-authorize-managed-identity.md).
+For more information about how to configure managed identity, see [Authorize from Managed Identity](signalr-howto-authorize-managed-identity.md).
> [!NOTE]
-> It's highly recommended to use Azure AD to authenticate with SignalR service as it's a more secure way comparing to using access key. If you don't use access key authentication at all, consider to completely disable it (go to Azure portal -> Keys -> Access Key -> Disable). If you still use access key, it's highly recommended to rotate them regularly (more information can be found [here](signalr-howto-key-rotation.md)).
-
-### Use connection string generator
+> It's highly recommended to use managed identity to authenticate with SignalR service, as it's more secure than using access keys. If you don't use access key authentication, consider disabling it completely (go to Azure portal -> Keys -> Access Key -> Disable). If you still use access keys, it's highly recommended to rotate them regularly. For more information, see [Rotate access keys for Azure SignalR Service](signalr-howto-key-rotation.md).
-It may be cumbersome and error-prone to build connection strings manually.
+### Use the connection string generator
-To avoid making mistakes, we built a tool to help you generate connection string with Azure AD identities like `clientId`, `tenantId`, etc.
-
-To use connection string generator, open your SignalR resource in Azure portal, go to `Connection strings` tab:
+It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Azure AD identities like `clientId`, `tenantId`, etc. To use the tool, open your SignalR instance in the Azure portal and select **Connection strings** from the left menu.
:::image type="content" source="media/concept-connection-string/generator.png" alt-text="Screenshot showing connection string generator of SignalR service in Azure portal.":::
-In this page you can choose different authentication types (access key, managed identity or Azure AD application) and input information like client endpoint, client ID, client secret, etc. Then connection string will be automatically generated. You can copy and use it in your application.
+On this page, you can choose different authentication types (access key, managed identity, or Azure AD application) and enter information like the client endpoint, client ID, and client secret. The connection string is then automatically generated. You can copy and use it in your application.
> [!NOTE]
-> Everything you input on this page won't be saved after you leave the page (since they're only client side information), so please copy and save it in a secure place for your application to use.
+> Information you enter won't be saved after you leave the page. You will need to copy and save your connection string to use in your application.
-> [!NOTE]
-> For more information about how access tokens are generated and validated, see this [article](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md#authenticate-via-azure-signalr-service-accesskey).
+For more information about how access tokens are generated and validated, see [Authenticate via Azure Active Directory Token](signalr-reference-data-plane-rest-api.md#authenticate-via-azure-active-directory-token-azure-ad-token) in the [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md).
## Client and server endpoints
-Connection string contains the HTTP endpoint for app server to connect to SignalR service. This is also the endpoint server will return to clients in negotiate response, so client can also connect to the service.
+A connection string contains the HTTP endpoint for the app server to connect to SignalR service. The server returns the HTTP endpoint to the clients in a negotiate response, so the client can connect to the service.
-But in some applications there may be an extra component in front of SignalR service and all client connections need to go through that component first (to gain extra benefits like network security, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides such functionality).
+In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security.
-In such case, the client will need to connect to an endpoint different than SignalR service. Instead of manually replace the endpoint at client side, you can add `ClientEndpoint` to connecting string:
+In such cases, the client needs to connect to an endpoint other than the SignalR service. Instead of manually replacing the endpoint on the client side, you can add `ClientEndpoint` to the connection string:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ClientEndpoint=https://<url_to_app_gateway>;Version=1.0; ```
-Then app server will return the right endpoint url in negotiate response for client to connect.
-
-> [!NOTE]
-> For more information about how clients get service url through negotiate, see this [article](signalr-concept-internals.md#client-connections).
+The app server returns a response to the client's negotiate request containing the correct endpoint URL for the client to connect to. For more information about client connections, see [Azure SignalR Service internals](signalr-concept-internals.md#client-connections).
-Similarly, when server wants to make [server connections](signalr-concept-internals.md#server-connections) or call [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to service, SignalR service may also be behind another service like Application Gateway. In that case, you can use `ServerEndpoint` to specify the actual endpoint for server connections and REST APIs:
+Similarly, when the server wants to make [server connections](signalr-concept-internals.md#azure-signalr-service-internals) or call [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to the service, the SignalR service may also be behind another service like [Azure Application Gateway](../application-gateway/overview.md). In that case, you can use `ServerEndpoint` to specify the actual endpoint for server connections and REST APIs:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ServerEndpoint=https://<url_to_app_gateway>;Version=1.0; ``` ## Configure connection string in your application
-There are two ways to configure connection string in your application.
+There are two ways to configure a connection string in your application.
You can set the connection string when calling `AddAzureSignalR()` API:
You can set the connection string when calling `AddAzureSignalR()` API:
services.AddSignalR().AddAzureSignalR("<connection_string>"); ```
-Or you can call `AddAzureSignalR()` without any arguments, then service SDK will read the connection string from a config named `Azure:SignalR:ConnectionString` in your [config providers](/dotnet/core/extensions/configuration-providers).
+Or you can call `AddAzureSignalR()` without any arguments. The service SDK then reads the connection string from a configuration entry named `Azure:SignalR:ConnectionString` in your [configuration providers](/dotnet/core/extensions/configuration-providers).
-In a local development environment, the config is stored in file (appsettings.json or secrets.json) or environment variables, so you can use one of the following ways to configure connection string:
+In a local development environment, the configuration is stored in a file (*appsettings.json* or *secrets.json*) or in environment variables. You can use one of the following ways to configure the connection string:
- Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)-- Set connection string to environment variable named `Azure__SignalR__ConnectionString` (colon needs to replaced with double underscore in [environment variable config provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider)).
+- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscores for the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
-In production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up config provider for those services.
+In a production environment, you can use other Azure services to manage configuration and secrets, such as Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up a configuration provider for those services.
> [!NOTE]
-> Even you're directly setting connection string using code, it's not recommended to hardcode the connection string in source code, so you should still first read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`.
+> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code. You should read the connection string from a secret store like Key Vault and pass it to `AddAzureSignalR()`.
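For example, a minimal sketch (assuming a Startup-based ASP.NET Core app whose `Configuration` property a provider such as user secrets, environment variables, or Key Vault has populated):

```cs
// Minimal sketch: read the connection string from configuration instead of
// hardcoding it. "Configuration" is assumed to be the app's IConfiguration
// instance, filled by a configuration provider.
public void ConfigureServices(IServiceCollection services)
{
    var connectionString = Configuration["Azure:SignalR:ConnectionString"];
    services.AddSignalR().AddAzureSignalR(connectionString);
}
```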
### Configure multiple connection strings
-Azure SignalR Service also allows server to connect to multiple service endpoints at the same time, so it can handle more connections, which are beyond one service instance's limit. Also if one service instance is down, other service instances can be used as backup. For more information about how to use multiple instances, see this [article](signalr-howto-scale-multi-instances.md).
+Azure SignalR Service also allows the server to connect to multiple service endpoints at the same time, so it can handle more connections than a single service instance's limit allows. Also, when one service instance is down, the other service instances can be used as backup. For more information about how to use multiple instances, see [Scale SignalR Service with multiple instances](signalr-howto-scale-multi-instances.md).
There are also two ways to configure multiple instances: -- Through code
+- Through code (a fuller sketch appears at the end of this section):
```cs services.AddSignalR().AddAzureSignalR(options =>
There are also two ways to configure multiple instances:
You can assign a name and type to each service endpoint so you can distinguish them later. -- Through config
+- Through configuration:
- You can use any supported config provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
+ You can use any supported configuration provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
```bash dotnet user-secrets set Azure:SignalR:ConnectionString:name_a <connection_string_1>
There are also two ways to configure multiple instances:
dotnet user-secrets set Azure:SignalR:ConnectionString:name_c:secondary <connection_string_3> ```
- You can also assign name and type to each endpoint, by using a different config name in the following format:
+ You can assign a name and type to each endpoint by using a different config name in the following format:
- ```
+ ```text
Azure:SignalR:ConnectionString:<name>:<type> ```
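To make the in-code option above concrete, here's a hedged sketch (assuming the `Microsoft.Azure.SignalR` package's `ServiceEndpoint` and `EndpointType` types; the endpoint names and connection strings are placeholders):

```cs
// Sketch only: configure two named endpoints in code. The name and type let
// you distinguish endpoints later, mirroring the config name format above.
using Microsoft.Azure.SignalR;

services.AddSignalR().AddAzureSignalR(options =>
{
    options.Endpoints = new[]
    {
        new ServiceEndpoint("<connection_string_1>", EndpointType.Primary, "name_a"),
        new ServiceEndpoint("<connection_string_2>", EndpointType.Secondary, "name_b"),
    };
});
```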
azure-signalr Howto Network Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-network-access-control.md
Previously updated : 05/06/2020 Last updated : 03/29/2023 # Configure network access control
-Azure SignalR Service enables you to secure and control the level of access to your service endpoint, based on the request type and subset of networks used. When network rules are configured, only applications requesting data over the specified set of networks can access your Azure SignalR Service.
+Azure SignalR Service enables you to secure and control the level of access to your service endpoint based on the request type and subset of networks. When network rules are configured, only applications requesting data over the specified set of networks can access your SignalR Service.
-Azure SignalR Service has a public endpoint that is accessible through the internet. You can also create [Private Endpoints for your Azure SignalR Service](howto-private-endpoints.md). Private Endpoint assigns a private IP address from your VNet to the Azure SignalR Service, and secures all traffic between your VNet and the Azure SignalR Service over a private link. The Azure SignalR Service network access control provides access control for both public endpoint and private endpoints.
+SignalR Service has a public endpoint that is accessible through the internet. You can also create [private endpoints for your Azure SignalR Service](howto-private-endpoints.md). A private endpoint assigns a private IP address from your VNet to the SignalR Service, and secures all traffic between your VNet and the SignalR Service over a private link. The SignalR Service network access control provides access control for both public and private endpoints.
-Optionally, you can choose to allow or deny certain types of requests for public endpoint and each private endpoint. For example, you can block all [Server Connections](signalr-concept-internals.md#server-connections) from public endpoint and make sure they only originate from a specific VNet.
+Optionally, you can choose to allow or deny certain types of requests for the public endpoint and each private endpoint. For example, you can block all [Server Connections](signalr-concept-internals.md#application-server-connections) from the public endpoint and make sure they only originate from a specific VNet.
-An application that accesses an Azure SignalR Service when network access control rules are in effect still requires proper authorization for the request.
+An application that accesses a SignalR Service when network access control rules are in effect still requires proper authorization for the request.
## Scenario A - No public traffic
-To completely deny all public traffic, you should first configure the public network rule to allow no request type. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications.
+To completely deny all public traffic, first configure the public network rule to allow no request type. Then, you can configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications.
## Scenario B - Only client connections from public network
-In this scenario, you can configure the public network rule to only allow [Client Connections](signalr-concept-internals.md#client-connections) from public network. You can then configure private network rules to allow other types of requests originating from a specific VNet. This configuration hides your app servers from public network and establishes secure connections between your app servers and Azure SignalR Service.
+In this scenario, you can configure the public network rule to only allow [Client Connections](signalr-concept-internals.md#client-connections) from the public network. You can then configure private network rules to allow other types of requests originating from a specific VNet. This configuration hides your app servers from the public network and establishes secure connections between your app servers and SignalR Service.
## Managing network access control
-You can manage network access control for Azure SignalR Service through the Azure portal.
+You can manage network access control for SignalR Service through the Azure portal.
-### Azure portal
-
-1. Go to the Azure SignalR Service you want to secure.
-
-1. Click on the settings menu called **Network access control**.
+1. Go to the SignalR Service instance you want to secure.
+1. Select **Network access control** from the left side menu.
![Network ACL on portal](media/howto-network-access-control/portal.png) 1. To edit default action, toggle the **Allow/Deny** button. > [!TIP]
- > Default action is the action we take when there is no ACL rule matches. For example, if the default action is **Deny**, then request types that are not explicitly approved below will be denied.
+ > The default action is the action the service takes when no access control rule matches a request. For example, if the default action is **Deny**, then the request types that are not explicitly approved will be denied.
1. To edit public network rule, select allowed types of requests under **Public network**.
You can manage network access control for Azure SignalR Service through the Azur
![Edit private endpoint ACL on portal ](media/howto-network-access-control/portal-private-endpoint.png)
-1. Click **Save** to apply your changes.
+1. Select **Save** to apply your changes.
## Next steps
azure-signalr Signalr Concept Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-internals.md
ms.devlang: csharp Previously updated : 11/13/2019 Last updated : 03/29/2023 # Azure SignalR Service internals Azure SignalR Service is built on top of ASP.NET Core SignalR framework. It also supports ASP.NET SignalR by reimplementing ASP.NET SignalR's data protocol on top of the ASP.NET Core framework.
-You can easily migrate a local ASP.NET Core SignalR application or ASP.NET SignalR application to work with SignalR Service, with a few lines of code change.
+You can easily migrate a local ASP.NET Core SignalR or ASP.NET SignalR application to work with SignalR Service by changing a few lines of code.
-The diagram below describes the typical architecture when you use the SignalR Service with your application server.
+The following diagram describes the typical architecture when you use the SignalR Service with your application server.
The differences from self-hosted ASP.NET Core SignalR application are discussed as well. ![Architecture](./media/signalr-concept-internals/arch.png)
-## Server connections
+## Application server connections
-Self-hosted ASP.NET Core SignalR application server listens to and connects clients directly.
+A self-hosted ASP.NET Core SignalR application server listens to and connects clients directly.
-With SignalR Service, the application server is no longer accepting persistent client connections, instead:
+With SignalR Service, the application server no longer accepts persistent client connections, instead:
1. A `negotiate` endpoint is exposed by Azure SignalR Service SDK for each hub.
-1. This endpoint will respond to client's negotiation requests and redirect clients to SignalR Service.
-1. Eventually, clients will be connected to SignalR Service.
+1. The endpoint responds to client negotiation requests and redirects clients to SignalR Service.
+1. The clients connect to SignalR Service.
For more information, see [Client connections](#client-connections).
-Once the application server is started,
-- For ASP.NET Core SignalR, Azure SignalR Service SDK opens 5 WebSocket connections per hub to SignalR Service. -- For ASP.NET SignalR, Azure SignalR Service SDK opens 5 WebSocket connections per hub to SignalR Service, and one per application WebSocket connection.
+Once the application server is started:
-5 WebSocket connections is the default value that can be changed in [configuration](https://github.com/Azure/azure-signalr/blob/dev/docs/run-asp-net-core.md#connectioncount). Please note that this configures the initial server connection count the SDK starts. While the app server is connected to the SignalR service, the Azure SignalR service might send load-balancing messages to the server and the SDK will start new server connections to the service for better performance.
+- For ASP.NET Core SignalR: Azure SignalR Service SDK opens five WebSocket connections per hub to SignalR Service.
+- For ASP.NET SignalR: Azure SignalR Service SDK opens five WebSocket connections per hub to SignalR Service, and one per application WebSocket connection.
-Messages to and from clients will be multiplexed into these connections.
-These connections will remain connected to the SignalR Service all the time. If a server connection is disconnected for network issue,
-- all clients that are served by this server connection disconnect (for more information about it, see [Data transmit between client and server](#data-transmit-between-client-and-server));-- the server connection starts reconnecting automatically.
+The initial number of connections defaults to 5 and is configurable using the `InitialHubServerConnectionCount` option in the SignalR Service SDK. For more information, see [configuration](https://github.com/Azure/azure-signalr/blob/dev/docs/run-asp-net-core.md#maxhubserverconnectioncount).
+
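A hedged sketch of setting that option (assuming the ASP.NET Core flavor of the Azure SignalR SDK):

```cs
// Sketch: raise the initial server connection count per hub from the default
// of 5 to 10 using the option named above.
services.AddSignalR().AddAzureSignalR(options =>
{
    options.InitialHubServerConnectionCount = 10;
});
```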
+While the application server is connected to the SignalR service, the Azure SignalR service may send load-balancing messages to the server. Then, the SDK starts new server connections to the service for better performance. Messages to and from clients are multiplexed into these connections.
+
+Server connections are persistently connected to the SignalR Service. If a server connection is disconnected due to a network issue:
+
+- All clients served by this server connection disconnect. For more information, see [Data transmission between client and server](#data-transmission-between-client-and-server).
+- The server connection reconnects automatically.
## Client connections
-When you use the SignalR Service, clients connect to SignalR Service instead of application server.
-There are two steps to establish persistent connections between the client and the SignalR Service.
+When you use the SignalR Service, clients connect to the service instead of the application server.
+There are three steps to establish persistent connections between the client and the SignalR Service.
-1. Client sends a negotiate request to the application server. With Azure SignalR Service SDK, application server returns a redirect response with SignalR Service's URL and access token.
+1. A client sends a negotiate request to the application server.
+1. The application server uses Azure SignalR Service SDK to return a redirect response containing the SignalR Service URL and access token.
- For ASP.NET Core SignalR, a typical redirect response looks like: ```
There are two steps to establish persistent connections between the client and t
} ```
-1. After receiving the redirect response, client uses the new URL and access token to start the normal process to connect to SignalR Service.
+1. After the client receives the redirect response, it uses the URL and access token to connect to SignalR Service.
+
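From the client's point of view, nothing special is required. A hedged sketch (assuming the `Microsoft.AspNetCore.SignalR.Client` package; the hub URL is a placeholder) shows that the client targets the app server while the SDK follows the redirect:

```cs
// Sketch: the client connects to the app server's hub URL; the negotiate
// request and the redirect to SignalR Service are handled by the client SDK.
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("https://<app_server>/chathub")  // placeholder hub URL
    .Build();

await connection.StartAsync();  // negotiate -> redirect -> connect to the service
```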
+To learn more about ASP.NET Core SignalR's transport protocols, see [Transport Protocols](https://github.com/aspnet/SignalR/blob/release/2.2/specs/TransportProtocols.md).
-Learn more about ASP.NET Core SignalR's [transport protocols](https://github.com/aspnet/SignalR/blob/release/2.2/specs/TransportProtocols.md).
+## Data transmission between client and server
-## Data transmit between client and server
+When a client is connected to the SignalR Service, the service runtime finds a server connection to serve this client.
-When a client is connected to the SignalR Service, service runtime will find a server connection to serve this client
-- This step happens only once, and is a one-to-one mapping between the client and server connections.
+- This step happens only once, and is a one-to-one mapping between the client and server connection.
- The mapping is maintained in SignalR Service until the client or server disconnects. At this point, the application server receives an event with information from the new client. A logical connection to the client is created in the application server. The data channel is established from client to application server, via SignalR Service.
-SignalR Service transmits data from the client to the pairing application server. And data from the application server will be sent to the mapped clients.
+SignalR Service transmits data from the client to the pairing application server. Data from the application server is sent to the mapped clients.
+
+SignalR Service doesn't save or store customer data; all customer data received is transmitted to the target server or clients in real time.
+
+The Azure SignalR Service acts as a logical transport layer between the application server and clients. All persistent connections are offloaded to SignalR Service. As a result, the application server only needs to handle the business logic in the hub class, without worrying about client connections.
+
+## Next steps
-SignalR Service does not save or store customer data, all customer data received is transmitted to target server or clients in real-time.
+To learn more about Azure SignalR SDKs, see:
-As you can see, the Azure SignalR Service is essentially a logical transport layer between application server and clients. All persistent connections are offloaded to SignalR Service.
-Application server only needs to handle the business logic in hub class, without worrying about client connections.
+- [ASP.NET Core SignalR](/aspnet/core/signalr/introduction)
+- [ASP.NET SignalR](/aspnet/signalr/overview/getting-started/introduction-to-signalr)
+- [ASP.NET code samples](https://github.com/aspnet/AzureSignalR-samples)
azure-signalr Signalr Concept Messages And Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-messages-and-connections.md
description: An overview of key concepts about messages and connections in Azure
Previously updated : 08/05/2020 Last updated : 03/23/2023 # Messages and connections in Azure SignalR Service
-The billing model for Azure SignalR Service is based on the number of connections and the number of messages. This article explains how messages and connections are defined and counted for billing.
+The billing model for Azure SignalR Service is based on the number of connections and the number of outbound messages from the service. This article explains how messages and connections are defined and counted for billing.
## Message formats
Azure SignalR Service supports the same formats as ASP.NET Core SignalR: [JSON](
The following limits apply for Azure SignalR Service messages: * Client messages:
- * For long polling or server side events, the client cannot send messages larger than 1MB.
- * There is no size limit for Websockets for service.
- * App server can set a limit for client message size. Default is 32KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security?#buffer-management).
- * For serverless, the message size is limited by upstream implementation, but under 1MB is recommended.
+ * For long polling or server-sent events, the client can't send messages larger than 1 MB.
+ * There's no size limit for WebSockets in the service.
+ * The app server can set a limit for client message size. The default is 32 KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security?#buffer-management).
+ * For serverless, the message size is limited by the upstream implementation, but under 1 MB is recommended.
* Server messages:
- * There is no limit to server message size, but under 16MB is recommended.
- * App server can set a limit for client message size. Default is 32KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security?#buffer-management).
+ * There's no limit to server message size, but under 16 MB is recommended.
+ * The app server can set a limit for client message size. The default is 32 KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security?#buffer-management).
* Serverless:
- * Rest API: 1MB for message body, 16KB for headers.
- * There is no limit for Websockets, [management SDK persistent mode](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md), but under 16MB is recommended.
+ * REST API: 1 MB for message body, 16 KB for headers.
+ * There's no limit for WebSockets or [management SDK persistent mode](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md), but under 16 MB is recommended.
-For Websockets clients, large messages are split into smaller messages that are no more than 2 KB each and transmitted separately. SDKs handle message splitting and assembling. No developer efforts are needed.
+For WebSocket clients, large messages are split into smaller messages that are no more than 2 KB each and transmitted separately. SDKs handle message splitting and assembling. No developer efforts are needed.
Large messages do negatively affect messaging performance. Use smaller messages whenever possible, and test to determine the optimal message size for each use-case scenario. ## How messages are counted for billing
-For billing, only outbound messages from Azure SignalR Service are counted. Ping messages between clients and servers are ignored.
+Messages sent into the service are inbound messages and messages sent out of the service are outbound messages. Only outbound messages from Azure SignalR Service are counted for billing. Ping messages between clients and servers are ignored.
Messages larger than 2 KB are counted as multiple messages of 2 KB each. The message count chart in the Azure portal is updated every 100 messages per hub. For example, imagine you have one application server, and three clients:
-App server broadcasts a 1-KB message to all connected clients, the message from app server to the service is considered free inbound message. Only the three messages sending from service to each of the client are billed as outbound messages.
+* When the application server broadcasts a 1-KB message to all connected clients, the message from the application server to the service is considered a free inbound message.
-Client A sends a 1-KB message to another client B, without going through app server. The message from client A to service is free inbound message. The message from service to client B is billed as outbound message.
+* When *client A* sends a 1-KB message to *client B* without going through the app server, the message is a free inbound message. The message routed from the service to *client B* is billed as an outbound message.
-If you have three clients and one application server. One client sends a 4-KB message to let the server broadcast to all clients. The billed message count is eight: one message from the service to the application server and three messages from the service to the clients. Each message is counted as two 2-KB messages.
+* If you have three clients and one application server, when one client sends a 4-KB message for the server to broadcast to all clients, the billed message count is eight (see the sketch after this list):
-## How connections are counted
+ * One message from the service to the application server.
+ * Three messages from the service to the clients. Each message is counted as two 2-KB messages.
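The counting rules can be sketched as simple arithmetic (`BilledMessages` is a hypothetical helper for illustration, not an Azure API):

```cs
// Hypothetical helper illustrating the rules above: outbound messages are
// billed in 2-KB units; inbound messages are free.
using System;

static int BilledMessages(double messageSizeKb, int outboundRecipients) =>
    (int)Math.Ceiling(messageSizeKb / 2.0) * outboundRecipients;

// 4-KB broadcast: 1 app server + 3 clients = 4 outbound messages,
// each counted as two 2-KB messages => 8 billed messages.
Console.WriteLine(BilledMessages(4, 4)); // 8
```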
-There are server connections and client connections with Azure SignalR Service. By default, each application server starts with five initial connections per hub, and each client has one client connection.
+## How connections are counted
-For example, assume that you have two application servers and you define five hubs in code. The server connection count will be 50: 2 app servers * 5 hubs * 5 connections per hub.
+Azure SignalR Service uses both application server connections and client connections. By default, each application server starts with five initial connections per hub, and each client has one client connection.
-The connection count shown in the Azure portal includes server connections, client connections, diagnostic connections, and live trace connections. The connection types are defined in the following list:
+For example, assume that you have two application servers and you define five hubs in code. The server connection count is 50: (2 app servers * 5 hubs * 5 connections per hub).
-- **Server connection**: Connects Azure SignalR Service and the app server.-- **Client connection**: Connects Azure SignalR Service and the client app.-- **Diagnostic connection**: A special kind of client connection that can produce a more detailed log, which might affect performance. This kind of client is designed for troubleshooting.-- **Live trace connection**: Connects to the live trace endpoint and receives live traces of Azure SignalR Service.
-
-Note that a live trace connection isn't counted as a client connection or as a server connection.
+The connection count shown in the Azure portal includes server, client, diagnostic, and live trace connections. The connection types are defined in the following list:
-ASP.NET SignalR calculates server connections in a different way. It includes one default hub in addition to hubs that you define. By default, each application server needs five more initial server connections. The initial connection count for the default hub stays consistent with other hubs.
+* **Server connection**: Connects Azure SignalR Service and the app server.
+* **Client connection**: Connects Azure SignalR Service and the client app.
+* **Diagnostic connection**: A special type of client connection that can produce a more detailed log, which might affect performance. This kind of client is designed for troubleshooting.
+* **Live trace connection**: Connects to the live trace endpoint and receives live traces of Azure SignalR Service.
-The service and the application server keep syncing connection status and making adjustment to server connections to get better performance and service stability. So you might see server connection number changes from time to time.
+A live trace connection isn't counted as a client connection or as a server connection.
-## How inbound/outbound traffic is counted
+ASP.NET SignalR calculates server connections in a different way. It includes one default hub in addition to hubs that you define. By default, each application server needs five more initial server connections. The initial connection count for the default hub stays consistent with other hubs.
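Both calculations can be expressed as one small formula. The following C# sketch is illustrative only (the helper isn't part of the SDK) and assumes the default of five initial connections per hub:

```cs
// ASP.NET Core SignalR: serverConnections = appServers * hubs * 5.
// ASP.NET SignalR adds one default hub, so it uses hubs + 1.
static int InitialServerConnections(int appServers, int hubs, bool isAspNet)
{
    const int ConnectionsPerHub = 5; // default initial connections per hub
    int effectiveHubs = isAspNet ? hubs + 1 : hubs;
    return appServers * effectiveHubs * ConnectionsPerHub;
}

// The example above: 2 app servers * 5 hubs * 5 connections per hub.
// InitialServerConnections(appServers: 2, hubs: 5, isAspNet: false) == 50
// The same topology on ASP.NET SignalR: 2 * (5 + 1) * 5 == 60
```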
-Message sent into the service is inbound message. Message sent out of the service is outbound message. Traffic is calculated in bytes.
+The service and the application server keep syncing connection status and making adjustments to server connections to get better performance and service stability. So you may see changes in the number of server connections in your running service.
## Related resources

-- [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr )
-- [ASP.NET Core SignalR configuration](/aspnet/core/signalr/configuration)
-- [JSON](https://www.json.org/)
-- [MessagePack](/aspnet/core/signalr/messagepackhubprotocol)
+* [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr )
+* [ASP.NET Core SignalR configuration](/aspnet/core/signalr/configuration)
+* [JSON](https://www.json.org/)
+* [MessagePack](/aspnet/core/signalr/messagepackhubprotocol)
azure-signalr Signalr Concept Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-performance.md
description: An overview of the performance and benchmark of Azure SignalR Servi
Previously updated : 11/13/2019
Last updated : 03/23/2023

# Performance guide for Azure SignalR Service

One of the key benefits of using Azure SignalR Service is the ease of scaling SignalR applications. In a large-scale scenario, performance is an important factor.
-In this guide, we'll introduce the factors that affect SignalR application performance. We'll describe typical performance in different use-case scenarios. In the end, we'll introduce the environment and tools that you can use to generate a performance report.
+This article describes:
+
+* The factors that affect SignalR application performance.
+* The typical performance in different use-case scenarios.
+* The environment and tools that you can use to generate a performance report.
## Quick evaluation using metrics
- Before going through the factors that impact the performance, let's first introduce an easy way to monitor the pressure of your service. There's a metrics called **Server Load** on the Portal.
-
- <kbd>![Screenshot of the Server Load metric of Azure SignalR on Portal. The metrics shows Server Load is at about 8 percent usage. ](./media/signalr-concept-performance/server-load.png "Server Load")</kbd>
+You can easily monitor your service in the Azure portal. From the **Metrics** page of your SignalR instance, you can select the **Server Load** metrics to see the "pressure" of your service.
+
+<kbd>![Screenshot of the Server Load metric of Azure SignalR on Portal. The metrics shows Server Load is at about 8 percent usage. ](./media/signalr-concept-performance/server-load.png "Server Load")</kbd>
- It shows the computing pressure of your SignalR service. You could test on your own scenario and check this metrics to decide whether to scale up. The latency inside SignalR service would remain low if the Server Load is below 70%.
+The chart shows the computing pressure of your SignalR service. You can test your scenario and check this metric to decide whether to scale up. The latency inside SignalR service remains low if the Server Load is below 70%.
> [!NOTE]
> If you are using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size <100) or single connections, you need to check [sending to small group](#small-group) or [sending to connection](#send-to-connection) for reference. In those scenarios there is a large routing cost that is not included in the Server Load.
-
- Below are detailed concepts for evaluating performance.
## Term definitions
In this guide, we'll introduce the factors that affect SignalR application perfo
*Bandwidth*: The total size of all messages in 1 second.
-*Default mode*: The default working mode when an Azure SignalR Service instance was created. Azure SignalR Service expects the app server to establish a connection with it before it accepts any client connections.
+*Default mode*: The default working mode when an Azure SignalR Service instance is created. Azure SignalR Service expects the app server to establish a connection with it before it accepts any client connections.
*Serverless mode*: A mode in which Azure SignalR Service accepts only client connections. No server connection is allowed.
In this guide, we'll introduce the factors that affect SignalR application perfo
Azure SignalR Service defines seven Standard tiers for different performance capacities. This guide answers the following questions:

-- What is the typical Azure SignalR Service performance for each tier?
+* What is the typical Azure SignalR Service performance for each tier?
-- Does Azure SignalR Service meet my requirements for message throughput (for example, sending 100,000 messages per second)?
+* Does Azure SignalR Service meet my requirements for message throughput (for example, sending 100,000 messages per second)?
-- For my specific scenario, which tier is suitable for me? Or how can I select the proper tier?
+* For my specific scenario, which tier is suitable for me? Or how can I select the proper tier?
-- What kind of app server (VM size) is suitable for me? How many of them should I deploy?
+* What kind of app server (VM size) is suitable for me? How many of them should I deploy?
To answer these questions, this guide first gives a high-level explanation of the factors that affect performance. It then illustrates the maximum inbound and outbound messages for every tier for typical use cases: **echo**, **broadcast**, **send to group**, and **send to connection** (peer-to-peer chatting). This guide can't cover all scenarios (and different use cases, message sizes, message sending patterns, and so on). But it provides some methods to help you:

-- Evaluate your approximate requirement for the inbound or outbound messages.
-- Find the proper tiers by checking the performance table.
+* Evaluate your approximate requirement for the inbound or outbound messages.
+* Find the proper tiers by checking the performance table.
## Performance insight
This section describes the performance evaluation methodologies, and then lists
*Throughput* and *latency* are two typical aspects of performance checking. For Azure SignalR Service, each SKU tier has its own throughput throttling policy. The policy defines *the maximum allowed throughput (inbound and outbound bandwidth)* as the maximum achieved throughput when 99 percent of messages have latency that's less than 1 second.
-Latency is the time span from the connection sending the message to receiving the response message from Azure SignalR Service. Let's take **echo** as an example. Every client connection adds a time stamp in the message. The app server's hub sends the original message back to the client. So the propagation delay is easily calculated by every client connection. The time stamp is attached for every message in **broadcast**, **send to group**, and **send to connection**.
+Latency is the time span from the connection sending the message to receiving the response message from Azure SignalR Service. Take **echo** as an example. Every client connection adds a time stamp in the message. The app server's hub sends the original message back to the client. So the propagation delay is easily calculated by every client connection. The time stamp is attached for every message in **broadcast**, **send to group**, and **send to connection**.
To simulate thousands of concurrent client connections, multiple VMs are created in a virtual private network in Azure. All of these VMs connect to the same Azure SignalR Service instance.
In the default mode of Azure SignalR Service, app server VMs are deployed in the
### Performance factors
-Theoretically, Azure SignalR Service capacity is limited by computation resources: CPU, memory, and network. For example, more connections to Azure SignalR Service cause the service to use more memory. For larger message traffic (for example, every message is larger than 2,048 bytes), Azure SignalR Service needs to spend more CPU cycles to process traffic. Meanwhile, Azure network bandwidth also imposes a limit for maximum traffic.
-
-The transport type is another factor that affects performance. The three types are [WebSocket](https://en.wikipedia.org/wiki/WebSocket), [Server-Sent-Event](https://en.wikipedia.org/wiki/Server-sent_events), and [Long-Polling](https://en.wikipedia.org/wiki/Push_technology).
-
-WebSocket is a bidirectional and full-duplex communication protocol over a single TCP connection. Server-Sent-Event is a unidirectional protocol to push messages from server to client. Long-Polling requires the clients to periodically poll information from the server through an HTTP request. For the same API under the same conditions, WebSocket has the best performance, Server-Sent-Event is slower, and Long-Polling is the slowest. Azure SignalR Service recommends WebSocket by default.
+The following factors affect SignalR performance.
-The message routing cost also limits performance. Azure SignalR Service plays a role as a message router, which routes the message from a set of clients or servers to other clients or servers. A different scenario or API requires a different routing policy.
+* SKU tier (CPU/memory)
+* Number of connections
+* Message size
+* Message send rate
+* Transport type (WebSocket, Server-Sent-Event, or Long-Polling)
+* Use-case scenario (routing cost)
+* App server and service connections (in server mode)
-For **echo**, the client sends a message to itself, and the routing destination is also itself. This pattern has the lowest routing cost. But for **broadcast**, **send to group**, and **send to connection**, Azure SignalR Service needs to look up the target connections through the internal distributed data structure. This extra processing uses more CPU, memory, and network bandwidth. As a result, performance is slower.
+#### Compute resources
-In the default mode, the app server might also become a bottleneck for certain scenarios. The Azure SignalR SDK has to invoke the hub, while it maintains a live connection with every client through heartbeat signals.
+Theoretically, Azure SignalR Service capacity is limited by compute resources: CPU, memory, and network. For example, more connections to Azure SignalR Service cause the service to use more memory. For larger message traffic (for example, every message is larger than 2,048 bytes), Azure SignalR Service needs to spend more CPU cycles to process traffic. Meanwhile, Azure network bandwidth also imposes a limit for maximum traffic.
-In serverless mode, the client sends a message by HTTP post, which is not as efficient as WebSocket.
+#### Transport type
-Another factor is protocol: JSON and [MessagePack](https://msgpack.org/https://docsupdatetracker.net/index.html). MessagePack is smaller in size and delivered faster than JSON. MessagePack might not improve performance, though. The performance of Azure SignalR Service is not sensitive to protocols because it doesn't decode the message payload during message forwarding from clients to servers or vice versa.
+The transport type is another factor that affects performance. The three types are:
-In summary, the following factors affect the inbound and outbound capacity:
+* [WebSocket](https://en.wikipedia.org/wiki/WebSocket): WebSocket is a bidirectional and full-duplex communication protocol over a single TCP connection.
+* [Server-Sent-Event](https://en.wikipedia.org/wiki/Server-sent_events): Server-Sent-Event is a unidirectional protocol to push messages from server to client.
+* [Long-Polling](https://en.wikipedia.org/wiki/Push_technology): Long-Polling requires the clients to periodically poll information from the server through an HTTP request.
-- SKU tier (CPU/memory)
+For the same API under the same conditions, WebSocket has the best performance, Server-Sent-Event is slower, and Long-Polling is the slowest. Azure SignalR Service recommends WebSocket by default.
-- Number of connections
+#### Message routing cost
-- Message size
+The message routing cost also limits performance. Azure SignalR Service plays a role as a message router, which routes the message from a set of clients or servers to other clients or servers. A different scenario or API requires a different routing policy.
-- Message send rate
+For **echo**, the client sends a message to itself, and the routing destination is also itself. This pattern has the lowest routing cost. But for **broadcast**, **send to group**, and **send to connection**, Azure SignalR Service needs to look up the target connections through the internal distributed data structure. This extra processing uses more CPU, memory, and network bandwidth. As a result, performance is slower.
-- Transport type (WebSocket, Server-Sent-Event, or Long-Polling)
+In the default mode, the app server might also become a bottleneck for certain scenarios. The Azure SignalR SDK has to invoke the hub, while it maintains a live connection with every client through heartbeat signals.
-- Use-case scenario (routing cost)
+In serverless mode, the client sends a message by HTTP post, which isn't as efficient as WebSocket.
-- App server and service connections (in server mode)
+#### Protocol
+Another factor is protocol: JSON and [MessagePack](https://msgpack.org/index.html). MessagePack is smaller in size and delivered faster than JSON. MessagePack might not improve performance, though. The performance of Azure SignalR Service isn't sensitive to protocols because it doesn't decode the message payload during message forwarding from clients to servers or vice versa.
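If you want to compare the two protocols in your own scenario, MessagePack can be enabled on an ASP.NET Core SignalR server with the `Microsoft.AspNetCore.SignalR.Protocols.MessagePack` package. A minimal sketch:

```cs
// Startup configuration; requires the
// Microsoft.AspNetCore.SignalR.Protocols.MessagePack NuGet package.
services.AddSignalR()
        .AddAzureSignalR()          // reads Azure:SignalR:ConnectionString
        .AddMessagePackProtocol();  // clients that support it negotiate MessagePack
```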
### Finding a proper SKU

How can you evaluate the inbound/outbound capacity or find which tier is suitable for a specific use case?
-Assume that the app server is powerful enough and is not the performance bottleneck. Then, check the maximum inbound and outbound bandwidth for every tier.
+Assume that the app server is powerful enough and isn't the performance bottleneck. Then, check the maximum inbound and outbound bandwidth for every tier.
#### Quick evaluation
-Let's simplify the evaluation first by assuming some default settings:
+For a quick evaluation, assume the following default settings:
-- The transport type is WebSocket.
-- The message size is 2,048 bytes.
-- A message is sent every 1 second.
-- Azure SignalR Service is in the default mode.
+* The transport type is WebSocket.
+* The message size is 2,048 bytes.
+* A message is sent every 1 second.
+* Azure SignalR Service is in the default mode.
-Every tier has its own maximum inbound bandwidth and outbound bandwidth. A smooth user experience is not guaranteed after the inbound or outbound connection exceeds the limit.
+Every tier has its own maximum inbound bandwidth and outbound bandwidth. A smooth user experience isn't guaranteed after the inbound or outbound connection exceeds the limit.
**Echo** gives the maximum inbound bandwidth because it has the lowest routing cost. **Broadcast** defines the maximum outbound message bandwidth.
Do *not* exceed the highlighted values in the following two tables.
outboundBandwidth = outboundConnections * messageSize / sendInterval
```

-- *inboundConnections*: The number of connections sending the message.
+* *inboundConnections*: The number of connections sending the message.
-- *outboundConnections*: The number of connections receiving the message.
+* *outboundConnections*: The number of connections receiving the message.
-- *messageSize*: The size of a single message (average value). A small message that's less than 1,024 bytes has a performance impact that's similar to a 1,024-byte message.
+* *messageSize*: The size of a single message (average value). A small message that's less than 1,024 bytes has a performance impact that's similar to a 1,024-byte message.
-- *sendInterval*: The time of sending one message. Typically it's 1 second per message, which means sending one message every second. A smaller interval means sending more message in a time period. For example, 0.5 seconds per message means sending two messages every second.
+* *sendInterval*: The time of sending one message. Typically it's 1 second per message, which means sending one message every second. A smaller interval means sending more messages in a time period. For example, 0.5 second per message means sending two messages every second.
-- *Connections*: The committed maximum threshold for Azure SignalR Service for every tier. If the connection number is increased further, it will suffer from connection throttling.
+* *Connections*: The committed maximum threshold for Azure SignalR Service for every tier. If the connection number is increased further, it suffers from connection throttling.
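The two formulas above translate directly into code. The following C# sketch is illustrative only; plug in your own scenario numbers:

```cs
// Inbound/outbound bandwidth in bytes per second, per the formulas above.
// Requires: using System;
static (double inbound, double outbound) EvaluateBandwidth(
    int inboundConnections, int outboundConnections,
    int messageSizeInBytes, double sendIntervalInSeconds)
{
    // Messages smaller than 1,024 bytes cost about the same as 1,024-byte ones.
    double effectiveSize = Math.Max(messageSizeInBytes, 1024);

    double inbound = inboundConnections * effectiveSize / sendIntervalInSeconds;
    double outbound = outboundConnections * effectiveSize / sendIntervalInSeconds;
    return (inbound, outbound);
}

// Broadcast example: 1 sender, 1,000 receivers, 2,048-byte messages, 1 message per second:
// outbound = 1,000 * 2,048 / 1 ≈ 2 MB per second.
```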
#### Evaluation for complex use cases

##### Bigger message size or different sending rate
-The real use case is more complicated. It might send a message larger than 2,048 bytes, or the sending message rate is not one message per second. Let's take Unit100's broadcast as an example to find how to evaluate its performance.
+The real use case is more complicated. It might send a message larger than 2,048 bytes, or the sending message rate isn't one message per second. Let's take Unit100's broadcast as an example to find how to evaluate its performance.
The following table shows a real use case of **broadcast**. But the message size, connection count, and message sending rate are different from what we assumed in the previous section. The question is how we can deduce one of those items (message size, connection count, or message sending rate) if we know only the other two.
Then pick up the proper tier from the maximum inbound/outbound bandwidth tables.
> [!NOTE]
> For sending a message to hundreds or thousands of small groups, or for thousands of clients sending a message to each other, the routing cost will become dominant. Take this impact into account.
-For the use case of sending a message to clients, make sure that the app server is *not* the bottleneck. The following "Case study" section gives guidelines about how many app servers you need and how many server connections you should configure.
+For the use case of sending a message to clients, make sure that the app server isn't the bottleneck. The following "Case study" section gives guidelines about how many app servers you need and how many server connections you should configure.
## Case study
Even for this simple hub, the traffic pressure on the app server is prominent as
#### Broadcast
-For **broadcast**, when the web app receives the message, it broadcasts to all clients. The more clients there are to broadcast, the more message traffic there is to all clients. See the following diagram.
+For **broadcast**, when the web app receives the message, it broadcasts to all clients. The more clients there are to broadcast to, the more message traffic goes to all clients. See the following diagram.
![Traffic for the broadcast use case](./media/signalr-concept-performance/broadcast.png)
The **send to group** use case has a similar traffic pattern to **broadcast**. T
Group member and group count are two factors that affect performance. To simplify the analysis, we define two kinds of groups:

-- **Small group**: Every group has 10 connections. The group number is equal to (max
+* **Small group**: Every group has 10 connections. The group number is equal to (max
connection count) / 10. For example, for Unit1, if there are 1,000 connection counts, then we have 1000 / 10 = 100 groups.

-- **Big group**: The group number is always 10. The group member count is equal to (max
+* **Big group**: The group number is always 10. The group member count is equal to (max
connection count) / 10. For example, for Unit1, if there are 1,000 connection counts, then every group has 1000 / 10 = 100 members.

**Send to group** brings a routing cost to Azure SignalR Service because it has to find the target connections through a distributed data structure. As the sending connections increase, the cost increases.
The following table gives the suggested web app count for ASP.NET SignalR **send
Clients and Azure SignalR Service are involved in serverless mode. Every client stands for a single connection. The client sends messages through the REST API to another client or broadcast messages to all.
-Sending high-density messages through the REST API is not as efficient as using WebSocket. It requires you to build a new HTTP connection every time, and that's an extra cost in serverless mode.
+Sending high-density messages through the REST API isn't as efficient as using WebSocket. It requires you to build a new HTTP connection every time, and that's an extra cost in serverless mode.
#### Broadcast through REST API
-All clients establish WebSocket connections with Azure SignalR Service. Then some clients start broadcasting through the REST API. The message sending (inbound) is all through HTTP Post, which is not efficient compared with WebSocket.
+All clients establish WebSocket connections with Azure SignalR Service. Then some clients start broadcasting through the REST API. The message sending (inbound) is all through HTTP Post, which isn't efficient compared with WebSocket.
| Broadcast through REST API | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
|----------------------------|-------|-------|-------|--------|--------|--------|---------|
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
Title: Authorize request to SignalR resources with Azure AD from managed identities
+ Title: Authorize managed identity requests to a SignalR resource
description: This article provides information about authorizing requests to SignalR resources with Azure AD from managed identities
Previously updated : 07/18/2022
Last updated : 03/28/2023
ms.devlang: csharp
-# Authorize request to SignalR resources with Azure AD from managed identities
+# Authorize managed identity requests to a SignalR resource
-Azure SignalR Service supports Azure Active Directory (Azure AD) authorizing requests from Azure resources using [Managed identities for Azure resources
+Azure SignalR Service supports Azure Active Directory (Azure AD) authorization of requests from Azure resources using [managed identities for Azure resources
](../active-directory/managed-identities-azure-resources/overview.md). This article shows how to configure your SignalR resource and code to authorize a managed identity request to a SignalR resource.
This example shows you how to configure `System-assigned managed identity` on a
1. Select the **Save** button to confirm the change.
-To learn how to create user-assigned managed identities, see this article:
-- [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity)
+To learn how to create user-assigned managed identities, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).
To learn more about configuring managed identities, see one of these articles:
The following steps describe how to assign a `SignalR App Server` role to a syst
1. Select your Azure subscription.
-1. Select **System-assigned managed identity**, search for a virtual machine to which would you'd like to assign the role, and then select it.
+1. Select **System-assigned managed identity**, search for a virtual machine to which you'd like to assign the role, and then select it.
1. On the **Review + assign** tab, select **Review + assign** to assign the role.
To learn more about how to assign and manage Azure role assignments, see these a
#### Using system-assigned identity
-You can use either [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential) or [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential) to configure your SignalR endpoints.
+You can use either [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential) or [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential) to configure your SignalR endpoints. However, the best practice is to use `ManagedIdentityCredential` directly.
-However, the best practice is to use `ManagedIdentityCredential` directly.
-
-The system-assigned managed identity will be used by default, but **make sure that you don't configure any environment variables** that the [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) preserved if you were using `DefaultAzureCredential`. Otherwise it will fall back to use `EnvironmentCredential` to make the request and it will result to a `Unauthorized` response in most cases.
+The system-assigned managed identity is used by default, but **make sure that you don't configure any environment variables** that [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) uses if you're using `DefaultAzureCredential`. Otherwise it falls back to `EnvironmentCredential` to make the request, which results in an `Unauthorized` response in most cases.
```C#
services.AddSignalR().AddAzureSignalR(option =>
{
    // Authenticate with the system-assigned managed identity.
    // Replace the placeholder with your own SignalR resource endpoint.
    option.ServiceEndpoints = new ServiceEndpoint[]
    {
        new ServiceEndpoint(
            new Uri("https://<SIGNALR_RESOURCE_NAME>.service.signalr.net"),
            new ManagedIdentityCredential()),
    };
});
```
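If you want the app server to use a user-assigned identity instead, `ManagedIdentityCredential` accepts the identity's client ID. A minimal sketch under that assumption; `<CLIENT_ID>` and the resource name are placeholders:

```cs
services.AddSignalR().AddAzureSignalR(option =>
{
    // Authenticate with a user-assigned managed identity by client ID.
    option.ServiceEndpoints = new ServiceEndpoint[]
    {
        new ServiceEndpoint(
            new Uri("https://<SIGNALR_RESOURCE_NAME>.service.signalr.net"),
            new ManagedIdentityCredential("<CLIENT_ID>")),
    };
});
```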
You might need a group of key-value pairs to configure an identity. The keys of
#### Using system-assigned identity
-If you only configure the service URI, then the `DefaultAzureCredential` is used. This class is useful when you want to share the same configuration on Azure and local dev environment. To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).
+If you only configure the service URI, then the `DefaultAzureCredential` is used. This class is useful when you want to share the same configuration on Azure and local development environments. To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).
-On Azure portal, use the following example to configure a `DefaultAzureCredential`. If don't configure any [environment variables listed here](/dotnet/api/overview/azure/identity-readme#environment-variables), then the system-assigned identity will be used to authenticate.
+In the Azure portal, use the following example to configure a `DefaultAzureCredential`. If you don't configure any [environment variables listed here](/dotnet/api/overview/azure/identity-readme#environment-variables), then the system-assigned identity is used to authenticate.
```
<CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
```
-Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` file. At the local scope there's no managed identity, and the authentication via Visual Studio, Azure CLI, and Azure PowerShell accounts will be attempted in order.
+Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` file. At the local scope there's no managed identity, and authentication via Visual Studio, Azure CLI, and Azure PowerShell accounts is attempted in that order.
```json
{
  "Values": {
    "<CONNECTION_NAME_PREFIX>__serviceUri": "https://<SIGNALR_RESOURCE_NAME>.service.signalr.net"
  }
}
```
Here's a config sample of `DefaultAzureCredential` in the `local.settings.json`
-If you want to use system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), you should set the `credential` key with connection name prefix to `managedidentity`. Here's an application settings sample:
+If you want to use system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), you should set the `credential` key with the connection name prefix to `managedidentity`. Here's an application settings sample:
```
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
<CONNECTION_NAME_PREFIX>__credential = managedidentity
```
If you want to use system-assigned identity independently and without the influe
#### Using user-assigned identity
-If you want to use user-assigned identity, you need to assign one more `clientId` key with connection name prefix compared to system-assigned identity. Here's the application settings sample:
+If you want to use user-assigned identity, you need to assign `clientId` in addition to the `serviceUri` and `credential` keys with the connection name prefix. Here's the application settings sample:
+
```
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
<CONNECTION_NAME_PREFIX>__credential = managedidentity
<CONNECTION_NAME_PREFIX>__clientId = <CLIENT_ID>
```
+

## Next steps

See the following related articles:
azure-signalr Signalr Howto Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-key-rotation.md
Title: How to rotate access key for Azure SignalR Service
+ Title: Rotate access keys for Azure SignalR Service
description: An overview on why the customer needs to routinely rotate the access keys and how to do it with the Azure portal GUI and the Azure CLI.
Previously updated : 07/18/2022
Last updated : 03/29/2023
-# How to rotate access key for Azure SignalR Service
+# Rotate access keys for Azure SignalR Service
-Each Azure SignalR Service instance has a pair of access keys called Primary and Secondary keys. They're used to authenticate SignalR clients when requests are made to the service. The keys are associated with the instance endpoint URL. Keep your keys secure, and rotate them regularly. You're provided with two access keys so that you can maintain connections by using one key while regenerating the other.
+For security reasons and compliance requirements, it's important to routinely rotate your access keys. This article describes how to rotate access keys for Azure SignalR Service.
-## Why rotate access keys?
+Each Azure SignalR Service instance has a primary and a secondary key. They're used to authenticate SignalR clients when requests are made to the service. The keys are associated with the instance endpoint URL. Keep your keys secure, and rotate them regularly. You're provided with two access keys so that you can maintain connections by using one key while regenerating the other.
-For security reasons and compliance requirements, routinely rotate your access keys.
## Regenerate access keys
-1. Go to the [Azure portal](https://portal.azure.com/), and sign in with your credentials.
-
-1. Find the **Keys** section in the Azure SignalR Service instance with the keys that you want to regenerate.
-
-1. Select **Keys** on the navigation menu.
-
+1. Go to your SignalR instance in the [Azure portal](https://portal.azure.com/).
+1. Select **Keys** on the left side menu.
1. Select **Regenerate Primary Key** or **Regenerate Secondary Key**.
- A new key and corresponding connection string are created and displayed.
+A new key and corresponding connection string are created and displayed.
- ![Regenerate Keys](media/signalr-howto-key-rotation/regenerate-keys.png)
You also can regenerate keys by using the [Azure CLI](/cli/azure/signalr/key#az-signalr-key-renew).

## Update configurations with new connection strings

1. Copy the newly generated connection string.
-
1. Update all configurations to use the new connection string.
-
1. Restart the application as needed.

## Forced access key regeneration
-Azure SignalR Service might enforce a mandatory access key regeneration under certain situations. The service notifies customers via email and portal notification. If you receive this communication or encounter service failure due to an access key, rotate the keys by following the instructions in this guide.
+The Azure SignalR Service can enforce a mandatory access key regeneration under certain situations. The service notifies customers of mandatory key regeneration via email and portal notification. If you receive this communication or encounter service failure due to an access key, rotate the keys by following the instructions in this guide.
## Next steps
-Rotate your access keys regularly as a good security practice.
-
-In this guide, you learned how to regenerate access keys. Continue to the next tutorials about authentication with OAuth or with Azure Functions.
- > [!div class="nextstepaction"]
-> [Integrate with ASP.NET core identity](./signalr-concept-authenticate-oauth.md)
+> [Azure SignalR Service authentication](./signalr-concept-authenticate-oauth.md)
> [!div class="nextstepaction"]
> [Build a serverless real-time app with authentication](./signalr-tutorial-authenticate-azure-functions.md)
azure-signalr Signalr Howto Scale Multi Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-scale-multi-instances.md
ms.devlang: csharp
Previously updated : 07/18/2022
Last updated : 03/23/2023
-# How to scale SignalR Service with multiple instances?
+# Scale SignalR Service with multiple instances
+
SignalR Service SDK supports multiple endpoints for SignalR Service instances. You can use this feature to scale the concurrent connections, or use it for cross-region messaging.

## For ASP.NET Core
-### How to add multiple endpoints from config?
+### Add multiple endpoints from config
-Config with key `Azure:SignalR:ConnectionString` or `Azure:SignalR:ConnectionString:` for SignalR Service connection string.
+Use the key `Azure:SignalR:ConnectionString` or `Azure:SignalR:ConnectionString:` for the SignalR Service connection string.
-If the key starts with `Azure:SignalR:ConnectionString:`, it should be in format `Azure:SignalR:ConnectionString:{Name}:{EndpointType}`, where `Name` and `EndpointType` are properties of the `ServiceEndpoint` object, and are accessible from code.
+If the key starts with `Azure:SignalR:ConnectionString:`, it should be in the format `Azure:SignalR:ConnectionString:{Name}:{EndpointType}`, where `Name` and `EndpointType` are properties of the `ServiceEndpoint` object, and are accessible from code.
You can add multiple instance connection strings using the following `dotnet` commands:
dotnet user-secrets set Azure:SignalR:ConnectionString:east-region-b:primary <Co
dotnet user-secrets set Azure:SignalR:ConnectionString:backup:secondary <ConnectionString3>
```
-### How to add multiple endpoints from code?
+### Add multiple endpoints from code
-A `ServicEndpoint` class is introduced to describe the properties of an Azure SignalR Service endpoint.
+A `ServiceEndpoint` class describes the properties of an Azure SignalR Service endpoint.
You can configure multiple instance endpoints when using Azure SignalR Service SDK through:

```cs
services.AddSignalR()
services.AddSignalR()
});
```
-### How to customize endpoint router?
+### Customize endpoint router
By default, the SDK uses the [DefaultEndpointRouter](https://github.com/Azure/azure-signalr/blob/dev/src/Microsoft.Azure.SignalR/EndpointRouters/DefaultEndpointRouter.cs) to pick up endpoints.

#### Default behavior
-1. Client request routing
+
+1. Client request routing:
When a client sends a `/negotiate` request to the app server, the SDK **randomly selects** one endpoint from the set of available service endpoints by default.
-2. Server message routing
+2. Server message routing:
- When sending a message to a specific *connection* and the target connection is routed to current server, the message goes directly to that connected endpoint. Otherwise, the messages are broadcasted to every Azure SignalR endpoint.
+ When sending a message to a specific *connection* and the target connection is routed to the current server, the message goes directly to that connected endpoint. Otherwise, the messages are broadcasted to every Azure SignalR endpoint.
#### Customize routing algorithm
+
You can create your own router when you have special knowledge to identify which endpoints the messages should go to.
-A custom router is defined below as an example when groups starting with `east-` always go to the endpoint named `east`:
+The following example defines a custom router that routes messages with a group starting with `east-` to the endpoint named `east`:
```cs
private class CustomRouter : EndpointRouterDecorator
private class CustomRouter : EndpointRouterDecorator
}
```
-Another example below, that overrides the default negotiate behavior, to select the endpoints depends on where the app server is located.
+The following example overrides the default negotiate behavior and selects the endpoint depending on the location of the app server.
```cs
private class CustomRouter : EndpointRouterDecorator
-{
- public override ServiceEndpoint GetNegotiateEndpoint(HttpContext context, IEnumerable<ServiceEndpoint> endpoints)
+{
+    public override ServiceEndpoint GetNegotiateEndpoint(HttpContext context, IEnumerable<ServiceEndpoint> endpoints)
    {
        // Override the negotiate behavior to get the endpoint from query string
        var endpointName = context.Request.Query["endpoint"];
services.AddSignalR()
## For ASP.NET
-### How to add multiple endpoints from config?
+### Add multiple endpoints from config
-Config with key `Azure:SignalR:ConnectionString` or `Azure:SignalR:ConnectionString:` for SignalR Service connection string.
+Use the key `Azure:SignalR:ConnectionString` or `Azure:SignalR:ConnectionString:` for the SignalR Service connection string.
If the key starts with `Azure:SignalR:ConnectionString:`, it should be in format `Azure:SignalR:ConnectionString:{Name}:{EndpointType}`, where `Name` and `EndpointType` are properties of the `ServiceEndpoint` object, and are accessible from code.
You can add multiple instance connection strings to `web.config`:
</configuration>
```
-### How to add multiple endpoints from code?
+### Add multiple endpoints from code
-A `ServicEndpoint` class is introduced to describe the properties of an Azure SignalR Service endpoint.
+A `ServiceEndpoint` class describes the properties of an Azure SignalR Service endpoint.
You can configure multiple instance endpoints when using Azure SignalR Service SDK through:

```cs
app.MapAzureSignalR(
    options => {
        options.Endpoints = new ServiceEndpoint[]
        {
- // Note: this is just a demonstration of how to set options.Endpoints
- // Having ConnectionStrings explicitly set inside the code is not encouraged
+            // Note: this is just a demonstration of how to set options.Endpoints
+ // Having ConnectionStrings explicitly set inside the code is not encouraged.
            // You can fetch it from a safe place such as Azure KeyVault
            new ServiceEndpoint("<ConnectionString1>"),
            new ServiceEndpoint("<ConnectionString2>"),
app.MapAzureSignalR(
});
```
-### How to customize router?
+### Customize a router
The only difference between ASP.NET SignalR and ASP.NET Core SignalR is the HTTP context type for `GetNegotiateEndpoint`. For ASP.NET SignalR, it is of the [IOwinContext](https://github.com/Azure/azure-signalr/blob/dev/src/Microsoft.Azure.SignalR.AspNet/EndpointRouters/DefaultEndpointRouter.cs#L19) type.
-Below is the custom negotiate example for ASP.NET SignalR:
+The following code is a custom negotiate example for ASP.NET SignalR:
```cs
private class CustomRouter : EndpointRouterDecorator
app.MapAzureSignalR(GetType().FullName, hub, options => {
## Service Endpoint Metrics
-To enable advanced router, SignalR server SDK provides multiple metrics to help server do smart decision. The properties are under `ServiceEndpoint.EndpointMetrics`.
+To enable an advanced router, the SignalR server SDK provides multiple metrics to help the server make smart decisions. The properties are under `ServiceEndpoint.EndpointMetrics`.
| Metric Name | Description |
-| -- | -- |
-| `ClientConnectionCount` | Total concurrent connected client connection count on all hubs for the service endpoint |
-| `ServerConnectionCount` | Total concurrent connected server connection count on all hubs for the service endpoint |
+|--|--|
+| `ClientConnectionCount` | Total count of concurrent client connections on all hubs for the service endpoint |
+| `ServerConnectionCount` | Total count of concurrent server connections on all hubs for the service endpoint |
| `ConnectionCapacity` | Total connection quota for the service endpoint, including client and server connections |
-Below is an example to customize router according to `ClientConnectionCount`.
+The following code is an example of customizing a router according to `ClientConnectionCount`.
```cs
private class CustomRouter : EndpointRouterDecorator
From SDK version 1.5.0, we're enabling dynamic scale ServiceEndpoints for ASP.NE
> [!NOTE]
>
-> Considering the time of connection set-up between server/service and client/service may be different, to ensure no message loss during the scale process, we have a staging period waiting for server connections be ready before open the new ServiceEndpoint to clients. Usually it takes seconds to complete and you'll be able to see log like `Succeed in adding endpoint: '{endpoint}'` which indicates the process complete. But for some unexpected reasons like cross-region network issue or configuration inconsistent on different app servers, the staging period will not be able to finish correctly. Since limited things can be done in these cases, we choose to promote the scale as it is. It's suggested to restart App Server when you find the scaling process not working correctly.
->
-> The default timeout period for the scale is 5 minutes, and it can be customized by changing the value in `ServiceOptions.ServiceScaleTimeout`. If you have a lot of app servers, it's suggested to extend the value a little bit more.
+> Considering that the time of connection setup between server/service and client/service may differ, to ensure no message loss during the scale process, there's a staging period that waits for server connections to be ready before opening the new ServiceEndpoint to clients. Usually it takes seconds to complete, and you'll see a log message like `Succeed in adding endpoint: '{endpoint}'`, which indicates that the process is complete.
+>
+> In some unexpected situations, like cross-region network issues or configuration inconsistencies on different app servers, the staging period may not finish correctly. In these cases, it's suggested to restart the app server when you find the scaling process not working correctly.
+>
+> The default timeout period for the scale is 5 minutes, and it can be customized by changing the value in `ServiceOptions.ServiceScaleTimeout`. If you have a lot of app servers, it's suggested to extend the value a little bit more.
## Configuration in cross-region scenarios

The `ServiceEndpoint` object has an `EndpointType` property with value `primary` or `secondary`.
-`primary` endpoints are preferred endpoints to receive client traffic, and are considered to have more reliable network connections; `secondary` endpoints are considered to have less reliable network connections and are used only for taking server to client traffic, for example, broadcasting messages, not for taking client to server traffic.
+Primary endpoints are preferred endpoints to receive client traffic because they have more reliable network connections. Secondary endpoints have less reliable network connections and are used only for server to client traffic. For example, secondary endpoints are used for broadcasting messages instead of client to server traffic.
-In cross-region cases, network can be unstable. For one app server located in *East US*, the SignalR Service endpoint located in the same *East US* region can be configured as `primary` and endpoints in other regions marked as `secondary`. In this configuration, service endpoints in other regions can **receive** messages from this *East US* app server, but there will be no **cross-region** clients routed to this app server. The architecture is shown in the diagram below:
+In cross-region cases, the network can be unstable. For an app server located in *East US*, the SignalR Service endpoint located in the same *East US* region is `primary`, and endpoints in other regions are marked as `secondary`. In this configuration, service endpoints in other regions can **receive** messages from this *East US* app server, but no **cross-region** clients are routed to this app server. The following diagram shows the architecture:
![Cross-Geo Infra](./media/signalr-howto-scale-multi-instances/cross_geo_infra.png)
-When a client tries `/negotiate` with the app server, with the default router, SDK **randomly selects** one endpoint from the set of available `primary` endpoints. When the primary endpoint isn't available, SDK then **randomly selects** from all available `secondary` endpoints. The endpoint is marked as **available** when the connection between server and the service endpoint is alive.
+When a client tries `/negotiate` with the app server with a default router, the SDK **randomly selects** one endpoint from the set of available `primary` endpoints. When the primary endpoint isn't available, the SDK then **randomly selects** from all available `secondary` endpoints. The endpoint is marked as **available** when the connection between server and the service endpoint is alive.
-In cross-region scenario, when a client tries `/negotiate` with the app server hosted in *East US*, by default it always returns the `primary` endpoint located in the same region. When all *East US* endpoints aren't available, the client is redirected to endpoints in other regions. Fail over section below describes the scenario in detail.
+In a cross-region scenario, when a client tries `/negotiate` with the app server hosted in *East US*, by default it always returns the `primary` endpoint located in the same region. When all *East US* endpoints aren't available, the router redirects the client to endpoints in other regions. The following [failover](#failover) section describes the scenario in detail.
![Normal Negotiate](./media/signalr-howto-scale-multi-instances/normal_negotiate.png)
-## Fail-over
+## Failover
-When all `primary` endpoints aren't available, client's `/negotiate` picks from the available `secondary` endpoints. This fail-over mechanism requires that each endpoint should serve as `primary` endpoint to at least one app server.
+When no `primary` endpoint is available, the client's `/negotiate` picks from the available `secondary` endpoints. This failover mechanism requires that each endpoint serves as a `primary` endpoint to at least one app server.
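To satisfy that requirement, each app server can list every endpoint but flip which one is primary. A minimal sketch for the East US server; the connection strings and names are placeholders:

```cs
// East US app server: east endpoint is primary, west endpoint is secondary.
services.AddSignalR().AddAzureSignalR(options =>
{
    options.Endpoints = new ServiceEndpoint[]
    {
        new ServiceEndpoint("<EastConnectionString>", EndpointType.Primary, "east"),
        new ServiceEndpoint("<WestConnectionString>", EndpointType.Secondary, "west"),
    };
});
// The West US app server mirrors this (west: Primary, east: Secondary),
// so every endpoint serves as primary for at least one app server.
```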
-![Fail-over](./media/signalr-howto-scale-multi-instances/failover_negotiate.png)
+![Diagram showing the Failover mechanism process.](./media/signalr-howto-scale-multi-instances/failover_negotiate.png)
## Next steps
-In this guide, you learned about how to configure multiple instances in the same application for scaling, sharding, and cross-region scenarios.
-
-Multiple endpoints supports can also be used in high availability and disaster recovery scenarios.
+You can use multiple endpoints in high availability and disaster recovery scenarios.
> [!div class="nextstepaction"]
> [Setup SignalR Service for disaster recovery and high availability](./signalr-concept-disaster-recovery.md)
azure-signalr Signalr Reference Data Plane Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-reference-data-plane-rest-api.md
Previously updated : 06/09/2022
Last updated : 03/29/2023

# Azure SignalR service data plane REST API reference
+In addition to the classic client-server pattern, Azure SignalR Service provides a set of REST APIs so that you can easily integrate real-time functionality into your serverless architecture.
+ > [!NOTE]
->
> Azure SignalR Service only supports using REST API to manage clients connected using ASP.NET Core SignalR. Clients connected using ASP.NET SignalR use a different data protocol that is not currently supported.
-On top of the classical client-server pattern, Azure SignalR Service provides a set of REST APIs so that you can easily integrate real-time functionality into your server-less architecture.
- <a name="serverless"></a>
-## Typical Server-less Architecture with Azure Functions
+## Typical serverless architecture with Azure Functions
-The following diagram shows a typical server-less architecture using Azure SignalR Service with Azure Functions.
+The following diagram shows a typical serverless architecture using Azure SignalR Service with Azure Functions.
:::image type="content" source="./media/signalr-reference-data-plane-rest-api/serverless-arch.png" alt-text="Diagram of a typical serverless architecture for Azure SignalR service":::

-- `negotiate` function returns a negotiation response and redirects all clients to SignalR Service.
-- `broadcast` function calls SignalR Service's REST API. Then SignalR Service will broadcast the message to all connected clients.
+- The `negotiate` function returns a negotiation response and redirects all clients to SignalR Service.
+- The `broadcast` function calls SignalR Service's REST API. The SignalR Service broadcasts the message to all connected clients.
-In a server-less architecture, clients still have persistent connections to the SignalR Service.
+In a serverless architecture, clients still have persistent connections to the SignalR Service.
Since there's no application server to handle traffic, clients are in `LISTEN` mode, which means they can only receive messages but can't send messages.
-SignalR Service will disconnect any client that sends messages because it's an invalid operation.
+SignalR Service disconnects any client that sends messages because it's an invalid operation.
You can find a complete sample of using SignalR Service with Azure Functions [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/RealtimeSignIn).

## API
-The following table shows all versions of REST API we have for now. You can also find the swagger file for each version of REST API.
+The following table shows all supported versions of REST API. You can also find the swagger file for each version of REST API.
API Version | Status | Port | Doc | Spec
---|---|---|---|---
API Version | Status | Port | Doc | Spec
`1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json)
`1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json)
-The latest available APIs are listed as following.
-
+The available APIs are listed as follows.
| API | Path |
-| - | - |
-| [Get service health status.](./swagger/signalr-data-plane-rest-v20220601.md#head-get-service-health-status) | `HEAD /api/health` |
-| [Close all of the connections in the hub.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-all-of-the-connections-in-the-hub) | `POST /api/hubs/{hub}/:closeConnections` |
-| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/hubs/{hub}/:send` |
-| [Check if the connection with the given connectionId exists](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-the-connection-with-the-given-connectionid-exists) | `HEAD /api/hubs/{hub}/connections/{connectionId}` |
-| [Close the client connection](./swagger/signalr-data-plane-rest-v20220601.md#delete-close-the-client-connection) | `DELETE /api/hubs/{hub}/connections/{connectionId}` |
-| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v20220601.md#post-send-message-to-the-specific-connection) | `POST /api/hubs/{hub}/connections/{connectionId}/:send` |
-| [Check if there are any client connections inside the given group](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-there-are-any-client-connections-inside-the-given-group) | `HEAD /api/hubs/{hub}/groups/{group}` |
-| [Close connections in the specific group.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-connections-in-the-specific-group) | `POST /api/hubs/{hub}/groups/{group}/:closeConnections` |
-| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/hubs/{hub}/groups/{group}/:send` |
-| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v20220601.md#put-add-a-connection-to-the-target-group) | `PUT /api/hubs/{hub}/groups/{group}/connections/{connectionId}` |
-| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-connection-from-the-target-group) | `DELETE /api/hubs/{hub}/groups/{group}/connections/{connectionId}` |
-| [Remove a connection from all groups](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-connection-from-all-groups) | `DELETE /api/hubs/{hub}/connections/{connectionId}/groups` |
-| [Check if there are any client connections connected for the given user](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-there-are-any-client-connections-connected-for-the-given-user) | `HEAD /api/hubs/{hub}/users/{user}` |
-| [Close connections for the specific user.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-connections-for-the-specific-user) | `POST /api/hubs/{hub}/users/{user}/:closeConnections` |
-| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/hubs/{hub}/users/{user}/:send` |
-| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v20220601.md#head-check-whether-a-user-exists-in-the-target-group) | `HEAD /api/hubs/{hub}/users/{user}/groups/{group}` |
-| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v20220601.md#put-add-a-user-to-the-target-group) | `PUT /api/hubs/{hub}/users/{user}/groups/{group}` |
-| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-user-from-the-target-group) | `DELETE /api/hubs/{hub}/users/{user}/groups/{group}` |
-| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-user-from-all-groups) | `DELETE /api/hubs/{hub}/users/{user}/groups` |
+| - | - |
+| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` |
+| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` |
+| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` |
+| [Check if there are any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` |
+| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` |
+| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
+| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
+| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` |
## Using REST API
The following claims must be included in the JWT token.

Claim Type | Is Required | Description
- | - | -
-`aud` | true | Needs to be the same as your HTTP request url, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`.
+`aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`.
`exp` | true | Epoch time when this token expires.

### Authenticate via Azure Active Directory Token (Azure AD Token)

Similar to authenticating using `AccessKey`, when authenticating using an Azure AD token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
-The difference is, in this scenario the JWT Token is generated by Azure Active Directory.
+The difference is that in this scenario, the JWT token is generated by Azure Active Directory. For more information, see [Learn how to generate Azure AD tokens](../active-directory/develop/reference-v2-libraries.md).
-[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
-
-You could also use **Role Based Access Control (RBAC)** to authorize the request from your client/server to SignalR Service.
-
-[Learn how to configure Role-based access control roles for your resource](/azure/azure-signalr/authorize-access-azure-active-directory)
+You can also use **role-based access control (RBAC)** to authorize the request from your client/server to SignalR Service. For more information, see [Authorize access with Azure Active Directory for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md).
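Either token type is passed as a bearer token on the request. Here's a minimal sketch of a broadcast call; the endpoint, hub, token variable, and message values are placeholders, not part of the original article:

```bash
# Placeholders: replace the endpoint, hub, and $TOKEN with your own values.
# The token's `aud` claim must equal this request URL (no trailing slash or query string).
curl -X POST "https://example.service.signalr.net/api/v1/hubs/myhub" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"target": "newMessage", "arguments": ["Hello from the REST API"]}'
```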
### Implement Negotiate Endpoint
-As shown in the [architecture section](#serverless), you should implement a `negotiate` function that returns a redirect negotiation response so that client can connect to the service.
+As shown in the [architecture section](#serverless), you should implement a `negotiate` function that returns a redirect negotiation response so that clients can connect to the service.
A typical negotiation response looks as follows:

```json
{
    "url": "https://<service_name>.service.signalr.net/client/?hub=<hub_name>",
    "accessToken": "<a JWT token>"
}
```
-The `accessToken` is generated using the same algorithm described in [authentication section](#authenticate-via-azure-signalr-service-accesskey). The only difference is the `aud` claim should be same as `url`.
-
-You should host your negotiate API in `https://<hub_url>/negotiate` so you can still use SignalR client to connect to the hub url.
+The `accessToken` is generated using the same algorithm described in the [authentication section](#authenticate-via-azure-signalr-service-accesskey). The only difference is the `aud` claim should be the same as `url`.
-Read more about redirecting client to Azure SignalR Service at [here](./signalr-concept-internals.md#client-connections).
+You should host your negotiate API at `https://<hub_url>/negotiate` so you can still use the SignalR client to connect to the hub URL. Read more about redirecting clients to Azure SignalR Service in [Azure SignalR Service internals](./signalr-concept-internals.md#client-connections).
### User-related REST API
-In order to call user-related REST API, each of your clients should identify itself to SignalR Service.
-Otherwise SignalR Service can't find target connections from a given user ID.
+To call the user-related REST API, each of your clients should identify themselves to SignalR Service. Otherwise, SignalR Service can't find target connections from a given user ID.
Client identification can be achieved by including a `nameid` claim in each client's JWT token when they're connecting to SignalR Service.
-Then SignalR Service will use the value of `nameid` claim as the user ID of each client connection.
+Then SignalR Service uses the value of the `nameid` claim as the user ID of each client connection.
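For example, a client token's payload might look like the following sketch; the endpoint, hub, user ID, and expiry are placeholder values:

```json
{
  "aud": "https://example.service.signalr.net/client/?hub=myhub",
  "exp": 1735689600,
  "nameid": "user-1"
}
```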
### Sample
Currently, we have the following limitations for REST API requests:
* Header size is a maximum of 16 KB.
* Body size is a maximum of 1 MB.
-If you want to send message larger than 1 MB, use the Management SDK with `persistent` mode.
+If you want to send messages larger than 1 MB, use the Management SDK with `persistent` mode.
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Last updated 11/18/2022
vRealize Operations is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure-level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components: vCenter Server, ESXi, NSX-T Data Center, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter Server, NSX-T Data Center, vSAN, and HCX deployment.
-Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the three typical deployment topologies:
+Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the two typical deployment topologies:
> [!div class="checklist"] > * [On-premises vRealize Operations managing Azure VMware Solution deployment](#on-premises-vrealize-operations-managing-azure-vmware-solution-deployment) > * [vRealize Operations Cloud managing Azure VMware Solution deployment](#vrealize-operations-cloud-managing-azure-vmware-solution-deployment)
-> * [vRealize Operations running on Azure VMware Solution deployment](#vrealize-operations-running-on-azure-vmware-solution-deployment)
## Before you begin * Review the [vRealize Operations Manager product documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) to learn more about deploying vRealize Operations.
VMware vRealize Operations Cloud supports the Azure VMware Solution, including t
> [!IMPORTANT] > Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations/Cloud/com.vmware.vcom.config.doc/GUID-6CDFEDDC-A72C-4AB4-B8E8-84542CC6CE27.html) for step-by-step guide for connecting vRealize Operations Cloud to Azure VMware Solution.
-## vRealize Operations running on Azure VMware Solution deployment
-
-Another option is to deploy an instance of vRealize Operations Manager on a vSphere cluster in the private cloud.
-
->[!IMPORTANT]
->This option isn't currently supported by VMware.
--
-Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter Server, ESXi, NSX-T Data Center, vSAN, and HCX.
## Known limitations

- The **cloudadmin@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-identity.md). Virtual machines (VMs) on Azure VMware Solution don't support in-guest memory collection using VMware Tools. Active and consumed memory utilization continues to work in this case.
backup Move To Azure Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/move-to-azure-monitor-alerts.md
Title: Switch to Azure Monitor based alerts for Azure Backup description: This article describes the new and improved alerting capabilities via Azure Monitor and the process to configure Azure Monitor. Previously updated : 09/14/2022 Last updated : 03/31/2023
The following table lists the differences between classic backup alerts and buil
| **Notification suppression for database backup scenarios** | When there are multiple failures for the same database due to the same error code, a single alert is generated (with the occurrence count updated for each failure type) and a new alert is only generated when the original alert is inactivated. | The behavior is currently different. Here, a separate alert is generated for every backup failure. If there's a window of time when backups will fail for a certain known item (for example, during a maintenance window), you can create a suppression rule to suppress email noise for that backup item during the given period. |
| **Pricing** | There are no additional charges for this solution. | Alerts for critical operations/failures are generated by default (you can view them in the Azure portal or via non-portal interfaces) at no additional charge. However, routing these alerts to a notification channel (such as email) incurs a minor charge for notifications beyond the *free tier* (of 1000 emails per month). Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). |
+> [!NOTE]
+>- If you have existing custom Azure Resource Graph (ARG) queries written on classic alerts data, you'll need to update these queries to fetch information from Azure Monitor-based alerts. You can use the *AlertsManagementResources* table in ARG to query Azure Monitor alerts data.
+>- If you send classic alerts to a Log Analytics workspace, storage account, or event hub via diagnostics settings, you'll also need to update that automation. To send the fired Azure Monitor-based alerts to a destination of your choice, you can create an alert processing rule and action group that routes these alerts to a logic app, webhook, or runbook, which in turn sends them to the required destination.
+ Azure Backup now provides a guided experience via Backup center that allows you to switch to built-in Azure Monitor alerts and notifications with just a few selections. To perform this action, you need the *Backup Contributor* and *Monitoring Contributor* Azure role-based access control (Azure RBAC) roles on the subscription.
Follow these steps:
## Suppress notifications during a planned maintenance window
-For certain scenarios, you might want to suppress notifications for a particular window of time when backups are going to fail. This is especially important for database backups, where log backups could happen as frequently as every 15 minutes, and you don't want to receive a separate notification every 15 minutes for each failure occurrence. In such a scenario, you can create a second alert processing rule that exists alongside the main alert processing rule used for sending notifications. The second alert processing rule won't be linked to an action group, but is used to specify the time for notification types tha notification should be suppressed.
+For certain scenarios, you might want to suppress notifications for a particular window of time when backups are going to fail. This is especially important for database backups, where log backups could happen as frequently as every 15 minutes, and you don't want to receive a separate notification every 15 minutes for each failure occurrence. In such a scenario, you can create a second alert processing rule that exists alongside the main alert processing rule used for sending notifications. The second alert processing rule won't be linked to an action group, but is used to specify the time for notification types that should be suppressed.
By default, the suppression alert processing rule takes priority over the other alert processing rule. If a single fired alert is affected by different alert processing rules of both types, the action groups of that alert will be suppressed.
To create a suppression alert processing rule, follow these steps:
1. Select **Scope**, for example, subscription or resource group, that the alert processing rule should span.
- You can also select more granular filters if you want to suppress notifications only for a particular backup item. For example, if you want to suppress notifications for *testdb1* database within Virtual Machine *VM1*, you can specify filters "where Alert Context (payload) contains /subscriptions/00000000-0000-0000-0000-0000000000000/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachines/VM1/providers/Microsoft.RecoveryServices/backupProtectedItem/SQLDataBase;MSSQLSERVER;testdb1".
+ You can also select more granular filters if you want to suppress notifications only for a particular backup item. For example, if you want to suppress notifications for the *testdb1* database in the virtual machine *VM1*, you can specify filters "where Alert Context (payload) contains `/subscriptions/00000000-0000-0000-0000-0000000000000/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachines/VM1/providers/Microsoft.RecoveryServices/backupProtectedItem/SQLDataBase;MSSQLSERVER;testdb1`".
To get the required format of your required backup item, see the *SourceId field* from the [Alert details page](backup-azure-monitoring-built-in-monitor.md?tabs=recovery-services-vaults#viewing-fired-alerts-in-the-azure-portal).
To configure the same, run the following commands:
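As an illustrative sketch, a suppression rule can be created with the `az monitor alert-processing-rule` command group; the rule name, scope, and maintenance window below are hypothetical:

```azurecli
az monitor alert-processing-rule create \
  --name suppress-backup-maintenance \
  --resource-group testRG \
  --rule-type RemoveAllActionGroups \
  --scopes "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testRG" \
  --schedule-start-datetime "2023-04-01 22:00:00" \
  --schedule-end-datetime "2023-04-02 02:00:00"
```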
## Next steps
-Learn more about [Azure Backup monitoring and reporting](monitoring-and-alerts-overview.md).
------
-
+Learn more about [Azure Backup monitoring and reporting](monitoring-and-alerts-overview.md).
cognitive-services Copy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/copy-model.md
+
+ Title: Copy a custom model
+
+description: This article explains how to copy a custom model to another workspace using the Azure Cognitive Services Custom Translator.
++++ Last updated : 03/31/2023+++
+# Copy a custom model
+
+Copying a model to other workspaces enables model lifecycle management (for example, development → test → production) and increases usage scalability while reducing the training cost.
+
+## Copy model to another workspace
+
+ > [!Note]
+ >
+ > To copy a model from one workspace to another, you must have an **Owner** role in both workspaces.
+ >
+ > The copied model cannot be recopied. You can only rename, delete, or publish a copied model.
+
+1. After successful model training, select the **Model details** blade.
+
+1. Select the **Model Name** to copy.
+
+1. Select **Copy to workspace**.
+
+1. Fill out the target details.
+
+1. Select **Copy model**.
+
+1. A notification panel shows the copy progress. The process should complete fairly quickly.
+
+1. Complete the **workspace**, **project**, and **model name** sections of the copy model dialog window:
+
+ :::image type="content" source="../media/how-to/copy-model-1.png" alt-text="Screenshot illustrating the copy model dialog window.":::
+
+1. A **notifications** window displays the copy process status:
+
+ :::image type="content" source="../media/how-to/copy-model-2.png" alt-text="Screenshot illustrating notification that the copy model is in process.":::
+
+1. A **model details** window appears when the copy process is complete.
+
+ :::image type="content" source="../media/how-to/copy-model-3.png" alt-text="Screenshot illustrating the copy complete dialog window.":::
+
+ > [!Note]
+ >
+ > A dropdown list displays the workspaces available to use. If the target workspace isn't listed, select **Create a new workspace**.
+ > If the selected workspace contains a project for the same language pair, you can select it from the **Project** dropdown list. Otherwise, select **Create a new project** to create one.
+
+1. After the **Copy model** operation completes, the copied model is available in the target workspace and is ready to publish. A **Copied model** watermark is appended to the model name.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to publish/deploy a custom model](publish-model.md).
cognitive-services Platform Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/platform-upgrade.md
+
+ Title: "Platform upgrade - Custom Translator"
+
+description: Custom Translator v1.0 upgrade
++++ Last updated : 03/30/2023+++
+# Custom Translator platform upgrade
+
+> [!CAUTION]
+>
+> On June 02, 2023, Microsoft will retire the Custom Translator v1.0 model platform. Existing v1.0 models must migrate to the v2.0 platform for continued processing and support.
+
+Following measured and consistent high-quality results using models trained on the Custom Translator v2.0 platform, the v1.0 platform will be retired. Custom Translator v2.0 delivers significant improvements in many domains compared to both standard and Custom v1.0 platform translations. Migrate your v1.0 models to the v2.0 platform by June 02, 2023.
+
+## Custom Translator v1.0 upgrade timeline
+
+* **May 01, 2023** → Custom Translator v1.0 model publishing ends. There's no downtime during the v1.0 model migration. All model publishing and in-flight translation requests will continue without disruption until June 02, 2023.
+
+* **May 01, 2023 through June 02, 2023** → Customers voluntarily migrate to v2.0 models.
+
+* **June 08, 2023** → Remaining v1.0 published models migrate automatically and are published by the Custom Translator team.
+
+## Upgrade to v2.0
+
+* **Check to see if you have published v1.0 models**. After signing in to the Custom Translator portal, you'll see a message indicating that you have v1.0 models to upgrade. You can also check to see if a current workspace has v1.0 models by selecting **Workspace settings** and scrolling to the bottom of the page.
+
+* **Use the upgrade wizard**. Follow the steps listed in **Upgrade to the latest version** wizard. Depending on your training data size, it may take from a few hours to a full day to upgrade your models to the v2.0 platform.
+
+## Unpublished and opt-out published models
+
+* For unpublished models, save the model data (training, testing, dictionary) and delete the project.
+
+* For published models that you don't want to upgrade, save your model data (training, testing, dictionary), unpublish the model, and delete the project.
+
+## Next steps
+
+For more support, visit [Azure Cognitive Services support and help options](../../cognitive-services-support-options.md).
cognitive-services Cognitive Services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-data-loss-prevention.md
Previously updated : 07/02/2021 #Required; mm/dd/yyyy format. Last updated : 03/31/2023 #Required; mm/dd/yyyy format.
There are two parts to enabling data loss prevention. First, the property `restrictOutboundNetworkAccess` is set to `true`. Then, the approved outbound URLs are specified in the `allowedFqdnList` property.
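Both properties live on the Cognitive Services account resource. As a minimal sketch, assuming placeholder resource path segments and an example domain, an `az rest` call can patch them:

```azurecli
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<account-name>?api-version=2021-04-30" \
  --body '{"properties": {"restrictOutboundNetworkAccess": true, "allowedFqdnList": ["contoso.com"]}}'
```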
The following services support data loss prevention configuration:
+- Azure OpenAI
- Computer Vision - Content Moderator - Custom Vision
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Use the table below to find which model versions are supported by each feature:
| Sentiment Analysis and opinion mining | `2021-10-01`, `2022-06-01`,`2022-10-01`,`2022-11-01*` | | Language Detection | `2021-11-20`, `2022-10-01*` | | Entity Linking | `2021-06-01*` |
-| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview` |
+| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview`, `2023-02-01-preview**` |
| Personally Identifiable Information (PII) detection | `2020-07-01`, `2021-01-15*`, `2023-01-01-preview**` | | PII detection for conversations (Preview) | `2022-05-15-preview**` | | Question answering | `2021-10-01*` |
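To pin one of these versions at request time, the Language REST API accepts a `modelVersion` parameter. The following is a sketch only; the resource endpoint, key, and input text are placeholders, and it assumes the GA `analyze-text` route:

```bash
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/language/:analyze-text?api-version=2022-05-01" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{
        "kind": "EntityRecognition",
        "parameters": { "modelVersion": "2021-06-01" },
        "analysisInput": { "documents": [ { "id": "1", "language": "en", "text": "Contoso was founded in 1975." } ] }
      }'
```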
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* China East 2 (Authoring and Prediction) * China North 2 (Prediction) * New model evaluation updates for Conversational language understanding and Orchestration workflow.
-* New model version ('2023-01-01-preview') for Text Analytics for health featuring new [entity categories](./text-analytics-for-health/concepts/health-entity-categories.md) for social determinants of health
+* New model version ('2023-01-01-preview') for Text Analytics for health featuring new [entity categories](./text-analytics-for-health/concepts/health-entity-categories.md) for social determinants of health.
+* New model version ('2023-02-01-preview') for named entity recognition featuring improved accuracy.
## December 2022
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Title: 'Quickstart: Deploy your first container app'
-description: Deploy your first application to Azure Container Apps.
+ Title: 'Quickstart: Deploy your first container app with containerapp up'
+description: Deploy your first application to Azure Container Apps using the Azure CLI containerapp up command.
-+ Previously updated : 03/21/2022-- Last updated : 03/29/2023++ ms.devlang: azurecli
-# Quickstart: Deploy your first container app
+# Quickstart: Deploy your first container app with containerapp up
The Azure Container Apps service enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while you leave behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
-In this quickstart, you create a secure Container Apps environment and deploy your first container app.
+In this quickstart, you create and deploy your first container app using the `az containerapp up` command.
## Prerequisites
In this quickstart, you create a secure Container Apps environment and deploy yo
- If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). - Install the [Azure CLI](/cli/azure/install-azure-cli).
+## Setup
-# [Bash](#tab/bash)
+To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process.
-To create the environment, run the following command:
+# [Bash](#tab/bash)
```azurecli
-az containerapp env create \
- --name $CONTAINERAPPS_ENVIRONMENT \
- --resource-group $RESOURCE_GROUP \
- --location $LOCATION
+az login
``` # [Azure PowerShell](#tab/azure-powershell)
-A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
- ```azurepowershell
-$WorkspaceArgs = @{
- Name = 'myworkspace'
- ResourceGroupName = $ResourceGroupName
- Location = $Location
- PublicNetworkAccessForIngestion = 'Enabled'
- PublicNetworkAccessForQuery = 'Enabled'
-}
-New-AzOperationalInsightsWorkspace @WorkspaceArgs
-$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
-$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+az login
```
-To create the environment, run the following command:
++
+Ensure you're running the latest version of the CLI via the upgrade command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az upgrade
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
```azurepowershell
-$EnvArgs = @{
- EnvName = $ContainerAppsEnvironment
- ResourceGroupName = $ResourceGroupName
- Location = $Location
- AppLogConfigurationDestination = 'log-analytics'
- LogAnalyticConfigurationCustomerId = $WorkspaceId
- LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
-}
-
-New-AzContainerAppManagedEnv @EnvArgs
+az upgrade
```
-## Create a container app
-
-Now that you have an environment created, you can deploy your first container app. With the `containerapp create` command, deploy a container image to Azure Container Apps.
+Next, install or update the Azure Container Apps extension for the CLI.
# [Bash](#tab/bash) ```azurecli
-az containerapp create \
- --name my-container-app \
- --resource-group $RESOURCE_GROUP \
- --environment $CONTAINERAPPS_ENVIRONMENT \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
- --target-port 80 \
- --ingress 'external' \
- --query properties.configuration.ingress.fqdn
+az extension add --name containerapp --upgrade
```
-> [!NOTE]
-> Make sure the value for the `--image` parameter is in lower case.
-
-By setting `--ingress` to `external`, you make the container app available to public requests.
- # [Azure PowerShell](#tab/azure-powershell) + ```azurepowershell
-$ImageParams = @{
- Name = 'my-container-app'
- Image = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
-}
-$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
-$EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
-
-$AppArgs = @{
- Name = 'my-container-app'
- Location = $Location
- ResourceGroupName = $ResourceGroupName
- ManagedEnvironmentId = $EnvId
- IdentityType = 'SystemAssigned'
- TemplateContainer = $TemplateObj
- IngressTargetPort = 80
- IngressExternal = $true
-
-}
-New-AzContainerApp @AppArgs
+az extension add --name containerapp --upgrade
```
-> [!NOTE]
-> Make sure the value for the `Image` parameter is in lower case.
-
-By setting `IngressExternal` to `$true`, you make the container app available to public requests.
-
-## Verify deployment
+Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces if you haven't already registered them in your Azure subscription.
# [Bash](#tab/bash)
-The `create` command returns the fully qualified domain name for the container app. Copy this location to a web browser.
+```azurecli
+az provider register --namespace Microsoft.App
+```
-# [Azure PowerShell](#tab/azure-powershell)
+```azurecli
+az provider register --namespace Microsoft.OperationalInsights
+```
-Get the fully qualified domain name for the container app.
+# [Azure PowerShell](#tab/azure-powershell)
```azurepowershell
-(Get-AzContainerApp -Name $AppArgs.Name -ResourceGroupName $ResourceGroupName).IngressFqdn
+az provider register --namespace Microsoft.App
```
-Copy this location to a web browser.
+```azurepowershell
+az provider register --namespace Microsoft.OperationalInsights
+```
- The following message is displayed when the container app is deployed:
+Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
-## Clean up resources
+## Create and deploy the container app
-If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
+Create and deploy your first container app with the `containerapp up` command. This command will:
+
+- Create the resource group
+- Create the Container Apps environment
+- Create the Log Analytics workspace
+- Create and deploy the container app using a public container image
+
+Note that if any of these resources already exist, the command will use them instead of creating new ones.
->[!CAUTION]
-> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
# [Bash](#tab/bash) ```azurecli
-az group delete --name $RESOURCE_GROUP
+az containerapp up \
+ --name my-container-app \
+ --resource-group my-container-apps \
+ --location centralus \
+ --environment 'my-container-apps' \
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --target-port 80 \
+ --ingress external \
+ --query properties.configuration.ingress.fqdn
``` # [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-Remove-AzResourceGroup -Name $ResourceGroupName -Force
+```powershell
+az containerapp up `
+ --name my-container-app `
+ --resource-group my-container-apps `
+ --location centralus `
+ --environment my-container-apps `
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest `
+ --target-port 80 `
+ --ingress external `
+ --query properties.configuration.ingress.fqdn
```
+> [!NOTE]
+> Make sure the value for the `--image` parameter is in lower case.
+
+By setting `--ingress` to `external`, you make the container app available to public requests.
+
+## Verify deployment
+
+The `up` command returns the fully qualified domain name for the container app. Copy this location to a web browser.
+
+The following message is displayed when the container app is deployed:
++
+## Clean up resources
+
+If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
++
+```azurecli
+az group delete --name my-container-apps
+```
+ > [!TIP] > Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
Title: "Quickstart: Deploy your code to Azure Container Apps"
-description: Code to cloud deploying your application to Azure Container Apps
+ Title: "Quickstart: Build and deploy your app from a repository to Azure Container Apps"
+description: Build your container app from a local or GitHub source repository and deploy in Azure Container Apps using az containerapp up.
-+ Previously updated : 05/11/2022-
-zone_pivot_groups: container-apps-image-build-type
Last updated : 03/29/2023+
+zone_pivot_groups: container-apps-image-build-from-repo
-# Quickstart: Deploy your code to Azure Container Apps
+
+# Quickstart: Build and deploy your container app from a repository in Azure Container Apps
This article demonstrates how to build and deploy a microservice to Azure Container Apps from a source repository using the programming language of your choice.
-This quickstart is the first in a series of articles that walk you through how to use core capabilities within Azure Container Apps. The first step is to create a back end web API service that returns a static collection of music albums.
+In this quickstart, you create a backend web API service that returns a static collection of music albums. After completing this quickstart, you can continue to [Tutorial: Communication between microservices in Azure Container Apps](communicate-between-microservices.md) to learn how to deploy a front end application that calls the API.
+
+> [!NOTE]
+> You can also build and deploy this sample application using the `az containerapp up` command. For more information, see [Tutorial: Build and deploy your app to Azure Container Apps](tutorial-code-to-cloud.md).
-The following screenshot shows the output from the album API deployed in this quickstart.
+The following screenshot shows the output from the album API service you deploy.
:::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint."::: ## Prerequisites
-To complete this project, you'll need the following items:
+To complete this project, you need the following items:
+ | Requirement | Instructions | |--|--| | Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
-| GitHub Account | Get an account for [free](https://github.com/join). |
+| GitHub Account | Get one for [free](https://github.com/join). |
| git | [Install git](https://git-scm.com/downloads) | | Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).| ::: zone-end | Requirement | Instructions | |--|--|
-| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
-| GitHub Account | Get an account for [free](https://github.com/join). |
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Get one for [free](https://github.com/join). |
| git | [Install git](https://git-scm.com/downloads) | | Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
-| Docker Desktop | Docker provides installers that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). <br><br>From your command prompt, type `docker` to ensure Docker is running. |
::: zone-end -
-Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
--
-## Prepare the GitHub repository
-
-Navigate to the repository for your preferred language and fork the repository.
-
-# [C#](#tab/csharp)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account.
+## Setup
-Now you can clone your fork of the sample repository.
+To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process.
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Bash](#tab/bash)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-csharp.git code-to-cloud
+```azurecli
+az login
```
-# [Go](#tab/go)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account.
-
-Now you can clone your fork of the sample repository.
-
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Azure PowerShell](#tab/azure-powershell)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-go.git code-to-cloud
+```azurepowershell
+az login
```
-# [JavaScript](#tab/javascript)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account.
+
-Now you can clone your fork of the sample repository.
+Ensure you're running the latest version of the CLI via the upgrade command.
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Bash](#tab/bash)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-javascript.git code-to-cloud
+```azurecli
+az upgrade
```
-# [Python](#tab/python)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account.
-
-Now you can clone your fork of the sample repository.
-
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Azure PowerShell](#tab/azure-powershell)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-python.git code-to-cloud
+```azurepowershell
+az upgrade
```
-Next, change the directory into the root of the cloned repo.
-
-```console
-cd code-to-cloud/src
-```
-
-## Create an Azure Resource Group
-
-Create a resource group to organize the services related to your container app deployment.
+Next, install or update the Azure Container Apps extension for the CLI.
# [Bash](#tab/bash) ```azurecli
-az group create \
- --name $RESOURCE_GROUP \
- --location "$LOCATION"
+az extension add --name containerapp --upgrade
``` # [Azure PowerShell](#tab/azure-powershell) + ```azurepowershell
-New-AzResourceGroup -Location $Location -Name $ResourceGroup
+az extension add --name containerapp --upgrade
```
-## Create an Azure Container Registry
-
-Next, create an Azure Container Registry (ACR) instance in your resource group to store the album API container image once it's built.
+Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces if you haven't already registered them in your Azure subscription.
# [Bash](#tab/bash) ```azurecli
-az acr create \
- --resource-group $RESOURCE_GROUP \
- --name $ACR_NAME \
- --sku Basic \
- --admin-enabled true
+az provider register --namespace Microsoft.App
+```
+
+```azurecli
+az provider register --namespace Microsoft.OperationalInsights
``` # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
-$acr = New-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $ACRName -Sku Basic -EnableAdminUser
+az provider register --namespace Microsoft.App
+```
+
+```azurepowershell
+az provider register --namespace Microsoft.OperationalInsights
```
+Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
+
-## Build your application
+# [Bash](#tab/bash)
-With [ACR tasks](../container-registry/container-registry-tasks-overview.md), you can build and push the docker image for the album API without installing Docker locally.
+Define the following variables in your bash shell.
-### Build the container with ACR
+```azurecli
+RESOURCE_GROUP="album-containerapps"
+LOCATION="canadacentral"
+ENVIRONMENT="env-album-containerapps"
+API_NAME="album-api"
+FRONTEND_NAME="album-ui"
+GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
+```
-Run the following command to initiate the image build and push process using ACR. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+Before you run this command, make sure to replace `<YOUR_GITHUB_USERNAME>` with your GitHub username.
-# [Bash](#tab/bash)
+Next, define a container registry name unique to you.
```azurecli
-az acr build --registry $ACR_NAME --image $API_NAME .
+ACR_NAME="acaalbums"$GITHUB_USERNAME
``` # [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-az acr build --registry $ACRName --image $APIName .
-```
+Define the following variables in your PowerShell console.
--
-Output from the `az acr build` command shows the upload progress of the source code to Azure and the details of the `docker build` and `docker push` operations.
+```powershell
+$RESOURCE_GROUP="album-containerapps"
+$LOCATION="canadacentral"
+$ENVIRONMENT="env-album-containerapps"
+$API_NAME="album-api"
+$FRONTEND_NAME="album-ui"
+$GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
+```
+Before you run this command, make sure to replace `<YOUR_GITHUB_USERNAME>` with your GitHub username.
+Next, define a container registry name unique to you.
-## Build your application
+```powershell
+$ACR_NAME="acaalbums"+$GITHUB_USERNAME
+```
-The following steps, demonstrate how to build your container image locally using Docker and push the image to the new container registry.
+
-### Build the container with Docker
-The following command builds a container image for the album API and tags it with the fully qualified name of the ACR login server. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
# [Bash](#tab/bash)
+Define the following variables in your bash shell.
+ ```azurecli
-docker build --tag $ACR_NAME.azurecr.io/$API_NAME .
+RESOURCE_GROUP="album-containerapps"
+LOCATION="canadacentral"
+ENVIRONMENT="env-album-containerapps"
+API_NAME="album-api"
``` # [Azure PowerShell](#tab/azure-powershell)
+Define the following variables in your PowerShell console.
+ ```powershell
-docker build --tag "$ACRName.azurecr.io/$APIName" .
+$RESOURCE_GROUP="album-containerapps"
+$LOCATION="canadacentral"
+$ENVIRONMENT="env-album-containerapps"
+$API_NAME="album-api"
```
-### Push the image to your container registry
-First, sign in to your Azure Container Registry.
+## Prepare the GitHub repository
-# [Bash](#tab/bash)
+In a browser window, go to the GitHub repository for your preferred language and fork the repository.
-```azurecli
-az acr login --name $ACR_NAME
-```
+# [C#](#tab/csharp)
-# [Azure PowerShell](#tab/azure-powershell)
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account.
-```powershell
-az acr login --name $ACRName
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-csharp.git code-to-cloud
``` -
-Now, push the image to your registry.
+# [Go](#tab/go)
-# [Bash](#tab/bash)
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account.
-```azurecli
-docker push $ACR_NAME.azurecr.io/$API_NAME
-```
-# [Azure PowerShell](#tab/azure-powershell)
+Now you can clone your fork of the sample repository.
-```powershell
-docker push "$ACRName.azurecr.io/$APIName"
-```
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
-
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-go.git code-to-cloud
+```
::: zone-end
-## Create a Container Apps environment
+# [JavaScript](#tab/javascript)
-The Azure Container Apps environment acts as a secure boundary around a group of container apps.
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account.
-Create the Container Apps environment using the following command.
-# [Bash](#tab/bash)
+Now you can clone your fork of the sample repository.
-```azurecli
-az containerapp env create \
- --name $ENVIRONMENT \
- --resource-group $RESOURCE_GROUP \
- --location "$LOCATION"
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-javascript.git code-to-cloud
```
-# [Azure PowerShell](#tab/azure-powershell)
-A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
+# [Python](#tab/python)
-```azurepowershell
-$WorkspaceArgs = @{
- Name = 'my-album-workspace'
- ResourceGroupName = $ResourceGroup
- Location = $Location
- PublicNetworkAccessForIngestion = 'Enabled'
- PublicNetworkAccessForQuery = 'Enabled'
-}
-New-AzOperationalInsightsWorkspace @WorkspaceArgs
-$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup -Name $WorkspaceArgs.Name).CustomerId
-$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroup -Name $WorkspaceArgs.Name).PrimarySharedKey
-```
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account.
-To create the environment, run the following command:
-```azurepowershell
-$EnvArgs = @{
- EnvName = $Environment
- ResourceGroupName = $ResourceGroup
- Location = $Location
- AppLogConfigurationDestination = 'log-analytics'
- LogAnalyticConfigurationCustomerId = $WorkspaceId
- LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
-}
-
-New-AzContainerAppManagedEnv @EnvArgs
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-python.git code-to-cloud
``` +
-## Deploy your image to a container app
-Now that you have an environment created, you can create and deploy your container app with the `az containerapp create` command.
+## Build and deploy the container app
-Create and deploy your container app with the following command.
+Build and deploy your first container app from your local git repository with the `containerapp up` command. This command will:
+
+- Create the resource group
+- Create an Azure Container Registry
+- Build the container image and push it to the registry
+- Create the Container Apps environment with a Log Analytics workspace
+- Create and deploy the container app using the built container image
+
+The `up` command uses the Dockerfile in the root of the repository to build the container image. The target port is defined by the `EXPOSE` instruction in the Dockerfile. A Dockerfile isn't required to build a container app.
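For reference, the relevant Dockerfile instruction looks like the following sketch; the sample repositories each ship their own Dockerfile, and 3500 is the album API's listening port:

```dockerfile
# The port the container listens on; `containerapp up` uses it as the target port.
EXPOSE 3500
```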
# [Bash](#tab/bash) ```azurecli
-az containerapp create \
+az containerapp up \
--name $API_NAME \ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
--environment $ENVIRONMENT \
- --image $ACR_NAME.azurecr.io/$API_NAME \
- --target-port 3500 \
- --ingress 'external' \
- --registry-server $ACR_NAME.azurecr.io \
- --query properties.configuration.ingress.fqdn
+ --source code-to-cloud/src
```
-* By setting `--ingress` to `external`, your container app will be accessible from the public internet.
+# [Azure PowerShell](#tab/azure-powershell)
-* The `target-port` is set to `3500` to match the port that the container is listening to for requests.
+```powershell
+az containerapp up `
+ --name $API_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION `
+ --environment $ENVIRONMENT `
+ --source code-to-cloud/src
+```
-* Without a `query` property, the call to `az containerapp create` returns a JSON response that includes a rich set of details about the application. Adding a query parameter filters the output to just the app's fully qualified domain name (FQDN).
+
-# [Azure PowerShell](#tab/azure-powershell)
-To create the container app, create template objects that you'll pass in as arguments to the `New-AzContainerApp` command.
+## Build and deploy the container app
-Create a template object to define your container image parameters.
+Build and deploy your first container app from your forked GitHub repository with the `containerapp up` command. This command will:
-```azurepowershell
-$ImageParams = @{
- Name = $APIName
- Image = $ACRName + '.azurecr.io/' + $APIName + ':latest'
-}
-$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
-```
+- Create the resource group
+- Create an Azure Container Registry
+- Build the container image and push it to the registry
+- Create the Container Apps environment with a Log Analytics workspace
+- Create and deploy the container app using the built container image
+- Create a GitHub Actions workflow to build and deploy the container app
-You'll need run the following command to get your registry credentials.
+The `up` command uses the Dockerfile in the root of the repository to build the container image. The target port is defined by the `EXPOSE` instruction in the Dockerfile. A Dockerfile isn't required to build a container app.
-```azurepowershell
-$RegistryCredentials = Get-AzContainerRegistryCredential -Name $ACRName -ResourceGroupName $ResourceGroup
-```
+Replace `<YOUR_GITHUB_REPOSITORY_NAME>` with your GitHub repository name in the form `https://github.com/<owner>/<repository-name>` or `<owner>/<repository-name>`.
-Create a registry credential object to define your registry information, and a secret object to define your registry password. The `PasswordSecretRef` refers to the `Name` in the secret object.
+# [Bash](#tab/bash)
-```azurepowershell
-$RegistryArgs = @{
- Server = $ACRName + '.azurecr.io'
- PasswordSecretRef = 'registrysecret'
- Username = $RegistryCredentials.Username
-}
-$RegistryObj = New-AzContainerAppRegistryCredentialObject @RegistryArgs
-
-$SecretObj = New-AzContainerAppSecretObject -Name 'registrysecret' -Value $RegistryCredentials.Password
+```azurecli
+az containerapp up \
+ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --environment $ENVIRONMENT \
+ --context-path ./src \
+ --repo <YOUR_GITHUB_REPOSITORY_NAME>
```
-Get your environment ID.
+# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-$EnvId = (Get-AzContainerAppManagedEnv -EnvName $Environment -ResourceGroup $ResourceGroup).Id
+```powershell
+az containerapp up `
+ --name $API_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION `
+ --environment $ENVIRONMENT `
+ --context-path ./src `
+ --repo <YOUR_GITHUB_REPOSITORY_NAME>
```
-Create the container app.
+
-```azurepowershell
-$AppArgs = @{
- Name = $APIName
- Location = $Location
- ResourceGroupName = $ResourceGroup
- ManagedEnvironmentId = $EnvId
- TemplateContainer = $TemplateObj
- ConfigurationRegistry = $RegistryObj
- ConfigurationSecret = $SecretObj
- IngressTargetPort = 3500
- IngressExternal = $true
-}
-$MyApp = New-AzContainerApp @AppArgs
-
-# show the app's fully qualified domain name (FQDN).
-$MyApp.IngressFqdn
-```
+Using the URL and the user code displayed in the terminal, go to the GitHub device activation page in a browser and enter the user code. Follow the prompts to authorize the Azure CLI to access your GitHub repository.
+
-* By setting `IngressExternal` to `external`, your container app will be accessible from the public internet.
-* The `IngressTargetPort` parameter is set to `3500` to match the port that the container is listening to for requests.
+The `up` command creates a GitHub Actions workflow in your repository's *.github/workflows* folder. The workflow is triggered to build and deploy your container app when you push changes to the repository.
## Verify deployment
-Copy the FQDN to a web browser. From your web browser, navigate to the `/albums` endpoint of the FQDN.
+Copy the FQDN to a web browser. From your web browser, go to the `/albums` endpoint of the FQDN.
:::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint."::: ## Clean up resources
-If you're not going to continue on to the [Communication between microservices](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart. Run the following command to delete the resource group along with all the resources created in this quickstart.
+If you're not going to continue on to the [Deploy a frontend](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart with the following command.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If the group contains resources outside the scope of this quickstart, they are also deleted.
# [Bash](#tab/bash)
az group delete --name $RESOURCE_GROUP
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-Remove-AzResourceGroup -Name $ResourceGroup -Force
+```powershell
+az group delete --name $RESOURCE_GROUP
```
Remove-AzResourceGroup -Name $ResourceGroup -Force
## Next steps
-This quickstart is the entrypoint for a set of progressive tutorials that showcase the various features within Azure Container Apps. Continue on to learn how to enable communication from a web front end that calls the API you deployed in this article.
- > [!div class="nextstepaction"] > [Tutorial: Communication between microservices](communicate-between-microservices.md)
container-apps Tutorial Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-code-to-cloud.md
+
+ Title: "Tutorial: Build and deploy your app to Azure Container Apps"
+description: Build and deploy your app to Azure Container Apps with az containerapp create command.
++++ Last updated : 05/11/2022+
+zone_pivot_groups: container-apps-image-build-type
+++
+# Tutorial: Build and deploy your app to Azure Container Apps
+
+This article demonstrates how to build and deploy a microservice to Azure Container Apps from a source repository using the programming language of your choice.
+
+This tutorial is the first in a series of articles that walk you through how to use core capabilities within Azure Container Apps. The first step is to create a back end web API service that returns a static collection of music albums.
+
+> [!NOTE]
+> You can also build and deploy this app using the [az containerapp up](/cli/azure/containerapp#az_containerapp_up) command by following the instructions in the [Quickstart: Build and deploy an app to Azure Container Apps from a repository](quickstart-code-to-cloud.md) article. The `az containerapp up` command is a fast and convenient way to build and deploy your app to Azure Container Apps using a single command. However, it doesn't provide the same level of customization for your container app.
+
+ The next tutorial in the series will build and deploy the front end web application to Azure Container Apps.
+
+The following screenshot shows the output from the album API deployed in this tutorial.
++
+## Prerequisites
+
+To complete this project, you need the following items:
++
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Sign up for [free](https://github.com/join). |
+| git | [Install git](https://git-scm.com/downloads) |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+++
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Sign up for [free](https://github.com/join). |
+| git | [Install git](https://git-scm.com/downloads) |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+| Docker Desktop | Docker provides installers that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). <br><br>From your command prompt, type `docker` to ensure Docker is running. |
+++
+Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
+++
+## Prepare the GitHub repository
+
+Navigate to the repository for your preferred language and fork the repository.
+
+# [C#](#tab/csharp)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-csharp.git code-to-cloud
+```
+
+# [Go](#tab/go)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-go.git code-to-cloud
+```
+
+# [JavaScript](#tab/javascript)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-javascript.git code-to-cloud
+```
+
+# [Python](#tab/python)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-python.git code-to-cloud
+```
+++
+Next, change directory into the *src* folder of the cloned repo.
+
+```console
+cd code-to-cloud/src
+```
+
+## Create an Azure resource group
+
+Create a resource group to organize the services related to your container app deployment.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group create \
+ --name $RESOURCE_GROUP \
+ --location "$LOCATION"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup -Location $Location -Name $ResourceGroup
+```
+++
+## Create an Azure Container Registry
+
+Next, create an Azure Container Registry (ACR) instance in your resource group to store the album API container image once it's built.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr create \
+ --resource-group $RESOURCE_GROUP \
+ --name $ACR_NAME \
+ --sku Basic \
+ --admin-enabled true
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$acr = New-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $ACRName -Sku Basic -EnableAdminUser
+```
++++
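+Optionally, you can confirm the registry and view its login server name, which becomes the prefix of the image tags used later in this article. A quick check with the Azure CLI:
+
+```azurecli
+az acr show --name $ACR_NAME --query loginServer --output tsv
+```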
+## Build your application
+
+With [ACR tasks](../container-registry/container-registry-tasks-overview.md), you can build and push the Docker image for the album API without installing Docker locally.
+
+### Build the container with ACR
+
+Run the following command to initiate the image build and push process using ACR. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr build --registry $ACR_NAME --image $API_NAME .
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+az acr build --registry $ACRName --image $APIName .
+```
+++
+Output from the `az acr build` command shows the upload progress of the source code to Azure and the details of the `docker build` and `docker push` operations.
+++
+## Build your application
+
+The following steps demonstrate how to build your container image locally using Docker and push the image to the new container registry.
+
+### Build the container with Docker
+
+The following command builds a container image for the album API and tags it with the fully qualified name of the ACR login server. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+
+# [Bash](#tab/bash)
+
+```azurecli
+docker build --tag $ACR_NAME.azurecr.io/$API_NAME .
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+docker build --tag "$ACRName.azurecr.io/$APIName" .
+```
+++
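+Optionally, you can run the image locally before pushing it, to confirm the container starts. A sketch, assuming the album API listens on port 3500 (the same port used as the ingress target port later in this article):
+
+```azurecli
+docker run --rm -p 3500:3500 $ACR_NAME.azurecr.io/$API_NAME
+```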
+### Push the image to your container registry
+
+First, sign in to your Azure Container Registry.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr login --name $ACR_NAME
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+az acr login --name $ACRName
+```
+++
+Now, push the image to your registry.
+
+# [Bash](#tab/bash)
+
+```azurecli
+docker push $ACR_NAME.azurecr.io/$API_NAME
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+docker push "$ACRName.azurecr.io/$APIName"
+```
++++
+## Create a Container Apps environment
+
+The Azure Container Apps environment acts as a secure boundary around a group of container apps.
+
+Create the Container Apps environment using the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env create \
+ --name $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location "$LOCATION"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'my-album-workspace'
+ ResourceGroupName = $ResourceGroup
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroup -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $Environment
+ ResourceGroupName = $ResourceGroup
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
+```
+++
+## Deploy your image to a container app
+
+Now that you have an environment created, create and deploy your container app with the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp create \
+ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --environment $ENVIRONMENT \
+ --image $ACR_NAME.azurecr.io/$API_NAME \
+ --target-port 3500 \
+ --ingress 'external' \
+ --registry-server $ACR_NAME.azurecr.io \
+ --query properties.configuration.ingress.fqdn
+```
+
+* By setting `--ingress` to `external`, your container app is accessible from the public internet.
+
+* The `--target-port` is set to `3500` to match the port on which the container listens for requests.
+
+* Without the `--query` parameter, the call to `az containerapp create` returns a JSON response that includes a rich set of details about the application. Adding `--query properties.configuration.ingress.fqdn` filters the output to just the app's fully qualified domain name (FQDN).
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+To create the container app, create template objects that you pass in as arguments to the `New-AzContainerApp` command.
+
+Create a template object to define your container image parameters.
+
+```azurepowershell
+$ImageParams = @{
+ Name = $APIName
+ Image = $ACRName + '.azurecr.io/' + $APIName + ':latest'
+}
+$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
+```
+
+Run the following command to get your registry credentials.
+
+```azurepowershell
+$RegistryCredentials = Get-AzContainerRegistryCredential -Name $ACRName -ResourceGroupName $ResourceGroup
+```
+
+Create a registry credential object to define your registry information, and a secret object to define your registry password. The `PasswordSecretRef` refers to the `Name` in the secret object.
+
+```azurepowershell
+$RegistryArgs = @{
+ Server = $ACRName + '.azurecr.io'
+ PasswordSecretRef = 'registrysecret'
+ Username = $RegistryCredentials.Username
+}
+$RegistryObj = New-AzContainerAppRegistryCredentialObject @RegistryArgs
+
+$SecretObj = New-AzContainerAppSecretObject -Name 'registrysecret' -Value $RegistryCredentials.Password
+```
+
+Get your environment ID.
+
+```azurepowershell
+$EnvId = (Get-AzContainerAppManagedEnv -EnvName $Environment -ResourceGroupName $ResourceGroup).Id
+```
+
+Create the container app.
+
+```azurepowershell
+$AppArgs = @{
+ Name = $APIName
+ Location = $Location
+ ResourceGroupName = $ResourceGroup
+ ManagedEnvironmentId = $EnvId
+ TemplateContainer = $TemplateObj
+ ConfigurationRegistry = $RegistryObj
+ ConfigurationSecret = $SecretObj
+ IngressTargetPort = 3500
+ IngressExternal = $true
+}
+$MyApp = New-AzContainerApp @AppArgs
+
+# show the app's fully qualified domain name (FQDN).
+$MyApp.IngressFqdn
+```
+
+* By setting `IngressExternal` to `$true`, your container app is accessible from the public internet.
+* The `IngressTargetPort` parameter is set to `3500` to match the port on which the container listens for requests.
+++
+## Verify deployment
+
+Copy the FQDN into a web browser, and then go to the `/albums` endpoint of the FQDN.
++
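+You can also check the endpoint from the command line. A sketch, where `<your-app-fqdn>` is the FQDN returned by the `az containerapp create` command:
+
+```azurecli
+curl "https://<your-app-fqdn>/albums"
+```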
+## Clean up resources
+
+If you're not going to continue on to the [Communication between microservices](communicate-between-microservices.md) tutorial, you can remove the Azure resources created in this tutorial. Run the following command to delete the resource group along with all the resources it contains.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroup -Force
+```
+++
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+This tutorial is the entry point for a set of progressive tutorials that showcase the various features within Azure Container Apps. Continue on to learn how to enable communication from a web front end that calls the API you deployed in this article.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Communication between microservices](communicate-between-microservices.md)
container-apps Tutorial Deploy First App Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-deploy-first-app-cli.md
+
+ Title: 'Tutorial: Deploy your first container app'
+description: Deploy your first application to Azure Container Apps.
++++ Last updated : 03/21/2022++
+ms.devlang: azurecli
++
+# Tutorial: Deploy your first container app
+
+The Azure Container Apps service enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while you leave behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
+
+In this tutorial, you create a secure Container Apps environment and deploy your first container app.
+
+> [!NOTE]
+> You can also deploy this app using the [az containerapp up](/cli/azure/containerapp#az_containerapp_up) command by following the instructions in the [Quickstart: Deploy your first container app with containerapp up](get-started.md) article. The `az containerapp up` command is a fast and convenient way to build and deploy your app to Azure Container Apps using a single command. However, it doesn't provide the same level of customization for your container app.
++
+## Prerequisites
+
+- An Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
++
+# [Bash](#tab/bash)
+
+To create the environment, run the following command:
+
+```azurecli
+az containerapp env create \
+ --name $CONTAINERAPPS_ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
+```
+++
+## Create a container app
+
+Now that you have an environment created, you can deploy your first container app. With the `containerapp create` command, deploy a container image to Azure Container Apps.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp create \
+ --name my-container-app \
+ --resource-group $RESOURCE_GROUP \
+ --environment $CONTAINERAPPS_ENVIRONMENT \
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --target-port 80 \
+ --ingress 'external' \
+ --query properties.configuration.ingress.fqdn
+```
+
+> [!NOTE]
+> Make sure the value for the `--image` parameter is in lower case.
+
+By setting `--ingress` to `external`, you make the container app available to public requests.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$ImageParams = @{
+ Name = 'my-container-app'
+ Image = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
+}
+$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
+$EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
+
+$AppArgs = @{
+ Name = 'my-container-app'
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ ManagedEnvironmentId = $EnvId
+ IdentityType = 'SystemAssigned'
+ TemplateContainer = $TemplateObj
+ IngressTargetPort = 80
+ IngressExternal = $true
+}
+New-AzContainerApp @AppArgs
+```
+
+> [!NOTE]
+> Make sure the value for the `Image` parameter is in lower case.
+
+By setting `IngressExternal` to `$true`, you make the container app available to public requests.
+++
+## Verify deployment
+
+# [Bash](#tab/bash)
+
+The `create` command returns the fully qualified domain name for the container app. Copy this location to a web browser.
+
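+If you need the location again later, you can query it with `az containerapp show`:
+
+```azurecli
+az containerapp show \
+  --name my-container-app \
+  --resource-group $RESOURCE_GROUP \
+  --query properties.configuration.ingress.fqdn
+```
+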
+# [Azure PowerShell](#tab/azure-powershell)
+
+Get the fully qualified domain name for the container app.
+
+```azurepowershell
+(Get-AzContainerApp -Name $AppArgs.Name -ResourceGroupName $ResourceGroupName).IngressFqdn
+```
+
+Copy this location to a web browser.
+++
+ The following message is displayed when the container app is deployed:
++
+## Clean up resources
+
+If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this tutorial.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
+```
+++
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Communication between microservices](communicate-between-microservices.md)
container-registry Container Registry Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-skus.md
Azure Container Registry is available in multiple service tiers (also known as S
The Basic, Standard, and Premium tiers all provide the same programmatic capabilities. They also all benefit from [image storage][container-registry-storage] managed entirely by Azure. Choosing a higher-level tier provides more performance and scale. With multiple service tiers, you can get started with Basic, then convert to Standard and Premium as your registry usage increases.
+For example:
+
+- If you purchase a Basic tier registry, it includes 10 GB of storage at a price of $0.167 per day. Prices are in US dollars.
+- If your Basic tier registry uses 25 GB of storage, you pay $0.003 per GB per day × 15 GB = $0.045 per day for the additional 15 GB.
+- The total for a Basic registry with 25 GB of storage is therefore $0.167 + $0.045 = $0.212 per day, plus other related charges such as networking and builds. For details, see [Pricing - Container Registry](https://azure.microsoft.com/pricing/details/container-registry/).
++ ## Service tier features and limits The following table details the features and registry limits of the Basic, Standard, and Premium service tiers.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
The only operation possible when the encryption key has been revoked is account
### Assign a new managed-identity to the restored database account to continue accessing or recover access to the database account
+A user-assigned identity is tied to a specific Azure Cosmos DB account. Whenever you assign a user-assigned identity to an account, Azure Resource Manager forwards the request to the managed service identities service to make this connection. Currently, user-identity information is carried over from the source database account to the target database account during a restore (for both continuous and periodic backup restore) of customer-managed keys with a user-assigned identity.
+
+Because the identity metadata is bound to the source database account, and the restore workflow doesn't rescope the identity to the target database account, the restored database accounts end up in a bad state and become inaccessible after the source account is deleted and the identity's renewal time expires.
+Steps to assign a new managed identity (a scripted sketch follows this list):
+
+1. [Create a new user-assigned managed identity.](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity)
+2. [Grant KeyVault key access to this identity.](#choosing-the-preferred-security-model)
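+The same steps can be scripted with the Azure CLI. This is a minimal sketch with placeholder names (`myResourceGroup`, `myNewIdentity`, `myKeyVault`); confirm that the key permissions shown are the ones your customer-managed key configuration requires.
+
+```azurecli
+# Create a new user-assigned managed identity (placeholder names).
+az identity create --name myNewIdentity --resource-group myResourceGroup
+
+# Capture the identity's principal ID.
+principalId=$(az identity show --name myNewIdentity --resource-group myResourceGroup --query principalId --output tsv)
+
+# Grant the identity access to the Key Vault key.
+az keyvault set-policy --name myKeyVault --object-id $principalId --key-permissions get wrapKey unwrapKey
+```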
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
description: This article explains how to group costs using tag inheritance. Previously updated : 02/21/2023 Last updated : 03/30/2023
Tag inheritance is available for the following billing account types:
- Microsoft Customer Agreement (MCA) - Microsoft Partner Agreement (MPA) with Azure plan subscriptions
+Here's an example diagram showing how a tag is inherited.
++ ## Required permissions - For subscriptions:
Tag inheritance is available for the following billing account types:
You can enable the tag inheritance setting in the Azure portal. You apply the setting at the EA billing account, MCA billing profile, and subscription scopes. After the setting is enabled, all resource group and subscription tags are automatically applied to child resource usage records.
-To enable tag inheritance in the Azure portal:
+### To enable tag inheritance in the Azure portal for an EA billing account
1. In the Azure portal, search for **Cost Management** and select it (the green hexagon-shaped symbol, *not* Cost Management + Billing).
-2. Select a scope.
-3. In the left menu under **Settings**, select either **Manage billing account** or **Manage subscription**, depending on your scope.
-4. Under **Tag inheritance**, select **Edit**.
- :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance.png" alt-text="Screenshot showing the Edit option for Tag inheritance." :::
-5. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
- :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option." :::
+1. Select a scope.
+1. In the left menu under **Settings**, select **Manage billing account**.
+1. Under **Tag inheritance**, select **Edit**.
+ :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance.png" alt-text="Screenshot showing the Edit option for Tag inheritance for an EA billing account." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance.png" :::
+1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+ :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a billing account." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png":::
-Here's an example diagram showing how a tag is inherited.
+### To enable tag inheritance in the Azure portal for an MCA billing profile
+1. In the Azure portal, search for **Cost Management** and select it (the green hexagon-shaped symbol, *not* Cost Management + Billing).
+1. Select a scope.
+1. In the left menu under **Settings**, select **Manage billing profile**.
+1. Under **Tag inheritance**, select **Edit**.
+ :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance-billing-profile.png" alt-text="Screenshot showing the Edit option for Tag inheritance for an MCA billing profile." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance-billing-profile.png":::
+1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+ :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-billing-profile.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a billing profile." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-billing-profile.png":::
+
+### To enable tag inheritance in the Azure portal for a subscription
+
+1. In the Azure portal, search for **Cost Management** and select it (the green hexagon-shaped symbol, *not* Cost Management + Billing).
+1. Select a subscription scope.
+1. In the left menu under **Settings**, select **Manage subscription**.
+1. Under **Tag inheritance**, select **Edit**.
+ :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance-subscription.png" alt-text="Screenshot showing the Edit option for Tag inheritance for a subscription." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance-subscription.png":::
+1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+ :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-subscription.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a subscription." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-subscription.png":::
## Choose between resource and inherited tags
data-factory Connector Azure Cosmos Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-analytical-store.md
+
+ Title: Copy and transform data in Azure Cosmos DB analytical store
+
+description: Learn how to transform data in Azure Cosmos DB analytical store using Azure Data Factory and Azure Synapse Analytics.
++++++ Last updated : 03/31/2023++
+# Copy and transform data in Azure Cosmos DB analytical store by using Azure Data Factory
+
+> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
+> * [Current version](connector-azure-cosmos-analytical-store.md)
++
+This article outlines how to use Data Flow to transform data in Azure Cosmos DB analytical store. To learn more, read the introductory articles for [Azure Data Factory](introduction.md) and [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+>[!NOTE]
+>The Azure Cosmos DB analytical store connector supports [change data capture](concepts-change-data-capture.md) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB, currently in public preview.
+
+## Supported capabilities
+
+This Azure Cosmos DB for NoSQL connector is supported for the following capabilities:
+
+| Supported capabilities|IR | Managed private endpoint|
+|| --| --|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ |
++
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
++
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read and write to collections in Azure Cosmos DB. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
+
+> [!Note]
+> The Azure Cosmos DB analytical store is found with the [Azure Cosmos DB for NoSQL](connector-azure-cosmos-db.md) dataset type.
++
+### Source transformation
+
+Settings specific to Azure Cosmos DB are available in the **Source Options** tab of the source transformation.
+
+**Include system columns:** If true, ```id```, ```_ts```, and other system columns will be included in your data flow metadata from Azure Cosmos DB. When updating collections, it is important to include this so that you can grab the existing row ID.
+
+**Page size:** The number of documents per page of the query result. The default is "-1", which uses the service's dynamic page size of up to 1,000.
+
+**Throughput:** Set an optional value for the number of RUs you'd like to apply to your Azure Cosmos DB collection for each execution of this data flow during the read operation. Minimum is 400.
+
+**Preferred regions:** Choose the preferred read regions for this process.
+
+**Change feed:** If true, you get data from the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md), which is a persistent record of changes to a container in the order they occur, picked up automatically from the last run. When you set this option to true, don't set both **Infer drifted column types** and **Allow schema drift** to true at the same time. For more details, see [Azure Cosmos DB change feed](#azure-cosmos-db-change-feed).
+
+**Start from beginning:** If true, you get an initial load of full snapshot data in the first run, followed by capturing changed data in subsequent runs. If false, the initial load is skipped in the first run, and changed data is captured in subsequent runs. The setting is aligned with the same setting name in the [Azure Cosmos DB reference](https://github.com/Azure/azure-cosmosdb-spark/wiki/Configuration-references#reading-cosmosdb-collection-change-feed). For more details, see [Azure Cosmos DB change feed](#azure-cosmos-db-change-feed).
+
+### Sink transformation
+
+Settings specific to Azure Cosmos DB are available in the **Settings** tab of the sink transformation.
+
+**Update method:** Determines what operations are allowed on your database destination. The default is to only allow inserts. To update, upsert, or delete rows, an alter-row transformation is required to tag rows for those actions. For updates, upserts and deletes, a key column or columns must be set to determine which row to alter.
+
+**Collection action:** Determines whether to recreate the destination collection prior to writing.
+* None: No action is taken on the collection.
+* Recreate: The collection is dropped and recreated.
+
+**Batch size**: An integer that represents how many objects are written to the Azure Cosmos DB collection in each batch. Usually, starting with the default batch size is sufficient. To further tune this value, note:
+
+- Azure Cosmos DB limits a single request's size to 2 MB. The formula is *Request Size = Single Document Size × Batch Size*. For example, with 4-KB documents, a batch size of 500 puts each request at roughly the 2-MB limit. If you hit an error saying "Request size is too large", reduce the batch size value.
+- The larger the batch size, the better the throughput the service can achieve; make sure you allocate enough RUs to support your workload.
+
+**Partition key:** Enter a string that represents the partition key for your collection. Example: ```/movies/title```
+
+**Throughput:** Set an optional value for the number of RUs you'd like to apply to your Azure Cosmos DB collection for each execution of this data flow. Minimum is 400.
+
+**Write throughput budget:** An integer that represents the RUs you want to allocate for this Data Flow write operation, out of the total throughput allocated to the collection.
+
+## Azure Cosmos DB change feed
+
+Azure Data Factory can get data from the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) when you enable it in the mapping data flow source transformation. With this connector option, you can read change feeds and apply transformations before loading transformed data into destination datasets of your choice. You don't have to use Azure Functions to read the change feed and then write custom transformations. You can use this option to move data from one container to another, prepare change-feed-driven materialized views fit for purpose, automate container backup or recovery based on the change feed, and enable many more such use cases using the visual drag-and-drop capability of Azure Data Factory.
+
+Make sure you keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and automatically pick up changed data from the last run. If you change your pipeline name or activity name, the checkpoint is reset, and you start from the beginning or get changes from now on in the next run.
+
+When you debug the pipeline, this feature works the same way. Be aware that the checkpoint is reset when you refresh your browser during the debug run. After you're satisfied with the pipeline result from the debug run, you can publish and trigger the pipeline. The first time you trigger your published pipeline, it automatically restarts from the beginning or gets changes from now on.
+
+In the monitoring section, you can always rerun a pipeline. When you do, the changed data is always captured from the previous checkpoint of your selected pipeline run.
+
+In addition, Azure Cosmos DB analytical store now supports Change Data Capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB (public preview). The analytical store allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data.
+
+## Next steps
+Get started with [change data capture in Azure Cosmos DB analytical store](../cosmos-db/get-started-change-data-capture.md).
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
addDays('<timestamp>', <days>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*days*> | Yes | Integer | The positive or negative number of days to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
addHours('<timestamp>', <hours>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*hours*> | Yes | Integer | The positive or negative number of hours to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
addMinutes('<timestamp>', <minutes>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*minutes*> | Yes | Integer | The positive or negative number of minutes to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
addSeconds('<timestamp>', <seconds>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*seconds*> | Yes | Integer | The positive or negative number of seconds to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
addToTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*interval*> | Yes | Integer | The number of specified time units to add | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
convertFromUtc('<timestamp>', '<destinationTimeZone>', '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Time Zone Values](/windows-hardware/manufacture/desktop/default-time-zones#time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
convertTimeZone('<timestamp>', '<sourceTimeZone>', '<destinationTimeZone>', '<fo
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Time Zone Values](/windows-hardware/manufacture/desktop/default-time-zones#time-zones), but you might have to remove any punctuation from the time zone name. | | <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Time Zone Values](/windows-hardware/manufacture/desktop/default-time-zones#time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
convertToUtc('<timestamp>', '<sourceTimeZone>', '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Time Zone Values](/windows-hardware/manufacture/desktop/default-time-zones#time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
formatDateTime('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
getFutureTime(<interval>, <timeUnit>, <format>?)
| | -- | - | -- | | <*interval*> | Yes | Integer | The number of specified time units to add | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
getPastTime(<interval>, <timeUnit>, <format>?)
| | -- | - | -- | | <*interval*> | Yes | Integer | The number of specified time units to subtract | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
startOfDay('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
startOfHour('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
startOfMonth('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
subtractFromTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*interval*> | Yes | Integer | The number of specified time units to subtract | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
Optionally, you can specify a different format with the <*format*> parameter.
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
data-manager-for-agri How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-private-links.md
By using Azure Private Link, you can connect to an Azure Data Manager for Agricu
This article describes how to create a private endpoint and approval process for Azure Data Manager for Agriculture Preview.
+## Prerequisites
+
+[Create a virtual network](../virtual-network/quick-create-portal.md) in the same subscription as the Azure Data Manager for Agriculture Preview instance. This virtual network will allow automatic approval of the Private Link endpoint.
+ ## How to set up a private endpoint Private Endpoints can be created using the Azure portal, PowerShell, or the Azure CLI:
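+For example, with the Azure CLI you can create the private endpoint by using the `az network private-endpoint create` command. The following is a minimal sketch with placeholder names; the `--group-id` value and the resource ID of your Data Manager for Agriculture instance are assumptions you should confirm for your deployment:
+
+```azurecli
+az network private-endpoint create \
+  --resource-group myResourceGroup \
+  --name myPrivateEndpoint \
+  --vnet-name myVNet \
+  --subnet mySubnet \
+  --private-connection-resource-id "<data-manager-for-agriculture-resource-id>" \
+  --group-id farmbeats \
+  --connection-name myConnection
+```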
databox-online Azure Stack Edge Gpu 2303 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2303-release-notes.md
+
+ Title: Azure Stack Edge 2303 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2303 release.
++
+
+++ Last updated : 03/31/2023+++
+# Azure Stack Edge 2303 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2303 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2303** release, which maps to software version **2.2.2257.1113**.
+
+## Supported update paths
+
+This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2026.5318).
+
+You can update to the latest version using the following update paths:
+
+| Current version | Update to | Then apply |
+| --| --| --|
+|2205 and earlier |2207 |2303
+|2207 and later |2303 |
+
+## What's new
+
+The 2303 release has the following new features and enhancements:
+
+- Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible.
+- You can deploy Azure Kubernetes Service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md).
+
+## Issues fixed in this release
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|Core Azure Stack Edge platform and Azure Kubernetes Service (AKS) on Azure Stack Edge |Critical bug fixes to improve workload availability during two-node Azure Stack Edge update of core Azure Stack Edge platform and AKS on Azure Stack Edge. |
+
+<!--## Known issues in this release
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Need known issues in 2303 |-->
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh, may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error looks like this:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod. See the Deployment sketch after this table.|
|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10 are reserved for the Kubernetes service and the Core DNS service respectively.|Don't use reserved IPs.|
|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing). See the service definition sketch after this table.|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for the Event Grid IoT Edge module to function on the Azure Stack Edge device and other applications. For more information, see [ASP.NET Core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore (see the pod sketch after this table). For more information, see this [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has two GPUs, you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes uses the one remaining GPU. |
+|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | |
|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge runs on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution doesn't stop working past end of life, there are no plans to update it. |To run the latest version of Azure IoT Edge [LTS](../iot-edge/version-history.md#version-history) with the latest updates and features on your Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). |
|**27.**|AKS on Azure Stack Edge |When you update your AKS on Azure Stack Edge deployment from a previous preview version to the 2303 release, there's an additional node pool rollout. |The update may take longer. |
|**28.**|Azure portal |When the Arc deployment fails in this release, you see a generic *NO PARAM* error code because not all errors are propagated to the portal. |There's no workaround for this behavior in this release. |
|**29.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, delete the AKS cluster, modify the virtual networks, and then recreate the AKS cluster on your Azure Stack Edge. |
|**30.**|AKS on Azure Stack Edge |In this release, attaching persistent volume claims (PVCs) takes a long time. As a result, some pods that use persistent volumes (PVs) come up slowly after the host reboots. |A workaround is to restart the node pool VM by connecting via the Windows PowerShell interface of the device. |
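+
+For known issue 7, here's a minimal sketch of wrapping a single-pod workload in a Deployment so that its replica set recreates the pod after a device update. It's illustrative only; the name, labels, and image are placeholders, not values from this article:
+
+```json
+{
+  "apiVersion": "apps/v1",
+  "kind": "Deployment",
+  "metadata": { "name": "single-pod-app" },
+  "spec": {
+    "replicas": 1,
+    "selector": { "matchLabels": { "app": "single-pod-app" } },
+    "template": {
+      "metadata": { "labels": { "app": "single-pod-app" } },
+      "spec": {
+        "containers": [
+          { "name": "app", "image": "<your-registry>/<your-image>:<tag>" }
+        ]
+      }
+    }
+  }
+}
+```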
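+
+For known issue 10, here's a hedged sketch of the MetalLB IP-sharing workaround: two LoadBalancer services over the same pod selector, one TCP and one UDP, that share an IP address because they carry the same `metallb.universe.tf/allow-shared-ip` annotation value (the sharing key) and the same `spec.loadBalancerIP`. The service names, selector, and IP address are hypothetical:
+
+```json
+{
+  "apiVersion": "v1",
+  "kind": "List",
+  "items": [
+    {
+      "apiVersion": "v1",
+      "kind": "Service",
+      "metadata": {
+        "name": "dns-tcp",
+        "annotations": { "metallb.universe.tf/allow-shared-ip": "dns-sharing-key" }
+      },
+      "spec": {
+        "type": "LoadBalancer",
+        "loadBalancerIP": "192.168.2.100",
+        "selector": { "app": "dns" },
+        "ports": [ { "protocol": "TCP", "port": 53 } ]
+      }
+    },
+    {
+      "apiVersion": "v1",
+      "kind": "Service",
+      "metadata": {
+        "name": "dns-udp",
+        "annotations": { "metallb.universe.tf/allow-shared-ip": "dns-sharing-key" }
+      },
+      "spec": {
+        "type": "LoadBalancer",
+        "loadBalancerIP": "192.168.2.100",
+        "selector": { "app": "dns" },
+        "ports": [ { "protocol": "UDP", "port": 53 } ]
+      }
+    }
+  ]
+}
+```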
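+
+For known issue 18, here's a minimal pod sketch (the pod name, image, and setting name are hypothetical) of the double-underscore convention; the .NET configuration system reads the environment variable `Logging__LogLevel__Default` as `Logging:LogLevel:Default`:
+
+```json
+{
+  "apiVersion": "v1",
+  "kind": "Pod",
+  "metadata": { "name": "dotnet-app" },
+  "spec": {
+    "containers": [
+      {
+        "name": "app",
+        "image": "<your-registry>/<your-dotnet-app>:<tag>",
+        "env": [
+          { "name": "Logging__LogLevel__Default", "value": "Information" }
+        ]
+      }
+    ]
+  }
+}
+```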
+
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 01/31/2023 Last updated : 03/30/2023 # Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
## About latest updates
-The current update is Update 2301. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
+The current update is Update 2303. This update installs two updates: the device update followed by the Kubernetes updates.
-- Device software version: Azure Stack Edge 2301 (2.2.2162.730)
-- Device Kubernetes version: Azure Stack Kubernetes Edge 2301 (2.2.2162.730)
+The associated versions for this update are:
+
+- Device software version: Azure Stack Edge 2303 (2.2.2257.1113)
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2303 (2.2.2257.1113)
- Kubernetes server version: v1.24.6
- IoT Edge version: 0.1.0-beta15
- Azure Arc version: 1.8.14
- GPU driver version: 515.65.01
- CUDA version: 11.7
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2209-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2303-release-notes.md).
-**To apply 2301 update, your device must be running version 2207 or later.**
+**To apply 2303 update, your device must be running version 2207 or later.**
- If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.*
-- You can update to 2207 from 2106 or later, and then install 2301.
+- You can update to 2207 from 2106 or later, and then install 2303.
### Update Azure Kubernetes service on Azure Stack Edge

> [!IMPORTANT]
> Use the following procedure only if you are an SAP or a PMEC customer.
-If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2301.
+If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2303.
-Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2301:
+Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2303:
-1. Update your device version to 2301.
+1. Update your device version to 2303.
1. Update your Kubernetes version to 2210.
-1. Update your Kubernetes version to 2301.
+1. Update your Kubernetes version to 2303.
-If you are running 2210, you can update both your device version and Kubernetes version directly to 2301.
+If you are running 2210, you can update both your device version and Kubernetes version directly to 2303.
-In Azure portal, the process will require two clicks, the first update gets your device version to 2301 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2301.
+In the Azure portal, the process requires two clicks: the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update upgrades your Kubernetes version to 2303.
-From the local UI, you will have to run each update separately: update the device version to 2301, then update Kubernetes version to 2210, and then update Kubernetes version to 2301.
+From the local UI, you'll have to run each update separately: update the device version to 2303, then update the Kubernetes version to 2210, and then update the Kubernetes version to 2303.
### Updates for a single-node vs two-node
Do the following steps to download the update from the Microsoft Update Catalog.
2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**.
- The update listing appears as **Azure Stack Edge Update 2301**.
+ The update listing appears as **Azure Stack Edge Update 2303**.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
This procedure takes around 20 minutes to complete. Perform the following steps
5. The update starts. After the device is successfully updated, it restarts. The local UI is not accessible in this duration.
-6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2301**.
+6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2303**.
7. You will now update the Kubernetes software version. Select the remaining three Kubernetes files together (files with the *Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe* suffixes) and repeat the preceding steps to apply the update.
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
If you're having issues with Defender for DevOps these frequently asked question
- [I don't see the results for my ADO projects in Microsoft Defender for Cloud](#i-dont-see-the-results-for-my-ado-projects-in-microsoft-defender-for-cloud)
- [Why is my Azure DevOps repository not refreshing to healthy?](#why-is-my-azure-devops-repository-not-refreshing-to-healthy)
- [I don't see Recommendations for findings](#i-dont-see-recommendations-for-findings)
-- [What information does Defender for DevOps store about me and my enterprise, and where is the data stored?](#what-information-does-defender-for-devops-store-about-me-and-my-enterprise-and-where-is-the-data-stored)
+- [What information does Defender for DevOps store about me and my enterprise, and where is the data stored and processed?](#what-information-does-defender-for-devops-store-about-me-and-my-enterprise-and-where-is-the-data-stored-and-processed)
- [Why are Delete source code and Write Code permissions required for Azure DevOps?](#why-are-delete-source-and-write-code-permissions-required-for-azure-devops)
- [Is Exemptions capability available and tracked for app sec vulnerability management](#is-exemptions-capability-available-and-tracked-for-app-sec-vulnerability-management)
- [Is continuous, automatic scanning available?](#is-continuous-automatic-scanning-available)
Ensure that you've onboarded the project with the connector and that your reposi
You must have more than a [stakeholder license](https://azure.microsoft.com/pricing/details/devops/azure-devops-services/) to the repos to onboard them, and you need to be at least the security reader on the subscription where the connector is created. You can confirm if you've onboarded the repositories by seeing them in the inventory list in Microsoft Defender for Cloud.
-### What information does Defender for DevOps store about me and my enterprise, and where is the data stored?
+### What information does Defender for DevOps store about me and my enterprise, and where is the data stored and processed?
-Data Defender for DevOps connects to your source code management system, for example, Azure DevOps, GitHub, to provide a central console for your DevOps resources and security posture. Defender for DevOps processes and stores the following information:
+Defender for DevOps connects to your source code management system (for example, Azure DevOps or GitHub) to provide a central console for your DevOps resources and security posture. Defender for DevOps processes and stores the following information:
- Metadata on your connected source code management systems and associated repositories. This data includes user, organizational, and authentication information.
- Scan results for recommendations and assessment results and details.
-Data is stored within the region your connector is created in. You should consider which region to create your connector in, for any data residency requirements as you design and create your DevOps connector.
+Data is stored within the region your connector is created in and flows into [Microsoft Defender for Cloud](defender-for-cloud-introduction.md). Consider any data residency requirements when you choose the region to create your DevOps connector in.
Defender for DevOps currently doesn't process or store your code, build, and audit logs.
You can learn more about [Microsoft Security DevOps](https://marketplace.visuals
## Next steps

-- [Overview of Defender for DevOps](defender-for-devops-introduction.md)
+- [Overview of Defender for DevOps](defender-for-devops-introduction.md)
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
If your organization list is empty in the UI after you onboarded an Azure DevOps
For information on how to correct this issue, check out the [DevOps trouble shooting guide](troubleshooting-guide.md#troubleshoot-azure-devops-organization-connector-issues).
+### I have a large Azure DevOps organization with many repositories. Can I still onboard?
+
+Yes, there is no limit to how many Azure DevOps repositories you can onboard to Defender for DevOps.
+
+However, there are two main implications when onboarding large organizations: speed and throttling. The speed of discovery for your DevOps repositories is determined by the number of projects for each connector (approximately 100 projects per hour). Throttling can happen because Azure DevOps API calls have a [global rate limit](https://learn.microsoft.com/azure/devops/integrate/concepts/rate-limits?view=azure-devops) and we limit the calls for project discovery to use a small portion of overall quota limits.
+
+Consider using an alternative Azure DevOps identity (for example, an Organization Administrator account used as a service account) to prevent individual accounts from being throttled when onboarding large organizations. Here are some scenarios for when to use an alternate identity to onboard a Defender for DevOps connector:
+- Large number of Azure DevOps Organizations and Projects (~500 Projects or more).
+- Large number of concurrent builds that peak during work hours.
+- Authorized user is a [Power Platform](https://learn.microsoft.com/power-platform/) user making additional Azure DevOps API calls, using up the global rate limit quotas.
+
+Once you've onboarded the Azure DevOps repositories using this account and [configured and run the Microsoft Security DevOps Azure DevOps extension](https://learn.microsoft.com/azure/defender-for-cloud/azure-devops-extension) in your CI/CD pipeline, the scanning results appear almost instantly in Microsoft Defender for Cloud.
+ ## Next steps Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md
description: Learn about Azure Digital Twins security best practices. Previously updated : 02/02/2023 Last updated : 03/31/2023
You can use either of these managed identity types to authenticate to a [custom-
For instructions on how to enable a managed identity for an Azure Digital Twins endpoint that can be used to route events, see [Endpoint options: Identity-based authentication](how-to-create-endpoints.md#endpoint-options-identity-based-authentication).
+### Using trusted Microsoft service for routing events to Event Hubs and Service Bus endpoints
+
+Azure Digital Twins can connect to Event Hubs and Service Bus endpoints for sending event data, using those resources' public endpoints. However, if those resources are bound to a VNet, connectivity to them is blocked by default, which prevents Azure Digital Twins from sending event data to your resources.
+
+To resolve this, enable connectivity from your Azure Digital Twins instance to your Event Hubs or Service Bus resources through the *trusted Microsoft service* option (see [Trusted Microsoft services for Event Hubs](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services) and [Trusted Microsoft services for Service Bus](../service-bus-messaging/service-bus-service-endpoints.md#trusted-microsoft-services)).
+
+You'll need to complete the following steps to enable the trusted Microsoft service connection.
+
+1. Your Azure Digital Twins instance must use a **system-assigned managed identity**. This allows other services to find your instance as a trusted Microsoft service. For instructions to set up a system-assigned managed identity on the instance, see [Enable managed identity for the instance](how-to-create-endpoints.md#1-enable-managed-identity-for-the-instance).
+1. Once a system-assigned managed identity is provisioned, grant permission for your instance's managed identity to access your Event Hubs or Service Bus endpoint (this feature is not supported in Event Grid). For instructions to assign the proper roles, see [Assign Azure roles to the identity](how-to-create-endpoints.md#2-assign-azure-roles-to-the-identity).
+1. For Event Hubs and Service Bus endpoints that have firewall configurations in place, make sure you enable the **Allow trusted Microsoft services to bypass this firewall** setting.
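+
+As an illustration of the firewall setting in step 3, here's a minimal Azure Resource Manager sketch (the namespace name and subnet path are placeholders) of an Event Hubs network rule set that denies public traffic by default, allows one VNet subnet, and lets trusted Microsoft services such as Azure Digital Twins bypass the firewall through the `trustedServiceAccessEnabled` property:
+
+```json
+{
+  "type": "Microsoft.EventHub/namespaces/networkRuleSets",
+  "apiVersion": "2021-11-01",
+  "name": "<your-namespace>/default",
+  "properties": {
+    "defaultAction": "Deny",
+    "trustedServiceAccessEnabled": true,
+    "virtualNetworkRules": [
+      {
+        "subnet": { "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>" },
+        "ignoreMissingVnetServiceEndpoint": false
+      }
+    ],
+    "ipRules": []
+  }
+}
+```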
+
## Private network access with Azure Private Link

[Azure Private Link](../private-link/private-link-overview.md) is a service that enables you to access Azure resources (like [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Storage](../storage/common/storage-introduction.md), and [Azure Cosmos DB](../cosmos-db/introduction.md)) and Azure-hosted customer and partner services over a private endpoint in your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
education-hub Custom Tenant Set Up Classroom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/custom-tenant-set-up-classroom.md
+
+ Title: How to create a custom Azure for Classroom Tenant and Billing Profile
+description: This article shows you how to make a custom tenant and billing profile for educators in your organization
++++ Last updated : 3/17/2023+++
+# Create a custom Tenant and Billing Profile for Microsoft for Teaching Paid
+
+This article is intended for IT admins using Azure for Classroom. When you sign up for this offer, a tenant and billing profile are already created for you. This article walks you through how to create a custom tenant and billing profile and associate them with an educator.
+
+## Prerequisites
+
+- Be signed up for Azure for Classroom
+
+## Create a new tenant
+
+This section walks you through how to create a new tenant and associate it with your university tenant by using multi-tenant billing.
+
+1. Go to the Azure portal and search for "Azure Active Directory".
+2. Create a new tenant in the "Manage tenants" tab.
+3. Fill in and finalize the tenant information.
+4. After the tenant has been created, copy the Tenant ID of the new tenant.
+
+## Associate new tenant with university tenant
+
+1. Go to "Cost Management" and click on "Access control (IAM)
+2. Click on "Associated billing tenants"
+3. Click "Add" and add the Tenant ID of the newly created tenant
+4. Check the box for Billing management
+1. Click "Add" to finalize the association between the newly created tenant and university tenant
+
+## Invite Educator to the newly created tenant
+
+This section walks through how to add an Educator to the newly created tenant.
+
+1. Switch tenants to the newly created tenant.
+2. Go to "Users" in the new tenant.
+3. Invite a user to this tenant (a Microsoft Graph sketch follows these steps).
+4. Change the role to "Global administrator".
+5. Tell the Educator to accept the invitation to this tenant.
+6. After the Educator has joined the tenant, go into the tenant properties and click "Yes" under "Access management for Azure resources".
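+
+For reference, the invitation in step 3 can also be sent programmatically as a POST to the Microsoft Graph `https://graph.microsoft.com/v1.0/invitations` endpoint. This is a hedged sketch rather than part of the original walkthrough; the email address and redirect URL are placeholders:
+
+```json
+{
+  "invitedUserEmailAddress": "educator@example.edu",
+  "inviteRedirectUrl": "https://portal.azure.com"
+}
+```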
+
+Now that you've created a custom tenant, you can go into the Education Hub and begin distributing credit to Educators to use in labs.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create an assignment and allocate credit](create-assignment-allocate-credit.md)
event-grid Advanced Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/advanced-filtering.md
- Title: Advanced filtering - Azure Event Grid IoT Edge | Microsoft Docs
-description: Advanced filtering in Event Grid on IoT Edge.
--- Previously updated : 02/15/2022---
-# Advanced filtering
-Event Grid allows specifying filters on any property in the json payload. These filters are modeled as set of `AND` conditions, with each outer condition having optional inner `OR` conditions. For each `AND` condition, you specify the following values:
-
-* `OperatorType` - The type of comparison.
-* `Key` - The json path to the property on which to apply the filter.
-* `Value` - The reference value against which the filter is run (or) `Values` - The set of reference values against which the filter is run.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-## JSON syntax
-
-The JSON syntax for an advanced filter is as follows:
-
-```json
-{
- "filter": {
- "advancedFilters": [{
- "operatorType": "NumberGreaterThanOrEquals",
- "key": "Data.Key1",
- "value": 5
- }, {
- "operatorType": "StringContains",
- "key": "Subject",
- "values": ["container1", "container2"]
- }
- ]
- }
-}
-```
-
-## Filtering on array values
-
-Event Grid doesn't support filtering on an array of values today. If an incoming event has an array value for the advanced filter's key, the matching operation fails. The incoming event ends up not matching with the event subscription.
-
-## AND-OR-NOT semantics
-
-Notice that in the json example given earlier, `AdvancedFilters` is an array. Think of each `AdvancedFilter` array element as an `AND` condition.
-
-For the operators that support multiple values (such as `NumberIn`, `NumberNotIn`, `StringIn`, etc.), each value is treated as an `OR` condition. So, a `StringBeginsWith("a", "b", "c")` will match any string value that starts with either `a` or `b` or `c`.
-
-> [!CAUTION]
-> The NOT operators - `NumberNotIn` and `StringNotIn` behave as AND conditions on each value given in the `Values` field.
->
-> Not doing so will make the filter an Accept-All filter and defeat the purpose of filtering.
-
-## Floating-point rounding behavior
-
-Event Grid uses the `decimal` .NET type to handle all numeric values. The number values specified in the event subscription JSON aren't subject to floating point rounding behavior.
-
-## Case sensitivity of string filters
-
-All string comparisons are case-insensitive. There's no way to change this behavior today.
-
-## Allowed advanced filter keys
-
-The `Key` property can either be a well-known top-level property, or be a json path with multiple dots, where each dot signifies stepping into a nested json object.
-
-Event Grid doesn't have any special meaning for the `$` character in the Key, unlike the JSONPath specification.
-
-### Event Grid schema
-
-For events in the Event Grid schema:
-
-* ID
-* Topic
-* Subject
-* EventType
-* DataVersion
-* Data.Prop1
-* Data.Prop1.Prop2.Prop3.Prop4.Prop5
-
-### Custom event schema
-
-There's no restriction on the `Key` in custom event schema since Event Grid doesn't enforce any envelope schema on the payload.
-
-## Numeric single-value filter examples
-
-* NumberGreaterThan
-* NumberGreaterThanOrEquals
-* NumberLessThan
-* NumberLessThanOrEquals
-
-```json
-{
- "filter": {
- "advancedFilters": [
- {
- "operatorType": "NumberGreaterThan",
- "key": "Data.Key1",
- "value": 5
- },
- {
- "operatorType": "NumberGreaterThanOrEquals",
- "key": "Data.Key2",
-        "value": 456
- },
- {
- "operatorType": "NumberLessThan",
-        "key": "Data.P1.P2.P3",
- "value": 1000
- },
- {
- "operatorType": "NumberLessThanOrEquals",
-        "key": "Data.P1.P2",
- "value": 999
- }
- ]
- }
-}
-```
-
-## Numeric range-value filter examples
-
-* NumberIn
-* NumberNotIn
-
-```json
-{
- "filter": {
- "advancedFilters": [
- {
- "operatorType": "NumberIn",
- "key": "Data.Key1",
- "values": [1, 10, 100]
- },
- {
- "operatorType": "NumberNotIn",
- "key": "Data.Key2",
- "values": [2, 3, 4.56]
- }
- ]
- }
-}
-```
-
-## String range-value filter examples
-
-* StringContains
-* StringBeginsWith
-* StringEndsWith
-* StringIn
-* StringNotIn
-
-```json
-{
- "filter": {
- "advancedFilters": [
- {
- "operatorType": "StringContains",
- "key": "Data.Key1",
- "values": ["microsoft", "azure"]
- },
- {
- "operatorType": "StringBeginsWith",
- "key": "Data.Key2",
- "values": ["event", "grid"]
- },
- {
- "operatorType": "StringEndsWith",
- "key": "Data.P3.P4",
- "values": ["jpg", "jpeg", "png"]
- },
- {
- "operatorType": "StringIn",
- "key": "RootKey",
- "values": ["exact", "string", "matches"]
- },
- {
- "operatorType": "StringNotIn",
- "key": "RootKey",
- "values": ["aws", "bridge"]
- }
- ]
- }
-}
-```
-
-## Boolean single-value filter examples
-
-* BoolEquals
-
-```json
-{
- "filter": {
- "advancedFilters": [
- {
- "operatorType": "BoolEquals",
- "key": "BoolKey1",
- "value": true
- },
- {
- "operatorType": "BoolEquals",
- "key": "BoolKey2",
- "value": false
- }
- ]
- }
-}
-```
event-grid Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/api.md
- Title: REST API - Azure Event Grid IoT Edge | Microsoft Docs
-description: REST API on Event Grid on IoT Edge.
----- Previously updated : 02/15/2022----
-# REST API
-This article describes the REST APIs of Azure Event Grid on IoT Edge
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-## Common API behavior
-
-### Base URL
-Event Grid on IoT Edge has the following APIs exposed over HTTP (port 5888) and HTTPS (port 4438).
-
-* Base URL for HTTP: http://eventgridmodule:5888
-* Base URL for HTTPS: https://eventgridmodule:4438
-
-### Request query string
-All API requests require the following query string parameter:
-
-`?api-version=2019-01-01-preview`
-
-### Request content type
-All API requests must have a **Content-Type**.
-
-In case of **EventGridSchema** or **CustomSchema**, the value of Content-Type can be one of the following values:
-
-`Content-Type: application/json`
-
-`Content-Type: application/json; charset=utf-8`
-
-In case of **CloudEventSchemaV1_0** in structured mode, the value of Content-Type can be one of the following values:
-
-`Content-Type: application/cloudevents+json`
-
-`Content-Type: application/cloudevents+json; charset=utf-8`
-
-`Content-Type: application/cloudevents-batch+json`
-
-`Content-Type: application/cloudevents-batch+json; charset=utf-8`
-
-In case of **CloudEventSchemaV1_0** in binary mode, refer to [documentation](https://github.com/cloudevents/spec/blob/main/cloudevents/bindings/http-protocol-binding.md) for details.
-
-### Error response
-All APIs return an error with the following payload:
-
-```json
-{
- "error":
- {
- "code": "<HTTP STATUS CODE>",
- "details":
- {
- "code": "<Detailed Error Code>",
- "message": "..."
- }
- }
-}
-```
-
-## Manage topics
-
-### Put topic (create / update)
-
-**Request**: ``` PUT /topics/<topic_name>?api-version=2019-01-01-preview ```
-
-**Payload**:
-
-```json
- {
- "name": "<topic_name>", // optional, inferred from URL. If specified must match URL topic_name
- "properties":
- {
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0" // optional
- }
- }
-```
-
-**Response**: HTTP 200
-
-**Payload**:
-
-```json
-{
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>",
- "name": "<topic_name>",
- "type": "Microsoft.EventGrid/topics",
- "properties":
- {
- "endpoint": "<get_request_base_url>/topics/<topic_name>/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0" // populated with EventGridSchema if not explicitly specified in PUT request
- }
-}
-```
-
-### Get topic
-
-**Request**: ``` GET /topics/<topic_name>?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200
-
-**Payload**:
-```json
-{
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>",
- "name": "<topic_name>",
- "type": "Microsoft.EventGrid/topics",
- "properties":
- {
- "endpoint": "<request_base_url>/topics/<topic_name>/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0"
- }
-}
-```
-
-### Get all topics
-
-**Request**: ``` GET /topics?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200
-
-**Payload**:
-```json
-[
- {
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>",
- "name": "<topic_name>",
- "type": "Microsoft.EventGrid/topics",
- "properties":
- {
- "endpoint": "<request_base_url>/topics/<topic_name>/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0"
- }
- },
- {
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>",
- "name": "<topic_name>",
- "type": "Microsoft.EventGrid/topics",
- "properties":
- {
- "endpoint": "<request_base_url>/topics/<topic_name>/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0"
- }
- }
-]
-```
-
-### Delete topic
-
-**Request**: ``` DELETE /topics/<topic_name>?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200, empty payload
-
-## Manage event subscriptions
-Samples in this section use `EndpointType=Webhook;`. The json samples for `EndpointType=EdgeHub / EndpointType=EventGrid` are in the next section.
-
-### Put event subscription (create / update)
-
-**Request**: ``` PUT /topics/<topic_name>/eventSubscriptions/<subscription_name>?api-version=2019-01-01-preview ```
-
-**Payload**:
-```json
-{
- "name": "<subscription_name>", // optional, inferred from URL. If specified must match URL subscription_name
- "properties":
- {
- "topicName": "<topic_name>", // optional, inferred from URL. If specified must match URL topic_name
- "eventDeliverySchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0", // optional
- "retryPolicy": //optional
- {
- "eventExpiryInMinutes": 120,
- "maxDeliveryAttempts": 50
- },
- "persistencePolicy": "true",
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<webhook_url>",
- "maxEventsPerBatch": 10, // optional
- "preferredBatchSizeInKilobytes": 1033 // optional
- }
- },
- "filter": // optional
- {
- "subjectBeginsWith": "...",
- "subjectEndsWith": "...",
- "isSubjectCaseSensitive": true|false,
- "includedEventTypes": ["...", "..."],
- "advancedFilters":
- [
- {
- "OperatorType": "BoolEquals",
- "Key": "...",
- "Value": "..."
- },
- {
- "OperatorType": "NumberLessThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberLessThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "NumberNotIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "StringIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringNotIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringBeginsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringEndsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringContains",
- "Key": "...",
- "Values": ["...", "...", "..."]
- }
- ]
- }
- }
-}
-```
-
-**Response**: HTTP 200
-
-**Payload**:
-
-```json
-{
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>/eventSubscriptions/<subscription_name>",
- "name": "<subscription_name>",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "properties":
- {
- "topicName": "<topic_name>",
- "eventDeliverySchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0", // populated with EventGridSchema if not explicitly specified in PUT request
- "retryPolicy": // only populated if specified in the PUT request
- {
- "eventExpiryInMinutes": 120,
- "maxDeliveryAttempts": 50
- },
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<webhook_url>",
- "maxEventsPerBatch": 10, // optional
- "preferredBatchSizeInKilobytes": 1033 // optional
- }
- },
- "filter": // only populated if specified in the PUT request
- {
- "subjectBeginsWith": "...",
- "subjectEndsWith": "...",
- "isSubjectCaseSensitive": true|false,
- "includedEventTypes": ["...", "..."],
- "advancedFilters":
- [
- {
- "OperatorType": "BoolEquals",
- "Key": "...",
- "Value": "..."
- },
- {
- "OperatorType": "NumberLessThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberLessThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "NumberNotIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "StringIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringNotIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringBeginsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringEndsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringContains",
- "Key": "...",
- "Values": ["...", "...", "..."]
- }
- ]
- }
- }
-}
-```
--
-### Get event subscription
-
-**Request**: ``` GET /topics/<topic_name>/eventSubscriptions/<subscription_name>?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200
-
-**Payload**:
-```json
-{
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>/eventSubscriptions/<subscription_name>",
- "name": "<subscription_name>",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "properties":
- {
- "topicName": "<topic_name>",
- "eventDeliverySchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0", // populated with EventGridSchema if not explicitly specified in PUT request
- "retryPolicy": // only populated if specified in the PUT request
- {
- "eventExpiryInMinutes": 120,
- "maxDeliveryAttempts": 50
- },
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<webhook_url>",
- "maxEventsPerBatch": 10, // optional
- "preferredBatchSizeInKilobytes": 1033 // optional
- }
- },
- "filter": // only populated if specified in the PUT request
- {
- "subjectBeginsWith": "...",
- "subjectEndsWith": "...",
- "isSubjectCaseSensitive": true|false,
- "includedEventTypes": ["...", "..."],
- "advancedFilters":
- [
- {
- "OperatorType": "BoolEquals",
- "Key": "...",
- "Value": "..."
- },
- {
- "OperatorType": "NumberLessThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberLessThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "NumberNotIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "StringIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringNotIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringBeginsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringEndsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringContains",
- "Key": "...",
- "Values": ["...", "...", "..."]
- }
- ]
- }
- }
-}
-```
-
-### Get event subscriptions
-
-**Request**: ``` GET /topics/<topic_name>/eventSubscriptions?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200
-
-**Payload**:
-```json
-[
- {
- // same event-subscription json as that returned from Get-EventSubscription above
- },
- {
- },
- ...
-]
-```
-
-### Delete event subscription
-
-**Request**: ``` DELETE /topics/<topic_name>/eventSubscriptions/<subscription_name>?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200, no payload
--
-## Publish events API
-
-### Send batch of events (in Event Grid schema)
-
-**Request**: ``` POST /topics/<topic_name>/events?api-version=2019-01-01-preview ```
-
-```json
-[
- {
- "id": "<user-defined-event-id>",
- "topic": "<topic_name>",
- "subject": "",
- "eventType": "",
- "eventTime": ""
- "dataVersion": "",
- "metadataVersion": "1",
- "data":
- ...
- }
-]
-```
-
-**Response**: HTTP 200, empty payload
--
-**Payload field descriptions**
-- ```Id``` is mandatory. It can be any string value that's populated by the caller. Event Grid does NOT do any duplicate detection or enforce any semantics on this field.
-- ```Topic``` is optional, but if specified must match the topic_name from the request URL.
-- ```Subject``` is mandatory, can be any string value.
-- ```EventType``` is mandatory, can be any string value.
-- ```EventTime``` is mandatory, it's not validated but should be a proper DateTime.
-- ```DataVersion``` is mandatory.
-- ```MetadataVersion``` is optional, if specified it MUST be a string with the value ```"1"```.
-- ```Data``` is optional, and can be any JSON token (number, string, boolean, array, object).
-
-### Send batch of events (in custom schema)
-
-**Request**: ``` POST /topics/<topic_name>/events?api-version=2019-01-01-preview ```
-
-```json
-[
- {
- ...
- }
-]
-```
-
-**Response**: HTTP 200, empty payload
--
-**Payload Restrictions**
-- MUST be an array of events.
-- Each array entry MUST be a JSON object.
-- No other constraints (other than payload size).
-
-## Examples
-
-### Set up topic with EventGrid schema
-Sets up a topic to require events to be published in **eventgridschema**.
-
-```json
- {
- "name": "myeventgridtopic",
- "properties":
- {
- "inputSchema": "EventGridSchema"
- }
- }
-```
-
-### Set up topic with custom schema
-Sets up a topic to require events to be published in `customschema`.
-
-```json
- {
- "name": "mycustomschematopic",
- "properties":
- {
- "inputSchema": "CustomSchema"
- }
- }
-```
-
-### Set up topic with cloud event schema
-Sets up a topic to require events to be published in `cloudeventschema`.
-
-```json
- {
- "name": "mycloudeventschematopic",
- "properties":
- {
- "inputSchema": "CloudEventSchemaV1_0"
- }
- }
-```
-
-### Set up WebHook as destination, events to be delivered in eventgridschema
-Use this destination type to send events to any other module (that hosts an HTTP endpoint) or to any HTTP addressable endpoint on the network/internet.
-
-```json
-{
- "properties":
- {
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<webhook_url>",
- "eventDeliverySchema": "eventgridschema",
- }
- }
- }
-}
-```
-
-Constraints on the `endpointUrl` attribute:
-- It must be non-null.
-- It must be an absolute URL.
-- If outbound__webhook__httpsOnly is set to true in the EventGridModule settings, it must be HTTPS only.
-- If outbound__webhook__httpsOnly is set to false, it can be HTTP or HTTPS.
-
-Constraints on the `eventDeliverySchema` property:
-- It must match the subscribing topic's input schema.
-- It can be null. It defaults to the topic's input schema.
-
-### Set up IoT Edge as destination
-
-Use this destination to send events to IoT Edge Hub and be subjected to edge hub's routing/filtering/forwarding subsystem.
-
-```json
-{
- "properties":
- {
- "destination":
- {
- "endpointType": "EdgeHub",
- "properties":
- {
- "outputName": "<eventgridmodule_output_port_name>"
- }
- }
- }
-}
-```
-
-### Set up Event Grid Cloud as destination
-
-Use this destination to send events to Event Grid in the cloud (Azure). You'll need to first set up a user topic in the cloud to which events should be sent, before creating an event subscription on the edge.
-
-```json
-{
- "properties":
- {
- "destination":
- {
- "endpointType": "EventGrid",
- "properties":
- {
- "endpointUrl": "<eventgrid_user_topic_url>",
- "sasKey": "<user_topic_sas_key>",
- "topicName": "<new value to populate in forwarded EventGridEvent.Topic>" // if not specified, the Topic field on every event gets nulled out before being sent to Azure Event Grid
- }
- }
- }
-}
-```
-
-EndpointUrl:
-- It must be non-null.
-- It must be an absolute URL.
-- The path `/api/events` must be defined in the request URL path.
-- It must have `api-version=2018-01-01` in the query string.
-- If outbound__eventgrid__httpsOnly is set to true in the EventGridModule settings (true by default), it must be HTTPS only.
-- If outbound__eventgrid__httpsOnly is set to false, it can be HTTP or HTTPS.
-- If outbound__eventgrid__allowInvalidHostnames is set to false (false by default), it must target one of the following endpoints:
- - `eventgrid.azure.net`
- - `eventgrid.azure.us`
- - `eventgrid.azure.cn`
-
-SasKey:
-- Must be non-null.
-
-TopicName:
-- If the Subscription.EventDeliverySchema is set to EventGridSchema, the value from this field is put into every event's Topic field before being forwarded to Event Grid in the cloud.
-- If the Subscription.EventDeliverySchema is set to CustomEventSchema, this property is ignored and the custom event payload is forwarded exactly as it was received.
-
-## Set up Event Hubs as a destination
-
-To publish to an Event Hub, set the `endpointType` to `eventHub` and provide:
-
-* connectionString: Connection string for the specific Event Hub you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity-specific. Using a namespace connection string will not work. You can generate an entity-specific connection string by navigating to the specific Event Hub you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity-specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventHub",
- "properties": {
- "connectionString": "<your-event-hub-connection-string>"
- }
- }
- }
- }
- ```
-
-## Set up Service Bus Queues as a destination
-
-To publish to a Service Bus Queue, set the `endpointType` to `serviceBusQueue` and provide:
-
-* connectionString: Connection string for the specific Service Bus Queue you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity-specific. Using a namespace connection string will not work. Generate an entity-specific connection string by navigating to the specific Service Bus Queue you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity-specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "serviceBusQueue",
- "properties": {
- "connectionString": "<your-service-bus-queue-connection-string>"
- }
- }
- }
- }
- ```
-
-## Set up Service Bus Topics as a destination
-
-To publish to a Service Bus Topic, set the `endpointType` to `serviceBusTopic` and provide:
-
-* connectionString: Connection string for the specific Service Bus Topic you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity-specific. Using a namespace connection string will not work. Generate an entity-specific connection string by navigating to the specific Service Bus Topic you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity-specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "serviceBusTopic",
- "properties": {
- "connectionString": "<your-service-bus-topic-connection-string>"
- }
- }
- }
- }
- ```
-
-## Set up Storage Queues as a destination
-
-To publish to a Storage Queue, set the `endpointType` to `storageQueue` and provide:
-
-* queueName: Name of the Storage Queue you're publishing to.
-* connectionString: Connection string for the Storage Account the Storage Queue is in.
-
- >[!NOTE]
- > Unlike Event Hubs, Service Bus Queues, and Service Bus Topics, the connection string used for Storage Queues is not entity specific. Instead, it must be the connection string for the Storage Account.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "storageQueue",
- "properties": {
- "queueName": "<your-storage-queue-name>",
- "connectionString": "<your-storage-account-connection-string>"
- }
- }
- }
- }
- ```
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/concepts.md
- Title: Concepts - Azure Event Grid IoT Edge | Microsoft Docs
-description: Concepts in Event Grid on IoT Edge.
---- Previously updated : 02/15/2022----
-# Event Grid concepts
-
-This article describes the main concepts in Azure Event Grid.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-## Events
-
-An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information like: source of the event, time the event took place, and unique identifier. Every event also has specific information that is only relevant to the specific type of event. The support for an event of size up to 1 MB is currently in preview.
-
-For the properties that are included in an event, see [Azure Event Grid event schema](event-schemas.md).
-
-## Publishers
-
-A publisher is the user or organization that decides to send events to Event Grid. You can publish events from your own application.
-
-## Event sources
-
-An event source is where the event happens. Each event source is related to one or more event types. For example, Azure Storage is the event source for blob created events. Your application is the event source for custom events that you define. Event sources are responsible for sending events to Event Grid.
-
-## Topics
-
-The event grid topic provides an endpoint where the source sends events. The publisher creates the event grid topic, and decides whether an event source needs one topic or more than one topic. A topic is used for a collection of related events. To respond to certain types of events, subscribers decide which topics to subscribe to.
-
-When designing your application, you have the flexibility to decide on how many topics to create. For large solutions, create a custom topic for each category of related events. For example, consider an application that sends events related to modifying user accounts and processing orders. It's unlikely any event handler wants both categories of events. Create two custom topics and let event handlers subscribe to the one that interests them. For small solutions, you might prefer to send all events to a single topic. Event subscribers can filter for the event types they want.
-
-See [REST API documentation](api.md) on how to manage topics in Event Grid.
-
-## Event subscriptions
-
-A subscription tells Event Grid which events on a topic you're interested in receiving. When creating the subscription, you provide an endpoint for handling the event. You can filter the events that are sent to the endpoint.
-
-See [REST API documentation](api.md) on how to manage subscriptions in Event Grid.
-
-## Event handlers
-
-From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes further action to process the event. Event Grid supports several handler types. You can use a supported Azure service or your own web hook as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee the delivery of the event. If the destination event handler is an HTTP web hook, the event is retried until the handler returns a status code of `200 ΓÇô OK`. For edge Hub, if the event is delivered without any exception, it is considered successful.
-
-## Security
-
-Event Grid provides security for subscribing to topics, and publishing topics. For more information, see [Event Grid security and authentication](security-authentication.md).
-
-## Event delivery
-
-If Event Grid can't confirm that an event has been received by the subscriber's endpoint, it redelivers the event. For more information, see [Event Grid message delivery and retry](delivery-retry.md).
-
-## Batching
-
-When using a custom topic, events must always be published in an array. For low throughput scenarios, the array will have only one value. For high volume use cases, we recommend that you batch several events together per publish to achieve higher efficiency. Batches can be up to 1 MB. Each event should still not be greater than 1 MB (preview).
event-grid Configure Api Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-api-protocol.md
- Title: Configure API protocols - Azure Event Grid IoT Edge | Microsoft Docs
-description: Learn about the possible protocol configurations of an Event Grid module.
- Previously updated: 02/15/2022
-# Configure Event Grid API protocols
-
-This guide gives examples of the possible protocol configurations of an Event Grid module. The Event Grid module exposes APIs for its management and runtime operations. The following table captures the protocols and ports.
-
-| Protocol | Port | Description |
-| - | | |
-| HTTP | 5888 | Turned off by default. Useful only during testing. Not suitable for production workloads.
-| HTTPS | 4438 | Default
-
-See the [Security and authentication](security-authentication.md) guide for all the possible configurations.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-## Expose HTTPS to IoT Modules on the same edge network
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge"
- ]
-}
- ```
-
-## Enable HTTPS to other IoT modules and non-IoT workloads
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
-}
- ```
-
->[!NOTE]
-> The **PortBindings** section allows you to map internal ports to ports of the container host. This feature makes it possible to reach the Event Grid module from outside the IoT Edge container network, if the IoT edge device is reachable publicly.
-
-## Expose HTTP and HTTPS to IoT modules on the same edge network
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=enabled",
- "inbound__serverAuth__serverCert__source=IoTEdge"
- ]
-}
- ```
-
-## Enable HTTP and HTTPS to other IoT modules and non-IoT workloads
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=enabled",
- "inbound__serverAuth__serverCert__source=IoTEdge"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ],
- "5888/tcp": [
- {
- "HostPort": "5888"
- }
- ]
- }
- }
-}
- ```
-
->[!NOTE]
-> By default, every IoT Edge module is part of the bridge network created by the IoT Edge runtime, which enables different IoT modules on the same network to communicate with each other. **PortBindings** allows you to map a container-internal port onto the host machine, thereby allowing anyone to access the Event Grid module's port from outside the IoT Edge network.
-
->[!IMPORTANT]
-> While the ports can be made accessible outside the IoT Edge network, client authentication determines who is actually allowed to make calls into the module.
event-grid Configure Client Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-client-auth.md
- Title: Configure client authentication of incoming calls - Azure Event Grid IoT Edge | Microsoft Docs
-description: Learn about the possible client authentication configurations for the Event Grid module.
- Previously updated: 02/15/2022
-# Configure client authentication of incoming calls
-
-This guide gives examples of the possible client authentication configurations for the Event Grid module. The Event Grid module supports two types of client authentication:
-
-* Shared access signature (SAS) key-based
-* Certificate-based
-
-See the [Security and authentication](security-authentication.md) guide for all the possible configurations.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-## Enable certificate-based client authentication, no self-signed certificates
-
-```json
- {
- "Env": [
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=false"
- ]
-}
- ```
-
-## Enable certificate-based client authentication, allow self-signed certificates
-
-```json
- {
- "Env": [
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true"
- ]
-}
-```
-
->[!NOTE]
->Set the property **inbound__clientAuth__clientCert__allowUnknownCA** to **true** only in test environments, as you might typically use self-signed certificates there. For production workloads, we recommend that you set this property to **false** and use certificates issued by a certificate authority (CA).
-
-## Enable certificate-based and SAS key-based client authentication
-
-```json
- {
- "Env": [
- "inbound__clientAuth__sasKeys__enabled=true",
- "inbound__clientAuth__sasKeys__key1=<some-secret1-here>",
- "inbound__clientAuth__sasKeys__key2=<some-secret2-here>",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true"
- ]
-}
- ```
-
->[!NOTE]
->SAS key-based client authentication allows a non-IoT Edge module to do management and runtime operations, assuming, of course, that the API ports are accessible outside the IoT Edge network.
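-
-For example, here's a sketch of a management call made from outside the edge network that presents one of the configured keys. It assumes the `aeg-sas-key` request header that Event Grid uses in the cloud; the key value, device IP, and topic name are placeholders:
-
-```sh
-curl -k -H "Content-Type: application/json" -H "aeg-sas-key: <some-secret1-here>" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/<your-topic-name>?api-version=2019-01-01-preview
-```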
event-grid Configure Identity Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-identity-auth.md
- Title: Configure identity - Azure Event Grid IoT Edge | Microsoft Docs
-description: Configure Event Grid module's identity
- Previously updated: 02/15/2022
-# Configure identity for the Event Grid module
-
-This article shows how to configure identity for Event Grid on Edge. By default, the Event Grid module presents its identity certificate as configured by the IoT security daemon. Event Grid on Edge presents its identity certificate with its outgoing calls when it delivers events. A subscriber can then validate that it was the Event Grid module that sent the event before accepting it.
-
-See the [Security and authentication](security-authentication.md) guide for all the possible configurations.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-## Always present identity certificate
-Here's an example configuration for always presenting an identity certificate on outgoing calls.
-
-```json
- {
- "Env": [
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge"
- ]
-}
- ```
-
-## Don't present identity certificate
-Here's an example configuration for not presenting an identity certificate on outgoing calls.
-
-```json
- {
- "Env": [
- "outbound__clientAuth__clientCert__enabled=false"
- ]
-}
- ```
event-grid Configure Webhook Subscriber Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-webhook-subscriber-auth.md
- Title: Configure webhook subscriber authentication - Azure Event Grid IoT Edge | Microsoft Docs
-description: Configure webhook subscriber authentication
- Previously updated: 02/15/2022
-# Configure webhook subscriber authentication
-
-This guide gives examples of the possible webhook subscriber configurations for an Event Grid module. By default, only HTTPS endpoints are accepted for webhook subscribers, and the Event Grid module rejects subscribers that present a self-signed certificate.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-## Allow only HTTPS subscriber
-
-```json
- {
- "Env": [
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=false"
- ]
-}
- ```
-
-## Allow HTTPS subscriber with self-signed certificate
-
-```json
- {
- "Env": [
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ]
-}
- ```
-
->[!NOTE]
->Set the property `outbound__webhook__allowUnknownCA` to `true` only in test environments, as you might typically use self-signed certificates there. For production workloads, we recommend that you set it to **false**.
-
-## Allow HTTPS subscriber but skip certificate validation
-
-```json
- {
- "Env": [
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=true",
- "outbound__webhook__allowUnknownCA=false"
- ]
-}
- ```
-
->[!NOTE]
->Set the property `outbound__webhook__skipServerCertValidation` to `true` only in test environments, as you might not be presenting a certificate that needs to be authenticated. For production workloads, we recommend that you set it to **false**.
-
-## Allow both HTTP and HTTPS with self-signed certificates
-
-```json
- {
- "Env": [
- "outbound__webhook__httpsOnly=false",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ]
-}
- ```
-
->[!NOTE]
->Set the property `outbound__webhook__httpsOnly` to `false` only in test environments, as you might want to bring up an HTTP subscriber first. For production workloads, we recommend that you set it to **true**.
event-grid Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure.md
- Title: Configuration - Azure Event Grid IoT Edge | Microsoft Docs
-description: Configuration in Event Grid on IoT Edge.
- Previously updated: 02/15/2022
-# Event Grid Configuration
-
-Event Grid provides many configurations that can be modified per environment. The following section is a reference to all the available options and their defaults.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-## TLS configuration
-
-To learn about TLS configuration in general, see [Security and Authentication](security-authentication.md). Examples of its usage can be found in [this article](configure-api-protocol.md).
-
-| Property Name | Description |
-| - | |
-|`inbound__serverAuth__tlsPolicy`| TLS Policy of the Event Grid module. Default value is HTTPS only.
-|`inbound__serverAuth__serverCert__source`| Source of server certificate used by the Event Grid Module for its TLS configuration. Default value is IoT Edge.
-
-## Incoming client authentication
-
-To learn about client authentication in general, see [Security and Authentication](security-authentication.md). Examples can be found in [this article](configure-client-auth.md).
-
-| Property Name | Description |
-| - | |
-|`inbound__clientAuth__clientCert__enabled`| To turn on/off certificate-based client authentication. Default value is true.
-|`inbound__clientAuth__clientCert__source`| Source for validating client certificates. Default value is IoT Edge.
-|`inbound__clientAuth__clientCert__allowUnknownCA`| Policy to allow a self-signed client certificate. Default value is true.
|`inbound__clientAuth__sasKeys__enabled`| To turn on/off SAS key-based client authentication. Default value is off.
-|`inbound__clientAuth__sasKeys__key1`| One of the values to validate incoming requests.
-|`inbound__clientAuth__sasKeys__key2`| Optional second value to validate incoming requests.
-
-## Outgoing client authentication
-To learn about client authentication in general, see [Security and Authentication](security-authentication.md). Examples can be found in [this article](configure-identity-auth.md).
-
-| Property Name | Description |
-| - | |
-|`outbound__clientAuth__clientCert__enabled`| To turn on/off attaching an identity certificate for outgoing requests. Default value is true.
-|`outbound__clientAuth__clientCert__source`| Source for retrieving Event Grid module's outgoing certificate. Default value is IoT Edge.
-
-## Webhook event handlers
-
-To learn about webhook event handlers in general, see [Security and Authentication](security-authentication.md). Examples can be found in [this article](configure-webhook-subscriber-auth.md).
-
-| Property Name | Description |
-| - | |
-|`outbound__webhook__httpsOnly`| Policy to control whether only HTTPS subscribers will be allowed. Default value is true (only HTTPS).
-|`outbound__webhook__skipServerCertValidation`| Flag to control whether to validate the subscriber's certificate. Default value is true.
-|`outbound__webhook__allowUnknownCA`| Policy to control whether a self-signed certificate can be presented by a subscriber. Default value is true.
-
-## Delivery and retry
-
-To learn about this feature in general, see [Delivery and Retry](delivery-retry.md).
-
-| Property Name | Description |
-| - | |
-| `broker__defaultMaxDeliveryAttempts` | Maximum number of attempts to deliver an event. Default value is 30.
-| `broker__defaultEventTimeToLiveInSeconds` | Time-to-live (TTL) in seconds after which an event will be dropped if not delivered. Default value is **7200** seconds
-
-## Output batching
-
-To learn about this feature in general, see [Delivery and Output batching](delivery-output-batching.md).
-
-| Property Name | Description |
-| - | |
-| `api__deliveryPolicyLimits__maxBatchSizeInBytes` | Maximum value allowed for the `ApproxBatchSizeInBytes` knob. Default value is `1_058_576`.
-| `api__deliveryPolicyLimits__maxEventsPerBatch` | Maximum value allowed for the `MaxEventsPerBatch` knob. Default value is `50`.
-| `broker__defaultMaxBatchSizeInBytes` | Maximum delivery request size when only `MaxEventsPerBatch` is specified. Default value is `1_058_576`.
-| `broker__defaultMaxEventsPerBatch` | Maximum number of events to add to a batch when only `MaxBatchSizeInBytes` is specified. Default value is `10`.
-
-## Metrics
-
-To learn about using metrics with Event Grid on IoT Edge, see [monitor topics and subscriptions](monitor-topics-subscriptions.md)
-
-| Property Name | Description |
-| - | |
-| `metrics__reporterType` | Reporter type for metrics endpoint. Default is `none` and disables metrics. Setting to `prometheus` enables metrics in the Prometheus exposition format.
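-
-As an illustration, several of these options can be combined in the module's container create options. The following sketch (the values are examples only, not recommendations) enforces HTTPS only, tightens the delivery limits, and turns on Prometheus metrics:
-
-```json
-{
-    "Env": [
-        "inbound__serverAuth__tlsPolicy=strict",
-        "broker__defaultMaxDeliveryAttempts=10",
-        "broker__defaultEventTimeToLiveInSeconds=3600",
-        "metrics__reporterType=prometheus"
-    ]
-}
-```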
event-grid Delivery Output Batching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/delivery-output-batching.md
- Title: Output batching in Azure Event Grid IoT Edge | Microsoft Docs
-description: Output batching in Event Grid on IoT Edge.
- Previously updated: 02/15/2022
-# Output batching
-
-Event Grid supports delivering more than one event in a single delivery request. This feature makes it possible to increase the overall delivery throughput without paying the HTTP per-request overhead. Batching is turned off by default and can be turned on per subscription.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-> [!WARNING]
-> The maximum allowed duration to process each delivery request does not change, even though the subscriber code potentially has to do more work per batched request. Delivery timeout defaults to 60 seconds.
-
-## Batching policy
-
-Event Grid's batching behavior can be customized per subscriber, by tweaking the following two settings:
-
-* Maximum events per batch
-
- This setting sets an upper limit on the number of events that can be added to a batched delivery request.
-
-* Preferred Batch Size In Kilobytes
-
- This knob further controls the maximum number of kilobytes that can be sent per delivery request.
-
-## Batching behavior
-
-* All or none
-
- Event Grid operates with all-or-none semantics. It doesn't support partial success of a batch delivery. Subscribers should be careful to only ask for as many events per batch as they can reasonably handle in 60 seconds.
-
-* Optimistic batching
-
- The batching policy settings aren't strict bounds on the batching behavior, and are respected on a best-effort basis. At low event rates, you'll often observe the batch size being less than the requested maximum events per batch.
-
-* Default is set to OFF
-
- By default, Event Grid adds only one event to each delivery request. To turn on batching, set at least one of the settings mentioned earlier in the article in the event subscription JSON.
-
-* Default values
-
- It isn't necessary to specify both settings (maximum events per batch and preferred batch size in kilobytes) when creating an event subscription. If only one setting is set, Event Grid uses (configurable) default values. See the following sections for the default values and how to override them.
-
-## Turn on output batching
-
-```json
-{
- "properties":
- {
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<your_webhook_url>",
- "maxEventsPerBatch": 10,
- "preferredBatchSizeInKilobytes": 64
- }
- },
- }
-}
-```
-
-## Configuring maximum allowed values
-
-The following deployment time settings control the maximum value allowed when creating an event subscription.
-
-| Property Name | Description |
-| - | -- |
-| `api__deliveryPolicyLimits__maxpreferredBatchSizeInKilobytes` | Maximum value allowed for the `PreferredBatchSizeInKilobytes` knob. Default `1033`.
-| `api__deliveryPolicyLimits__maxEventsPerBatch` | Maximum value allowed for the `MaxEventsPerBatch` knob. Default `50`.
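-
-For example, these limits can be lowered at deployment time through the module's container create options. A sketch, with example values only:
-
-```json
-{
-    "Env": [
-        "api__deliveryPolicyLimits__maxpreferredBatchSizeInKilobytes=512",
-        "api__deliveryPolicyLimits__maxEventsPerBatch=20"
-    ]
-}
-```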
-
-## Configuring runtime default values
-
-The following deployment time settings control the runtime default value of each knob when it isn't specified in the Event Subscription. To reiterate, at least one knob must be set on the Event Subscription to turn on batching behavior.
-
-| Property Name | Description |
-| - | -- |
-| `broker__defaultMaxBatchSizeInBytes` | Maximum delivery request size when only `MaxEventsPerBatch` is specified. Default `1_058_576`.
-| `broker__defaultMaxEventsPerBatch` | Maximum number of events to add to a batch when only `MaxBatchSizeInBytes` is specified. Default `10`.
event-grid Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/delivery-retry.md
- Title: Delivery and retry - Azure Event Grid IoT Edge | Microsoft Docs
-description: Delivery and retry in Event Grid on IoT Edge.
- Previously updated: 02/15/2022
-# Delivery and retry
-
-Event Grid provides durable delivery. It tries to deliver each message at least once for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event, or if there's a failure, Event Grid retries delivery based on a fixed **retry schedule** and **retry policy**. By default, the Event Grid module delivers one event at a time to the subscriber; the payload, however, is an array with a single event. You can have the module deliver more than one event at a time by enabling the output batching feature. For details about this feature, see [output batching](delivery-output-batching.md).
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-> [!IMPORTANT]
->There is no persistence support for event data. This means that redeploying or restarting the Event Grid module causes you to lose any events that aren't yet delivered.
-
-## Retry schedule
-
-Event Grid waits up to 60 seconds for a response after delivering a message. If the subscriber's endpoint doesn't acknowledge the response, the message is enqueued in one of the backoff queues for subsequent retries.
-
-There are two preconfigured backoff queues that determine the schedule on which retries are attempted:
-
-| Schedule | Description |
-| | |
-| 1 minute | Messages that end up here are attempted every minute.
| 10 minutes | Messages that end up here are attempted every 10 minutes.
-
-### How it works
-
-1. A message arrives at the Event Grid module. An attempt is made to deliver it immediately.
-1. If delivery fails, the message is enqueued into the 1-minute queue and retried after a minute.
-1. If delivery continues to fail, the message is enqueued into the 10-minute queue and retried every 10 minutes.
-1. Deliveries are attempted until successful or retry policy limits are reached.
-
-## Retry policy limits
-
-There are two configurations that determine retry policy. They are:
-
-* Maximum number of attempts
-* Event time-to-live (TTL)
-
-An event is dropped if either of the limits of the retry policy is reached. The retry schedule itself was described in the Retry schedule section. These limits can be configured either for all subscribers or on a per-subscription basis. The following sections describe each one in further detail.
-
-## Configuring defaults for all subscribers
-
-Two properties, `broker__defaultMaxDeliveryAttempts` and `broker__defaultEventTimeToLiveInSeconds`, can be configured as part of the Event Grid deployment to control the retry policy defaults for all subscribers.
-
-| Property Name | Description |
-| - | |
-| `broker__defaultMaxDeliveryAttempts` | Maximum number of attempts to deliver an event. Default value: 30.
-| `broker__defaultEventTimeToLiveInSeconds` | Event TTL in seconds after which an event will be dropped if not delivered. Default value: **7200** seconds
-
-## Configuring defaults per subscriber
-
-You can also specify retry policy limits on a per-subscription basis.
-See our [API documentation](api.md) for information on how to configure defaults per subscriber. Subscription-level defaults override the module-level configurations.
-
-## Examples
-
-The following example sets up the retry policy in the Event Grid module with maxNumberOfAttempts = 3 and an event TTL of 30 minutes:
-
-```json
-{
- "Env": [
- "broker__defaultMaxDeliveryAttempts=3",
- "broker__defaultEventTimeToLiveInSeconds=1800"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
-}
-```
-
-The following example sets up a WebHook subscription with maxNumberOfAttempts = 3 and an event TTL of 30 minutes:
-
-```json
-{
- "properties": {
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your_webhook_url>",
- "eventDeliverySchema": "eventgridschema"
- }
- },
- "retryPolicy": {
- "eventExpiryInMinutes": 30,
- "maxDeliveryAttempts": 3
- }
- }
-}
-```
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/event-handlers.md
- Title: Event Handlers and destinations - Azure Event Grid IoT Edge | Microsoft Docs
-description: Event Handlers and destinations in Event Grid on Edge
- Previously updated: 02/15/2022
-# Event Handlers and destinations in Event Grid on Edge
-
-An event handler is the place where the event is sent for further action or processing. With the Event Grid on Edge module, the event handler can be on the same edge device, another device, or in the cloud. You can use any WebHook to handle events, or send events to one of the native handlers like Azure Event Grid.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-This article provides information on how to configure each.
-
-## WebHook
-
-To publish to a WebHook endpoint, set the `endpointType` to `WebHook` and provide:
-
-* endpointUrl: The WebHook endpoint URL
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-webhook-endpoint>"
- }
- }
- }
- }
- ```
-
-## Azure Event Grid
-
-To publish to an Azure Event Grid cloud endpoint, set the `endpointType` to `eventGrid` and provide:
-
-* endpointUrl: Event Grid Topic URL in the cloud
-* sasKey: Event Grid Topic's SAS key
-* topicName: Name to stamp on all outgoing events to Event Grid. The topic name is useful when posting to an Event Grid domain topic.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventGrid",
- "properties": {
- "endpointUrl": "<your-event-grid-cloud-topic-endpoint-url>?api-version=2018-01-01",
- "sasKey": "<your-event-grid-topic-saskey>",
- "topicName": null
- }
- }
- }
- }
- ```
-
-## IoT Edge Hub
-
-To publish to an Edge Hub module, set the `endpointType` to `edgeHub` and provide:
-
-* outputName: The output on which the Event Grid module routes events that match this subscription to edgeHub. For example, events that match the following subscription are written to /messages/modules/eventgridmodule/outputs/sampleSub4.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "edgeHub",
- "properties": {
- "outputName": "sampleSub4"
- }
- }
- }
- }
- ```
-
-## Event Hubs
-
-To publish to an Event Hub, set the `endpointType` to `eventHub` and provide:
-
-* connectionString: Connection string for the specific Event Hub you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity specific. Using a namespace connection string won't work. You can generate an entity-specific connection string by navigating to the specific event hub you'd like to publish to in the Azure portal and selecting **Shared access policies**.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventHub",
- "properties": {
- "connectionString": "<your-event-hub-connection-string>"
- }
- }
- }
- }
- ```
-
-## Service Bus Queues
-
-To publish to a Service Bus Queue, set the `endpointType` to `serviceBusQueue` and provide:
-
-* connectionString: Connection string for the specific Service Bus Queue you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity specific. Using a namespace connection string won't work. Generate an entity-specific connection string by navigating to the specific Service Bus queue you'd like to publish to in the Azure portal and selecting **Shared access policies**.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "serviceBusQueue",
- "properties": {
- "connectionString": "<your-service-bus-queue-connection-string>"
- }
- }
- }
- }
- ```
-
-## Service Bus Topics
-
-To publish to a Service Bus Topic, set the `endpointType` to `serviceBusTopic` and provide:
-
-* connectionString: Connection string for the specific Service Bus Topic you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity specific. Using a namespace connection string won't work. Generate an entity-specific connection string by navigating to the specific Service Bus topic you'd like to publish to in the Azure portal and selecting **Shared access policies**.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "serviceBusTopic",
- "properties": {
- "connectionString": "<your-service-bus-topic-connection-string>"
- }
- }
- }
- }
- ```
-
-## Storage Queues
-
-To publish to a Storage Queue, set the `endpointType` to `storageQueue` and provide:
-
-* queueName: Name of the Storage Queue you're publishing to.
-* connectionString: Connection string for the Storage Account the Storage Queue is in.
-
- >[!NOTE]
- > Unlike Event Hubs, Service Bus queues, and Service Bus topics, the connection string used for Storage queues isn't entity specific. Instead, it must be the connection string for the storage account.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "storageQueue",
- "properties": {
- "queueName": "<your-storage-queue-name>",
- "connectionString": "<your-storage-account-connection-string>"
- }
- }
- }
- }
- ```
event-grid Event Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/event-schemas.md
- Title: Event schemas - Azure Event Grid IoT Edge | Microsoft Docs
-description: Event schemas in Event Grid on IoT Edge.
- Previously updated: 02/15/2022
-# Event schemas
-
-The Event Grid module accepts and delivers events in JSON format. Event Grid currently supports three schemas:
-
-* **EventGridSchema**
-* **CustomSchema**
-* **CloudEventSchema**
-
-You can configure the schema that a publisher must conform to during topic creation. If unspecified, it defaults to **EventGridSchema**. Events that don't conform to the expected schema will be rejected.
-
-Subscribers can also configure the schema in which they want the events delivered. If unspecified, the default is the topic's schema.
-Currently, the subscriber's delivery schema has to match its topic's input schema.
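-
-For example, here's a sketch of a topic creation payload that pins the input schema at creation time. It follows the topic payload pattern used in the tutorials later in this collection; the topic name is a placeholder:
-
-```json
-{
-    "name": "<your-topic-name>",
-    "properties": {
-        "inputschema": "eventGridSchema"
-    }
-}
-```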
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-## EventGrid schema
-
-EventGrid schema consists of a set of required properties that a publishing entity must conform to. Each publisher has to populate the top-level fields.
-
-```json
-[
- {
- "topic": string,
- "subject": string,
- "id": string,
- "eventType": string,
- "eventTime": string,
- "data":{
- object-unique-to-each-publisher
- },
- "dataVersion": string,
- "metadataVersion": string
- }
-]
-```
-
-### EventGrid schema properties
-
-All events have the following top-level data:
-
-| Property | Type | Required | Description |
-| -- | - | -- |--
-| topic | string | No | Should match the topic on which it's published. Event Grid populates it with the name of the topic on which it's published if unspecified. |
-| subject | string | Yes | Publisher-defined path to the event subject. |
-| eventType | string | Yes | Event type for this event source, for example, BlobCreated. |
-| eventTime | string | Yes | The time the event is generated based on the provider's UTC time. |
-| id | string | No | Unique identifier for the event. |
-| data | object | No | Used to capture event data that's specific to the publishing entity. |
-| dataVersion | string | Yes | The schema version of the data object. The publisher defines the schema version. |
-| metadataVersion | string | No | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
-
-### Example: EventGrid schema event
-
-```json
-[
- {
- "id": "1807",
- "eventType": "recordInserted",
- "subject": "myapp/vehicles/motorcycles",
- "eventTime": "2017-08-10T21:03:07+00:00",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- },
- "dataVersion": "1.0"
- }
-]
-```
-
-## CustomEvent schema
-
-In the custom schema, no mandatory properties are enforced, unlike the EventGrid schema. The publishing entity controls the event schema entirely. It provides maximum flexibility and enables scenarios where you already have an event-based system in place and would like to reuse existing events, or don't want to be tied to a specific schema.
-
-### Custom schema properties
-
-No mandatory properties. It's up to the publishing entity to determine the payload.
-
-### Example: custom schema event
-
-```json
-[
- {
- "eventdata": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
-]
-```
-
-## CloudEvent schema
-
-In addition to the above schemas, Event Grid natively supports events in the [CloudEvents JSON schema](https://github.com/cloudevents/spec/blob/main/cloudevents/formats/json-format.md). CloudEvents is an open specification for describing event data. It simplifies interoperability by providing a common event schema for publishing and consuming events. It's part of the [CNCF](https://www.cncf.io/), and the currently available version is 1.0-rc1.
-
-### CloudEvent schema properties
-
-Refer to [CloudEvents specification](https://github.com/cloudevents/spec/blob/main/cloudevents/formats/json-format.md#3-envelope) on the mandatory envelope properties.
-
-### Example: cloud event
-```json
-[{
- "id": "1807",
- "type": "recordInserted",
- "source": "myapp/vehicles/motorcycles",
- "time": "2017-08-10T21:03:07+00:00",
- "datacontenttype": "application/json",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- },
- "dataVersion": "1.0",
- "specVersion": "1.0-rc1"
-}]
-```
event-grid Forward Events Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/forward-events-cloud.md
- Title: Forward edge events to Event Grid cloud - Azure Event Grid IoT Edge | Microsoft Docs
-description: Forward edge events to Event Grid cloud
- Previously updated: 02/15/2022
-# Tutorial: Forward events to Event Grid cloud
-
-This article walks through all the steps needed to forward edge events to Event Grid in the Azure cloud. You might want to do it for the following reasons:
-
-* React to edge events in the cloud.
-* Forward events to Event Grid in the cloud and use Azure Event Hubs or Azure Storage queues to buffer events before processing them in the cloud.
-
-To complete this tutorial, you need an understanding of Event Grid concepts on the [edge](concepts.md) and in [Azure](../concepts.md). For more destination types, see [event handlers](event-handlers.md).
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-## Prerequisites
-In order to complete this tutorial, you need:
-
-* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one.
-* **Azure IoT Hub and IoT Edge device** - Follow the steps in the quick start for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one.
-
-## Create Event Grid topic and subscription in cloud
-
-Create an Event Grid topic and subscription in the cloud by following [this tutorial](../custom-event-quickstart-portal.md). Note down the `topicURL`, `sasKey`, and `topicName` of the newly created topic; you'll use them later in the tutorial.
-
-For example, if you created a topic named `testegcloudtopic` in West US, the values would look something like:
-
-* **TopicUrl**: `https://testegcloudtopic.westus2-1.eventgrid.azure.net/api/events`
-* **TopicName**: `testegcloudtopic`
-* **SasKey**: Available under **AccessKey** of your topic. Use **key1**.
-
-## Create Event Grid topic at the edge
-
-1. Create topic3.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "name": "sampleTopic3",
- "properties": {
- "inputschema": "eventGridSchema"
- }
- }
- ```
-1. Run the following command to create the topic. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @topic3.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3?api-version=2019-01-01-preview
- ```
-1. Run the following command to verify that the topic was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- [
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic3",
- "name": "sampleTopic3",
- "type": "Microsoft.EventGrid/topics",
- "properties": {
- "endpoint": "https://<edge-vm-ip>:4438/topics/sampleTopic3/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema"
- }
- }
- ]
- ```
-
-## Create Event Grid subscription at the edge
-1. Create subscription3.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventGrid",
- "properties": {
- "endpointUrl": "<your-event-grid-cloud-topic-endpoint-url>?api-version=2018-01-01",
- "sasKey": "<your-event-grid-topic-saskey>",
- "topicName": null
- }
- }
- }
- }
- ```
-
- >[!NOTE]
- > The **endpointUrl** specifies the Event Grid topic URL in the cloud. The **sasKey** refers to the Event Grid cloud topic's key. The value in **topicName** is used to stamp all outgoing events to Event Grid, which can be useful when posting to an Event Grid domain topic. For more information about Event Grid domain topics, see [Event domains](../event-domains.md)
-
- For example,
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventGrid",
- "properties": {
- "endpointUrl": "https://testegcloudtopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01",
- "sasKey": "<your-event-grid-topic-saskey>",
- "topicName": null
- }
- }
- }
- }
- ```
-
-2. Run the following command to create the subscription. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @subscription3.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3/eventSubscriptions/sampleSubscription3?api-version=2019-01-01-preview
- ```
-
-3. Run the following command to verify that the subscription was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3/eventSubscriptions/sampleSubscription3?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic3/eventSubscriptions/sampleSubscription3",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "name": "sampleSubscription3",
- "properties": {
- "Topic": "sampleTopic3",
- "destination": {
- "endpointType": "eventGrid",
- "properties": {
- "endpointUrl": "https://testegcloudtopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01",
- "sasKey": "<your-event-grid-topic-saskey>",
- "topicName": null
- }
- }
- }
- }
- ```
-
-## Publish an event at the edge
-
-1. Create event3.json with the following content. See [API documentation](api.md) for details about the payload.
-
- ```json
- [
- {
- "id": "eventId-egcloud-0",
- "eventType": "recordInserted",
- "subject": "myapp/vehicles/motorcycles",
- "eventTime": "2019-07-28T21:03:07+00:00",
- "dataVersion": "1.0",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
- ]
- ```
-
-1. Run the following command:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X POST -g -d @event3.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3/events?api-version=2019-01-01-preview
- ```
-
-## Verify edge event in cloud
-
-For information on viewing events delivered by the cloud topic, see the [tutorial](../custom-event-quickstart-portal.md).
-
-## Clean up resources
-
-* Run the following command to delete the topic and all its subscriptions:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X DELETE https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3?api-version=2019-01-01-preview
- ```
-
-* Delete topic and subscriptions created in the cloud (Azure Event Grid) as well.
-
-## Next steps
-
-In this tutorial, you published an event on the edge and forwarded it to Event Grid in the Azure cloud. Now that you know the basic steps to forward events to Event Grid in the cloud:
-
-* To troubleshoot issues with using Azure Event Grid on IoT Edge, see [Troubleshooting guide](troubleshoot.md).
-* Forward events to IoTHub by following this [tutorial](forward-events-iothub.md)
-* Forward events to Webhook in the cloud by following this [tutorial](pub-sub-events-webhook-cloud.md)
-* [Monitor topics and subscriptions on the edge](monitor-topics-subscriptions.md)
event-grid Forward Events Iothub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/forward-events-iothub.md
- Title: Forward Event Grid events to IoTHub - Azure Event Grid IoT Edge | Microsoft Docs
-description: Forward Event Grid events to IoTHub
- Previously updated: 02/15/2022
-# Tutorial: Forward events to IoTHub
-
-This article walks through all the steps needed to forward Event Grid events to other IoT Edge modules or to IoT Hub using routes. You might want to do it for the following reasons:
-
-* Continue to use any existing investments already in place with edgeHub's routing
-* Prefer to route all events from a device only via IoT Hub
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-To complete this tutorial, you need to understand the following concepts:
-
-* [Event Grid Concepts](concepts.md)
-* [IoT Edge hub](../../iot-edge/module-composition.md)
-## Prerequisites
-In order to complete this tutorial, you need:
-
-* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one.
-* **Azure IoT Hub and IoT Edge device** - Follow the steps in the quick start for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one.
-## Create topic
-
-As a publisher of an event, you need to create an Event Grid topic. The topic is an endpoint to which publishers send events.
-
-1. Create topic4.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "name": "sampleTopic4",
- "properties": {
- "inputschema": "eventGridSchema"
- }
- }
- ```
-1. Run the following command to create the topic. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @topic4.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4?api-version=2019-01-01-preview
- ```
-
-1. Run the following command to verify that the topic was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- [
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic4",
- "name": "sampleTopic4",
- "type": "Microsoft.EventGrid/topics",
- "properties": {
- "endpoint": "https://<edge-vm-ip>:4438/topics/sampleTopic4/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema"
- }
- }
- ]
- ```
-
-## Create event subscription
-
-Subscribers can register for events published to a topic. To receive any events, they need to create an Event Grid subscription on a topic of interest.
-1. Create subscription4.json with the below content. Refer to our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "edgeHub",
- "properties": {
- "outputName": "sampleSub4"
- }
- }
- }
- }
- ```
-
- >[!NOTE]
- > The `endpointType` specifies that the subscriber is `edgeHub`. The `outputName` specifies the output on which the Event Grid module will route events that match this subscription to edgeHub. For example, events that match the above subscription will be written to `/messages/modules/eventgridmodule/outputs/sampleSub4`.
-2. Run the following command to create the subscription. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @subscription4.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4/eventSubscriptions/sampleSubscription4?api-version=2019-01-01-preview
- ```
-3. Run the following command to verify that the subscription was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4/eventSubscriptions/sampleSubscription4?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic4/eventSubscriptions/sampleSubscription4",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "name": "sampleSubscription4",
- "properties": {
- "Topic": "sampleTopic4",
- "destination": {
- "endpointType": "edgeHub",
- "properties": {
- "outputName": "sampleSub4"
- }
- }
- }
- }
- ```
-
-## Set up an edge hub route
-
-Update the edge hub's routes to forward the event subscription's events to IoT Hub as follows:
-
-1. Sign in to the [Azure portal](https://portal.azure.com)
-1. Navigate to the **IoT Hub**.
-1. Select **IoT Edge** from the menu
-1. Select the ID of the target device from the list of devices.
-1. Select **Set Modules**.
-1. Select **Next** to move to the routes section.
-1. In the routes section, add a new route:
-
- ```sh
- "fromEventGridToIoTHub":"FROM /messages/modules/eventgridmodule/outputs/sampleSub4 INTO $upstream"
- ```
-
- For example,
-
- ```json
- {
- "routes": {
- "fromEventGridToIoTHub": "FROM /messages/modules/eventgridmodule/outputs/sampleSub4 INTO $upstream"
- }
- }
- ```
-
- >[!NOTE]
- > The above route forwards any events matched for this subscription to the IoT hub. You can use the [Edge hub routing](../../iot-edge/module-composition.md) features to further filter and route the Event Grid events to other IoT Edge modules.
-
-## Set up an IoT Hub route
-
-See the [IoT Hub routing tutorial](../../iot-hub/tutorial-routing.md) to set up a route from the IoT hub so that you can view events forwarded from the Event Grid module. Use `true` for the query to keep the tutorial simple.
-## Publish an event
-
-1. Create event4.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- [
- {
- "id": "eventId-iothub-1",
- "eventType": "recordInserted",
- "subject": "myapp/vehicles/motorcycles",
- "eventTime": "2019-07-28T21:03:07+00:00",
- "dataVersion": "1.0",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
- ]
- ```
-
-1. Run the following command to publish the event:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X POST -g -d @event4.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4/events?api-version=2019-01-01-preview
- ```
-
-## Verify event delivery
-
-See the IoT Hub [routing tutorial](../../iot-hub/tutorial-routing.md) for the steps to view the events.
-
-## Clean up resources
-
-* Run the following command to delete the topic and all its subscriptions at the edge:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X DELETE https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4?api-version=2019-01-01-preview
- ```
-* Delete any resources created while setting up IoTHub routing in the cloud as well.
-
-## Next steps
-
-In this tutorial, you created an Event Grid topic and an edge hub subscription, and published events. Now that you know the basic steps to forward events to an edge hub, see the following articles:
-
-* To troubleshoot issues with using Azure Event Grid on IoT Edge, see [Troubleshooting guide](troubleshoot.md).
-* Use [edge hub](../../iot-edge/module-composition.md) route filters to partition events
-* Set up persistence of Event Grid module on [linux](persist-state-linux.md) or [Windows](persist-state-windows.md)
-* Follow [documentation](configure-client-auth.md) to configure client authentication
-* Forward events to Azure Event Grid in the cloud by following this [tutorial](forward-events-cloud.md)
-* [Monitor topics and subscriptions on the edge](monitor-topics-subscriptions.md)
event-grid Monitor Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/monitor-topics-subscriptions.md
- Title: Monitor topics and event subscriptions - Azure Event Grid IoT Edge | Microsoft Docs
-description: Monitor topics and event subscriptions
- Previously updated: 05/10/2021
-# Monitor topics and event subscriptions
-
-Event Grid on Edge exposes a number of metrics for topics and event subscriptions in the [Prometheus exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/). This article describes the available metrics and how to enable them.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-## Enable metrics
-
-Configure the module to emit metrics by setting the `metrics__reporterType` environment variable to `prometheus` in the container create options:
-
- ```json
- {
- "Env": [
- "metrics__reporterType=prometheus"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-
-Metrics are available at `5888/metrics` of the module for HTTP and at `4438/metrics` for HTTPS. For example, `http://<modulename>:5888/metrics?api-version=2019-01-01-preview` for HTTP. At this point, a metrics module can poll the endpoint to collect metrics, as in this [example architecture](https://github.com/veyalla/ehm).
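-
-For example, from another module on the same edge network you can scrape the endpoint manually to confirm that metrics are flowing. This sketch assumes the HTTP port is enabled and uses a placeholder module name:
-
-```sh
-curl "http://<modulename>:5888/metrics?api-version=2019-01-01-preview"
-```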
-
-## Available metrics
-
-Both topics and event subscriptions emit metrics to give you insights into event delivery and module performance.
-
-### Topic metrics
-
-| Metric | Description |
-| | -- |
-| EventsReceived | Number of events published to the topic
-| UnmatchedEvents | Number of events published to the topic that do not match an Event Subscription and are dropped
-| SuccessRequests | Number of inbound publish requests received by the topic
-| SystemErrorRequests | Number of inbound publish requests that failed due to an internal system error
-| UserErrorRequests | Number of inbound publish requests that failed due to a user error, such as malformed JSON
-| SuccessRequestLatencyMs | Publish request response latency in milliseconds
-### Event subscription metrics
-
-| Metric | Description |
-| | -- |
-| DeliverySuccessCounts | Number of events successfully delivered to the configured endpoint
-| DeliveryFailureCounts | Number of events that failed to be delivered to the configured endpoint
-| DeliverySuccessLatencyMs | Latency of events successfully delivered in milliseconds
-| DeliveryFailureLatencyMs | Latency of events delivery failures in milliseconds
-| SystemDelayForFirstAttemptMs | System delay of events before first delivery attempt in milliseconds
-| DeliveryAttemptsCount | Number of event delivery attempts - success and failure
-| ExpiredCounts | Number of events that expired and were not delivered to the configured endpoint
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/overview.md
- Title: Event-driven architectures on edge - Azure Event Grid on IoT Edge
-description: Use Azure Event Grid as a module on IoT Edge to forward events between modules, edge devices, and the cloud.
- Previously updated: 02/15/2022
-# What is Azure Event Grid on Azure IoT Edge?
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-
-Event Grid on IoT Edge brings the power and flexibility of Azure Event Grid to the edge. Create topics, publish events, and subscribe multiple destinations, whether they're modules on the same device, other edge devices, or services in the cloud.
-
-As in the cloud, the Event Grid on IoT Edge module handles routing, filtering, and reliable delivery of events at scale. Filter events to ensure that only relevant events are sent to different event handlers using advanced string, numerical, and boolean filters. Retry logic makes sure that the event reaches the target destination even if it's not available at the time of publish. It allows you to use Event Grid on IoT Edge as a powerful store-and-forward mechanism.
-
-Event Grid on IoT Edge supports both CloudEvents v1.0 and custom event schemas. It also supports the same Pub/Sub semantics as Event Grid in the cloud for easy interoperability.
-
-This article provides an overview of Azure Event Grid on IoT Edge. For step-by-step instructions to use this module on edge, see [Publish, subscribe to events locally](pub-sub-events-webhook-local.md).
-
-![Event Grid on IoT Edge model of sources and handlers](../media/edge-overview/functional-model.png)
-
-This image shows some of the ways you can use Event Grid on IoT Edge, and isn't a comprehensive list of supported functionality.
-
-## When to use Event Grid on IoT Edge
-
-Event Grid on IoT Edge provides an easy-to-use, reliable eventing model between the edge and the cloud.
-
-Event Grid on IoT Edge is built with a symmetrical runtime surface area to the Azure cloud service, so you can use the same events and API calls wherever you need. Whether you do pub/sub in the cloud, on the edge, or between the two, Event Grid on IoT Edge can now be your one go-to solution.
-
-Use Event Grid on IoT Edge to trigger simple workflows between modules. For example, create a topic and publish "storage blob created" events from your storage module to the topic. You can then subscribe one or several functions or custom modules to that topic.
-
-Extend your functionality between edge devices. If you're publishing blob module events and want to use the computational power of multiple nearby edge devices, create cross-device subscriptions.
-
-Finally, connect to the cloud. If your blob module events are to be periodically synced to the cloud, if you want to use the greater compute available in the cloud, or if you want to send processed data up, create additional cloud service subscriptions.
-
-Event Grid on IoT Edge provides a flexible and reliable decoupled eventing architecture.
-
-## Event sources
-
-Much like in the cloud, Event Grid on IoT Edge allows direct integration between modules to build event driven architectures. Currently, the events can be sent to Event Grid on IoT Edge from:
-
-* Azure Blob Storage on IoT Edge
-* CloudEvents sources
-* Custom modules & containers via HTTP POST
-
-## Event handlers
-
-Event Grid on IoT Edge is built to send events to anywhere you want. Currently, the following destinations are supported:
-
-* Other modules including IoT Hub, functions, and custom modules
-* Other edge devices
-* WebHooks
-* Azure Event Grid cloud service
-* Event Hubs
-* Service Bus Queues
-* Service Bus Topics
-* Storage Queues
-
-## Supported environments
-Currently, Windows 64-bit, Linux 64-bit, and ARM 32-bit environments are supported.
-
-## Concepts
-
-There are five concepts in Azure Event Grid that let you get started:
-
-* **Events** - What happened.
-* **Event sources** - Where the event took place.
-* **Topics** - The endpoint where publishers send events.
-* **Event subscriptions** - The endpoint or built-in mechanism to route events, sometimes to more than one handler. Subscriptions are also used by handlers to intelligently filter incoming events.
-* **Event handlers** - The app or service that reacts to the event.
-
-## Cost
-
-Event Grid on IoT Edge is free during public preview.
-
-## Issues
-Report any issues with using Event Grid on IoT Edge at [https://github.com/Azure/event-grid-iot-edge/issues](https://github.com/Azure/event-grid-iot-edge/issues).
-
-## Next steps
-
-* [Publish, subscribe to events locally](pub-sub-events-webhook-local.md)
-* [Publish, subscribe to events in cloud](pub-sub-events-webhook-cloud.md)
-* [Forward events to Event Grid cloud](forward-events-cloud.md)
-* [Forward events to IoTHub](forward-events-iothub.md)
-* [React to Blob Storage events locally](react-blob-storage-events-locally.md)
event-grid Persist State Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/persist-state-linux.md
- Title: Persist state in Linux - Azure Event Grid IoT Edge | Microsoft Docs
-description: Persist metadata in Linux
----- Previously updated : 05/10/2021---
-# Persist state in Linux
-
-Topics and subscriptions created in the Event Grid module are stored in the container file system by default. Without persistence, if the module is redeployed, all the metadata created would be lost. To preserve the data across deployments and restarts, you need to persist the data outside the container file system.
-
-By default, only metadata is persisted; events are still stored in memory for improved performance. Follow the [Persist events](#persist-events) section to enable event persistence as well.
-
-This article provides the steps to deploy the Event Grid module with persistence in Linux deployments.
-
-> [!NOTE]
->The Event Grid module runs as a low-privileged user with UID `2000` and name `eventgriduser`.
-
-## Persistence via volume mount
-
- [Docker volumes](https://docs.docker.com/storage/volumes/) are used to preserve the data across deployments. You can let Docker automatically create a named volume as part of deploying the Event Grid module, which is the simplest option. Specify the volume name to be created in the **Binds** section as follows:
-
-```json
- {
- "HostConfig": {
- "Binds": [
- "<your-volume-name-here>:/app/metadataDb"
- ]
- }
- }
-```
-
->[!IMPORTANT]
->Do not change the second part of the bind value. It points to a specific location within the module. For the Event Grid module on Linux, it has to be **/app/metadataDb**.
-
-For example, the following configuration results in the creation of two volumes: **egmetadataDbVol**, where metadata is persisted, and **egdataDbVol**, where events are persisted.
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "egmetadataDbVol:/app/metadataDb",
- "egdataDbVol:/app/eventsDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
-}
-```
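-
-To confirm that the volumes were created after you deploy the module, you can inspect them with the Docker CLI (a quick check, assuming you're using the default Docker engine; on an IoT Edge device you may need to point `-H` at the IoT Edge Moby engine instead):
-
-```sh
-docker volume inspect egmetadataDbVol egdataDbVol
-```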
-
-Instead of mounting a volume, you can create a directory on the host system and mount that directory.
-
-## Persistence via host directory mount
-
-Instead of a Docker volume, you can also mount a host folder.
-
-1. First create a user with name **eventgriduser** and ID **2000** on the host machine by running the following command:
-
- ```sh
- sudo useradd -u 2000 eventgriduser
- ```
-1. Create a directory on the host file system by running the following command.
-
- ```sh
- mkdir <your-directory-name-here>
- ```
-
- For example, running the following command will create a directory called **myhostdir**.
-
- ```sh
- mkdir /myhostdir
- ```
-1. Next, make **eventgriduser** owner of this folder by running the following command.
-
- ```sh
- sudo chown eventgriduser:eventgriduser -hR <your-directory-name-here>
- ```
-
- For example,
-
- ```sh
- sudo chown eventgriduser:eventgriduser -hR /myhostdir
- ```
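-
- Optionally, verify the ownership change (the directory name is the one you created earlier):
-
- ```sh
- ls -ld /myhostdir
- ```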
-1. Use **Binds** to mount the directories and redeploy the Event Grid module from the Azure portal. Create and grant ownership on each host directory that you mount, as shown in the previous steps.
-
- ```json
- {
- "HostConfig": {
- "Binds": [
- "<your-directory-name-here>:/app/metadataDb",
- "<your-directory-name-here>:/app/eventsDb",
- ]
- }
- }
- ```
-
- For example,
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "/myhostdir:/app/metadataDb",
- "/myhostdir2:/app/eventsDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-
- >[!IMPORTANT]
- >Do not change the second part of the bind value. It points to a specific location within the module. For the Event Grid module on Linux, it has to be **/app/metadataDb** and **/app/eventsDb**.
-
-## Persist events
-
-To enable event persistence, you must first enable metadata persistence via either a volume mount or a host directory mount, as described in the preceding sections.
-
-Important things to note about persisting events:
-
-* Persisting events is enabled on a per-event-subscription basis and is opt-in after a volume or directory has been mounted.
-* Event persistence is configured on an event subscription at creation time and cannot be modified once the event subscription is created. To toggle event persistence, you must delete and re-create the event subscription.
-* Persisting events is almost always slower than in-memory operations; however, the speed difference is highly dependent on the characteristics of the drive. The tradeoff between speed and reliability is inherent to all messaging systems, but generally becomes noticeable only at large scale.
-
-To enable event persistence on an event subscription, set `isPersisted` in its `persistencePolicy` to `true`:
-
- ```json
- {
- "properties": {
- "persistencePolicy": {
- "isPersisted": "true"
- },
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-webhook-url>"
- }
- }
- }
- }
- ```
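-
-For example, if you save the payload above as **persistedSub.json**, a call like the following creates the persisted event subscription (a sketch that assumes a topic named **sampleTopic** exists and uses the same endpoint and API version as the other Event Grid on IoT Edge articles):
-
-```sh
-curl -k -H "Content-Type: application/json" -X PUT -g -d @persistedSub.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic/eventSubscriptions/samplePersistedSub?api-version=2019-01-01-preview
-```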
event-grid Persist State Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/persist-state-windows.md
- Title: Persist state in Windows - Azure Event Grid IoT Edge | Microsoft Docs
-description: Persist state in Windows
----- Previously updated : 02/15/2022---
-# Persist state in Windows
-
-Topics and subscriptions created in the Event Grid module are stored in the container file system by default. Without persistence, if the module is redeployed, all the metadata created would be lost. To preserve the data across deployments and restarts, you need to persist the data outside the container file system.
-
-By default, only metadata is persisted; events are still stored in memory for improved performance. Follow the [Persist events](#persist-events) section to enable event persistence as well.
-
-This article provides the steps needed to deploy the Event Grid module with persistence in Windows deployments.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-
-> [!NOTE]
->The Event Grid module runs as a low-privileged user **ContainerUser** in Windows.
-
-## Persistence via volume mount
-
-[Docker volumes](https://docs.docker.com/storage/volumes/) are used to preserve data across deployments. To mount a volume, create it by using Docker commands, grant permissions so that the container can read from and write to it, and then deploy the module.
-
-1. Create a volume by running the following command:
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine volume create <your-volume-name-here>
- ```
-
- For example,
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine volume create myeventgridvol
- ```
-1. Get the host directory that the volume maps to by running the following command:
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine volume inspect <your-volume-name-here>
- ```
-
- For example,
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine volume inspect myeventgridvol
- ```
-
- Sample output:
-
- ```json
- [
- {
- "CreatedAt": "2019-07-30T21:20:59Z",
- "Driver": "local",
- "Labels": {},
- "Mountpoint": "C:\\ProgramData\\iotedge-moby\u000bolumes\\myeventgridvol\\_data",
- "Name": "myeventgridvol",
- "Options": {},
- "Scope": "local"
- }
- ]
- ```
-1. Add the **Users** group to the folder pointed to by **Mountpoint** as follows (to script this step instead, see the `icacls` alternative after the examples at the end of this section):
- 1. Launch File Explorer.
- 1. Navigate to the folder pointed to by **Mountpoint**.
- 1. Right-click, and then select **Properties**.
- 1. Select **Security**.
- 1. Under *Group or user names*, select **Edit**.
- 1. Select **Add**, enter `Users`, select **Check Names**, and then select **OK**.
- 1. Under *Permissions for Users*, select **Modify**, and then select **OK**.
-1. Use **Binds** to mount this volume and redeploy the Event Grid module from the Azure portal.
-
- For example,
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "<your-volume-name-here>:C:\\app\\metadataDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-
- >[!IMPORTANT]
- >Do not change the second part of the bind value. It points to a specific location in the module. For the Event Grid module on Windows, it has to be **C:\\app\\metadataDb**.
-
- For example, the following configuration persists metadata in the **myeventgridvol** volume and also binds a host directory for **eventsDb** to persist events:
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "myeventgridvol:C:\\app\\metadataDb",
- "C:\\myhostdir2:C:\\app\\eventsDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
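-
-If you prefer to script the **Users** permission step instead of using File Explorer, a command along these lines from an elevated prompt should be equivalent (a sketch; the path is the **Mountpoint** value from the sample output above, and `(OI)(CI)M` grants the group inheritable modify access):
-
-```sh
-icacls "C:\ProgramData\iotedge-moby\volumes\myeventgridvol\_data" /grant "Users:(OI)(CI)M"
-```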
-
-## Persistence via host directory mount
-
-Instead of mounting a volume, you can create a directory on the host system and mount that directory.
-
-1. Create a directory on the host filesystem by running the following command.
-
- ```sh
- mkdir <your-directory-name-here>
- ```
-
- For example,
-
- ```sh
- mkdir C:\myhostdir
- ```
-1. Use **Binds** to mount your directory and redeploy the Event Grid module from the Azure portal.
-
- ```json
- {
- "HostConfig": {
- "Binds": [
- "<your-directory-name-here>:C:\\app\\metadataDb"
- ]
- }
- }
- ```
-
- >[!IMPORTANT]
- >Do not change the second part of the bind value. It points to a specific location in the module. For the Event Grid module on Windows, it has to be **C:\\app\\metadataDb**.
-
- For example,
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "C:\\myhostdir:C:\\app\\metadataDb",
- "C:\\myhostdir2:C:\\app\\eventsDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-## Persist events
-
-To enable event persistence, you must first enable metadata persistence via either a volume mount or a host directory mount, as described in the preceding sections.
-
-Important things to note about persisting events:
-
-* Persisting events is enabled on a per-event-subscription basis and is opt-in after a volume or directory has been mounted.
-* Event persistence is configured on an event subscription at creation time and cannot be modified once the event subscription is created. To toggle event persistence, you must delete and re-create the event subscription.
-* Persisting events is almost always slower than in-memory operations; however, the speed difference is highly dependent on the characteristics of the drive. The tradeoff between speed and reliability is inherent to all messaging systems, but becomes noticeable only at large scale.
-
-To enable event persistence on an event subscription, set `isPersisted` in its `persistencePolicy` to `true`:
-
- ```json
- {
- "properties": {
- "persistencePolicy": {
- "isPersisted": "true"
- },
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-webhook-url>"
- }
- }
- }
- }
- ```
event-grid Pub Sub Events Webhook Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/pub-sub-events-webhook-cloud.md
- Title: Publish, subscribe to events in cloud - Azure Event Grid IoT Edge | Microsoft Docs
-description: Publish, subscribe to events in cloud using Webhook with Event Grid on IoT Edge
----- Previously updated : 02/15/2022----
-# Tutorial: Publish, subscribe to events in cloud
-
-This article walks through all the steps needed to publish and subscribe to events using Event Grid on IoT Edge. This tutorial uses an Azure function as the event handler. For more destination types, see [event handlers](event-handlers.md).
-
-See [Event Grid Concepts](concepts.md) to understand what an Event Grid topic and subscription are before proceeding.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-
-## Prerequisites
-In order to complete this tutorial, you need:
-
-* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one.
-* **Azure IoT Hub and IoT Edge device** - Follow the steps in the quick start for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one.
-
-## Create an Azure function in the Azure portal
-
-Follow the steps outlined in the [tutorial](../../azure-functions/functions-get-started.md) to create an Azure function.
-
-Replace the code snippet with the following code:
-
-```csharp
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-using Newtonsoft.Json;
-
-public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
- dynamic data = JsonConvert.DeserializeObject(requestBody);
-
- log.LogInformation($"C# HTTP trigger received {data}.");
- return data != null
- ? (ActionResult)new OkResult()
- : new BadRequestObjectResult("Please pass in the request body");
-}
-```
-
-In your new function, select **Get function URL** at the top right, select default (**Function key**), and then select **Copy**. You use the function URL value later in the tutorial.
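-
-Before wiring the function up to Event Grid, you can optionally send a quick test request to confirm that it accepts a JSON body (a sketch; replace the placeholder with the function URL that you copied):
-
-```sh
-curl -H "Content-Type: application/json" -X POST -d '{"test":"hello"}' "<your-function-url-here>"
-```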
-
-> [!NOTE]
-> Refer to the [Azure Functions](../../azure-functions/functions-overview.md) documentation for more samples and tutorials on reacting to events and using Event Grid event triggers.
-
-## Create a topic
-
-As a publisher of an event, you need to create an Event Grid topic. A topic is an endpoint to which publishers send events.
-
-1. Create topic2.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "name": "sampleTopic2",
- "properties": {
- "inputschema": "eventGridSchema"
- }
- }
- ```
-1. Run the following command to create the topic. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @topic2.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2?api-version=2019-01-01-preview
- ```
-1. Run the following command to verify that the topic was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- [
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic2",
- "name": "sampleTopic2",
- "type