Updates from: 04/01/2023 01:17:57
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Concepts Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-custom-attributes.md
Azure AD supports adding custom data to resources using [extensions](/graph/exte
- [onPremisesExtensionAttributes](/graph/extensibility-overview?tabs=http#extension-attributes) are a set of 15 attributes that can store extended user string attributes.
- [Directory extensions](/graph/extensibility-overview?tabs=http#directory-azure-ad-extensions) allow the schema extension of specific directory objects, such as users and groups, with strongly typed attributes through registration with an application in the tenant.
-Both types of extensions can be configured By using Azure AD Connect for users who are managed on-premises, or MSGraph APIs for cloud-only users.
+Both types of extensions can be configured by using Azure AD Connect for users who are managed on-premises, or Microsoft Graph APIs for cloud-only users.
> [!NOTE]
> The following types of extensions aren't supported for synchronization:
->- Custom Security Attributes in Azure AD (Preview)
->- MSGraph Schema Extensions
->- MSGraph Open Extensions
+>- Custom security attributes in Azure AD (Preview)
+>- Microsoft Graph schema extensions
+>- Microsoft Graph open extensions
## Requirements
To check the backfilling status, click **Azure AD DS Health** and verify the **S
To configure onPremisesExtensionAttributes or directory extensions for cloud-only users in Azure AD, see [Custom data options in Microsoft Graph](/graph/extensibility-overview?tabs=http#custom-data-options-in-microsoft-graph).
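As an illustration, a Microsoft Graph request that sets one of the 15 extension attributes on a cloud-only user can look like this sketch (the user ID and attribute value are placeholders):

```http
PATCH https://graph.microsoft.com/v1.0/users/{user-id}
Content-Type: application/json

{
  "onPremisesExtensionAttributes": {
    "extensionAttribute1": "Contractor"
  }
}
```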
-To sync onPremisesExtensionAttributes or directory extensions from on-premises to Azure AD, [configure Azure AD Connect](../active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md).
+To sync onPremisesExtensionAttributes or directory extensions from on-premises to Azure AD, [configure Azure AD Connect](../active-directory/hybrid/how-to-connect-sync-feature-directory-extensions.md).
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Previously updated : 03/30/2023 Last updated : 03/31/2023
When the provisioning service is started, the first cycle will:
5. If a matching user is found, it's updated using the attributes provided by the source system. After the user account is matched, the provisioning service detects and caches the target system's ID for the new user. This ID is used to run all future operations on that user.
-6. If the attribute mappings contain "reference" attributes, the service does additional updates on the target system to create and link the referenced objects. For example, a user may have a "Manager" attribute in the target system, which is linked to another user created in the target system.
+6. If the attribute mappings contain "reference" attributes, the service does more updates on the target system to create and link the referenced objects. For example, a user may have a "Manager" attribute in the target system, which is linked to another user created in the target system.
7. Persist a watermark at the end of the initial cycle, which provides the starting point for the later incremental cycles.
After the initial cycle, all other cycles will:
5. If a matching user is found, it's updated using the attributes provided by the source system. If it's a newly assigned account that is matched, the provisioning service detects and caches the target system's ID for the new user. This ID is used to run all future operations on that user.
-6. If the attribute mappings contain "reference" attributes, the service does additional updates on the target system to create and link the referenced objects. For example, a user may have a "Manager" attribute in the target system, which is linked to another user created in the target system.
+6. If the attribute mappings contain "reference" attributes, the service does more updates on the target system to create and link the referenced objects. For example, a user may have a "Manager" attribute in the target system, which is linked to another user created in the target system.
7. If a user that was previously in scope for provisioning is removed from scope, including being unassigned, the service disables the user in the target system via an update.
After the initial cycle, all other cycles will:
> [!NOTE]
> You can optionally disable the **Create**, **Update**, or **Delete** operations by using the **Target object actions** check boxes in the [Mappings](customize-application-attributes.md) section. The logic to disable a user during an update is also controlled via an attribute mapping from a field such as *accountEnabled*.
-The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of the following events occurs:
+The provisioning service continues running back-to-back incremental cycles indefinitely, at intervals defined in the [tutorial specific to each application](../saas-apps/tutorial-list.md). Incremental cycles continue until one of the following events occurs:
- The service is manually stopped using the Azure portal, or using the appropriate Microsoft Graph API command.
-- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. This action clears any stored watermark and causes all source objects to be evaluated again. This won't break the links between source and target objects. To break the links use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with the following request:
+- A new initial cycle is triggered using the **Restart provisioning** option in the Azure portal, or using the appropriate Microsoft Graph API command. The action clears any stored watermark and causes all source objects to be evaluated again. Also, the action doesn't break the links between source and target objects. To break the links, use [Restart synchronizationJob](/graph/api/synchronization-synchronizationjob-restart?view=graph-rest-beta&tabs=http&preserve-view=true) with the following request:
<!-- { "blockType": "request",
Content-type: application/json
} ```
- A new initial cycle is triggered because of a change in attribute mappings or scoping filters. This action also clears any stored watermark and causes all source objects to be evaluated again.
-- The provisioning process goes into quarantine (see below) because of a high error rate, and stays in quarantine for more than four weeks. In this event, the service will be automatically disabled.
+- The provisioning process goes into quarantine (see example) because of a high error rate, and stays in quarantine for more than four weeks. In this event, the service will be automatically disabled.
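As a sketch, the restart request can look like the following (the service principal and synchronization job IDs are placeholders; a `resetScope` of `Full` clears all state, including the links between source and target objects):

```http
POST https://graph.microsoft.com/beta/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/restart
Content-Type: application/json

{
  "criteria": {
    "resetScope": "Full"
  }
}
```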
### Errors and retries
Confirm the mapping for *active* for your application. If you're using an applicat
**Configure your application to delete a user**
-The following scenarios will trigger a disable or a delete:
+The following scenarios trigger a disable or a delete:
* A user is soft deleted in Azure AD (sent to the recycle bin / AccountEnabled property set to false). 30 days after a user is deleted in Azure AD, they're permanently deleted from the tenant. At this point, the provisioning service sends a DELETE request to permanently delete the user in the application. At any time during the 30-day window, you can [manually delete a user permanently](../fundamentals/active-directory-users-restore.md), which sends a delete request to the application.
* A user is permanently deleted / removed from the recycle bin in Azure AD.
The following scenarios will trigger a disable or a delete:
By default, the Azure AD provisioning service soft deletes or disables users that go out of scope. If you want to override this default behavior, you can set a flag to [skip out-of-scope deletions.](skip-out-of-scope-deletions.md)
-If one of the above four events occurs and the target application doesn't support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
+If one of the four events occurs and the target application doesn't support soft deletes, the provisioning service will send a DELETE request to permanently delete the user from the app.
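As a rough illustration of what a SCIM-based application receives: a permanent deletion arrives as a `DELETE /Users/{id}` request, while a soft delete arrives as a PATCH that sets `active` to `false`. A sketch of the soft-delete request, with a placeholder host and user ID:

```http
PATCH https://scim.contoso.com/scim/v2/Users/{user-id}
Content-Type: application/scim+json

{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
  "Operations": [
    { "op": "Replace", "path": "active", "value": false }
  ]
}
```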
If you see an attribute IsSoftDeleted in your attribute mappings, it's used to determine the state of the user and whether to send an update request with active = false to soft delete the user.
**Deprovisioning events**
-The following table describes how you can configure deprovisioning actions with the Azure AD provisioning service. These rules are written with the non-gallery / custom application in mind, but generally apply to applications in the gallery. However, the behavior for gallery applications can differ as they have been optimized to meet the needs of the application. For example, the Azure AD provisioning service may always sende a request to hard delete users in certain applications rather than soft deleting, if the target application doesn't support soft deleting users.
+The following table describes how you can configure deprovisioning actions with the Azure AD provisioning service. These rules are written with the non-gallery / custom application in mind, but generally apply to applications in the gallery. However, the behavior for gallery applications can differ as they've been optimized to meet the needs of the application. For example, the Azure AD provisioning service may always send a request to hard delete users in certain applications rather than soft deleting, if the target application doesn't support soft deleting users.
|Scenario|How to configure in Azure AD|
|--|--|
The following table describes how you can configure deprovisioning actions with
**Known limitations**
-* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app we will send a disable request. At that point, the user isn't managed by the service and we won't send a delete request when they're deleted from the directory.
+* If a user that was previously managed by the provisioning service is unassigned from an app, or from a group assigned to an app, then a disable request is sent. At that point, the user isn't managed by the service and a delete request isn't sent when the user is deleted from the directory.
* Provisioning a user that is disabled in Azure AD isn't supported. They must be active in Azure AD before they're provisioned.
* When a user goes from soft-deleted to active, the Azure AD provisioning service will activate the user in the target app, but won't automatically restore the group memberships. The target application should maintain the group memberships for the user in inactive state. If the target application doesn't support this, you can restart provisioning to update the group memberships.
active-directory Concept System Preferred Multifactor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-system-preferred-multifactor-authentication.md
description: Learn how to use system-preferred multifactor authentication
Previously updated : 03/22/2023 Last updated : 03/31/2023
Content-Type: application/json
} ```
-## Known issues
+## Known issue
-- [FIDO2 security key isn't supported on mobile devices](../develop/support-fido2-authentication.md#mobile). This issue might surface when system-preferred MFA is enabled. Until a fix is available, we recommend not using FIDO2 security keys on mobile devices.
+[FIDO2 security keys](../develop/support-fido2-authentication.md#mobile) on mobile devices and [registration for certificate-based authentication (CBA)](concept-certificate-based-authentication.md) aren't supported due to an issue that might surface when system-preferred MFA is enabled. Until a fix is available, we recommend not using FIDO2 security keys on mobile devices or registering for CBA. To disable system-preferred MFA for these users, you can either add them to an excluded group or remove them from an included group.
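For example, one way to exclude a group is a sketch like the following Microsoft Graph request (the group ID is a placeholder, and the exact shape of `systemCredentialPreferences` should be verified against the current API reference):

```http
PATCH https://graph.microsoft.com/beta/policies/authenticationMethodsPolicy
Content-Type: application/json

{
  "systemCredentialPreferences": {
    "state": "enabled",
    "excludeTargets": [
      { "id": "{group-id}", "targetType": "group" }
    ]
  }
}
```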
## Common questions
active-directory How To Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md
If you aren't using SSPR and aren't yet using the Authentication methods policy,
### Review the legacy MFA policy
-Start by documenting which methods are available in the legacy MFA policy. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator). Go to **Azure Active Directory** > **Security** > **Multifactor Authentication** > **Additional cloud-based multifactor authentication settings** to view the settings. These settings are tenant-wide, so there's no need for user or group information.
+Start by documenting which methods are available in the legacy MFA policy. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator). Go to **Azure Active Directory** > **Users** > **All users** > **Per-user MFA** > **service settings** to view the settings. These settings are tenant-wide, so there's no need for user or group information.
:::image type="content" border="false" source="media/how-to-authentication-methods-manage/legacy-mfa-policy.png" alt-text="Screenshot the shows the legacy Azure AD MFA policy." lightbox="media/how-to-authentication-methods-manage/legacy-mfa-policy.png":::
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
To enable the certificate-based authentication and configure user bindings in th
1. To delete a CA certificate, select the certificate and click **Delete**.
1. Click **Columns** to add or delete columns.
-### Configure certification authorities using PowerShell
+>[!NOTE]
+>Uploading a new CA fails when any of the existing CAs are expired. The tenant admin should delete the expired CAs, and then upload the new CA.
+
+### Configure certification authorities (CA) using PowerShell
Only one CRL Distribution Point (CDP) for a trusted CA is supported, and the CDP can be only an HTTP URL. Online Certificate Status Protocol (OCSP) or Lightweight Directory Access Protocol (LDAP) URLs aren't supported.
Only one CRL Distribution Point (CDP) for a trusted CA is supported. The CDP can
[!INCLUDE [Get-AzureAD](../../../includes/active-directory-authentication-get-trusted-azuread.md)]
### Add
+>[!NOTE]
+>Uploading a new CA fails when any of the existing CAs are expired. The tenant admin should delete the expired CAs, and then upload the new CA.
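Equivalently, the add operation can be sketched as a Microsoft Graph request instead of PowerShell (the organization ID, certificate value, and CRL URL are placeholders, and the exact resource shape is an assumption to verify against the Graph reference):

```http
POST https://graph.microsoft.com/v1.0/organization/{organization-id}/certificateBasedAuthConfiguration
Content-Type: application/json

{
  "certificateAuthorities": [
    {
      "isRootAuthority": true,
      "certificate": "{base64-encoded-CA-certificate}",
      "certificateRevocationListUrl": "http://crl.contoso.com/root.crl"
    }
  ]
}
```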
+ [!INCLUDE [New-AzureAD](../../../includes/active-directory-authentication-new-trusted-azuread.md)]
**AuthorityType**
active-directory Product Privileged Role Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-privileged-role-insights.md
+
+ Title: View privileged role assignments in Azure AD Insights
+description: How to view current privileged role assignments in the Azure AD Insights tab.
+Last updated : 03/31/2023
+# View privileged role assignments in your organization (Preview)
+
+The **Azure AD Insights** tab shows you who is assigned to privileged roles in your organization. You can review a list of identities assigned to a privileged role and learn more about each identity.
+
+> [!NOTE]
+> Microsoft recommends that you keep two break glass accounts permanently assigned to the global administrator role. Make sure that these accounts don't require the same multi-factor authentication mechanism as other administrative accounts to sign in. This is described further in [Manage emergency access accounts in Microsoft Entra](../roles/security-emergency-access.md).
+
+> [!NOTE]
+> Keep role assignments permanent if a user has an additional Microsoft account (for example, an account they use to sign in to Microsoft services like Skype, or Outlook.com). If you require multi-factor authentication to activate a role assignment, a user with an additional Microsoft account will be locked out.
+
+## View information in the Azure AD Insights tab
+
+1. From the Permissions Management home page, select the **Azure AD Insights** tab.
+2. Select **Review global administrators** to review the list of Global administrator role assignments.
+3. Select **Review highly privileged roles** or **Review service principals** to review information on principal role assignments for the following roles: *Application administrator*, *Cloud Application administrator*, *Exchange administrator*, *Intune administrator*, *Privileged role administrator*, *SharePoint administrator*, *Security administrator*, *User administrator*.
+## Next steps
+
+- For information about managing roles, policies, and permission requests in your organization, see [View roles/policies and requests for permission in the Remediation dashboard](ui-remediation.md).
active-directory Troubleshoot Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md
Previously updated : 08/16/2022 Last updated : 03/31/2023
Organizations should avoid the following configurations:
The first way is to review the error message that appears. For problems signing in when using a web browser, the error page itself has detailed information. This information alone may describe what the problem is and that may suggest a solution.
-![Sign in error - compliant device required](./media/troubleshoot-conditional-access/image1.png)
+![Screenshot showing a sign in error where a compliant device is required.](./media/troubleshoot-conditional-access/image1.png)
In the above error, the message states that the application can only be accessed from devices or client applications that meet the company's mobile device management policy. In this case, the application and device don't meet that policy.
In the above error, the message states that the application can only be accessed
The second method to get detailed information about the sign-in interruption is to review the Azure AD sign-in events to see which Conditional Access policy or policies were applied and why.
-More information can be found about the problem by clicking **More Details** in the initial error page. Clicking **More Details** will reveal troubleshooting information that is helpful when searching the Azure AD sign-in events for the specific failure event the user saw or when opening a support incident with Microsoft.
+More information can be found about the problem by clicking **More Details** in the initial error page. Clicking **More Details** reveals troubleshooting information that is helpful when searching the Azure AD sign-in events for the specific failure event the user saw or when opening a support incident with Microsoft.
-![More details from a Conditional Access interrupted web browser sign-in.](./media/troubleshoot-conditional-access/image2.png)
+![Screenshot showing more details from a Conditional Access interrupted web browser sign-in.](./media/troubleshoot-conditional-access/image2.png)
To find out which Conditional Access policy or policies applied and why, do the following.
To find out which Conditional Access policy or policies applied and why do the f
1. **Username** to see information related to specific users.
1. **Date** scoped to the time frame in question.
- ![Selecting the Conditional access filter in the sign-ins log](./media/troubleshoot-conditional-access/image3.png)
+ ![Screenshot showing selecting the Conditional access filter in the sign-ins log.](./media/troubleshoot-conditional-access/image3.png)
-1. Once the sign-in event that corresponds to the user's sign-in failure has been found select the **Conditional Access** tab. The Conditional Access tab will show the specific policy or policies that resulted in the sign-in interruption.
+1. Once the sign-in event that corresponds to the user's sign-in failure has been found select the **Conditional Access** tab. The Conditional Access tab shows the specific policy or policies that resulted in the sign-in interruption.
1. Information in the **Troubleshooting and support** tab may provide a clear reason why a sign-in failed, such as a device that didn't meet compliance requirements.
- 1. To investigate further, drill down into the configuration of the policies by clicking on the **Policy Name**. Clicking the **Policy Name** will show the policy configuration user interface for the selected policy for review and editing.
+ 1. To investigate further, drill down into the configuration of the policies by clicking on the **Policy Name**. Clicking the **Policy Name** shows the policy configuration user interface for the selected policy for review and editing.
1. The **client user** and **device details** that were used for the Conditional Access policy assessment are also available in the **Basic Info**, **Location**, **Device Info**, **Authentication Details**, and **Additional Details** tabs of the sign-in event.
### Policy not working as intended
Selecting the ellipsis on the right side of the policy in a sign-in event brings up policy details. This option gives administrators additional information about why a policy was successfully applied or not.
- ![Sign in event Conditional Access tab](./media/troubleshoot-conditional-access/image5.png)
-
- ![Policy details (preview)](./media/troubleshoot-conditional-access/policy-details.png)
The left side provides details collected at sign-in and the right side provides details of whether those details satisfy the requirements of the applied Conditional Access policies. Conditional Access policies only apply when all conditions are satisfied or not configured. If the information in the event isn't enough to understand the sign-in results, or adjust the policy to get desired results, the sign-in diagnostic tool can be used. The sign-in diagnostic can be found under **Basic info** > **Troubleshoot Event**. For more information about the sign-in diagnostic, see the article [What is the sign-in diagnostic in Azure AD](../reports-monitoring/overview-sign-in-diagnostics.md). You can also [use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md).
-If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information will allow Microsoft support to find the specific event you're concerned about.
+If you need to submit a support incident, provide the request ID and time and date from the sign-in event in the incident submission details. This information allows Microsoft support to find the specific event you're concerned about.
### Common Conditional Access error codes
More information about error codes can be found in the article [Azure AD Authent
## Service dependencies
-In some specific scenarios, users are blocked because there are cloud apps with dependencies on resources that are blocked by Conditional Access policy.
+In some specific scenarios, users are blocked because there are cloud apps with dependencies on resources blocked by Conditional Access policy.
To determine the service dependency, check the sign-ins log for the application and resource called by the sign-in. In the following screenshot, the application called is **Azure Portal** but the resource called is **Windows Azure Service Management API**. To target this scenario appropriately all the applications and resources should be similarly combined in Conditional Access policy.
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Previously updated : 12/28/2022 Last updated : 03/29/2023
# Microsoft identity platform access tokens
-Access tokens enable clients to securely call protected web APIs. Access tokens are used by web APIs to perform authentication and authorization.
+Access tokens enable clients to securely call protected web APIs. Web APIs use access tokens to perform authentication and authorization.
-Per the OAuth specification, access tokens are opaque strings without a set format. Some identity providers (IDPs) use GUIDs and others use encrypted blobs. The format of the access token can depend on how the API that accepts the token is configured.
+Per the OAuth specification, access tokens are opaque strings without a set format. Some identity providers (IDPs) use GUIDs and others use encrypted blobs. The format of the access token can depend on the configuration of the API that accepts it.
-Custom APIs registered by developers on the Microsoft identity platform can choose from two different formats of JSON Web Tokens (JWTs) called `v1.0` and `v2.0`. Microsoft-developed APIs like Microsoft Graph or APIs in Azure have other proprietary token formats. These proprietary formats might be encrypted tokens, JWTs, or special JWT-like tokens that won't validate.
+Custom APIs registered by developers on the Microsoft identity platform can choose from two different formats of JSON Web Tokens (JWTs) called `v1.0` and `v2.0`. Microsoft-developed APIs like Microsoft Graph or APIs in Azure have other proprietary token formats. These proprietary formats might be encrypted tokens, JWTs, or special JWT-like tokens that can't be validated.
-Clients must treat access tokens as opaque strings because the contents of the token are intended for the API only. For validation and debugging purposes *only*, developers can decode JWTs using a site like [jwt.ms](https://jwt.ms). Tokens that are received for a Microsoft API might not always be a JWT and can't always be decoded.
+The contents of the token are intended only for the API, which means that access tokens must be treated as opaque strings. For validation and debugging purposes *only*, developers can decode JWTs using a site like [jwt.ms](https://jwt.ms). Tokens that a Microsoft API receives might not always be a JWT that can be decoded.
-For details on what's inside the access token, clients should use the token response data that's returned with the access token to the client. When the client requests an access token, the Microsoft identity platform also returns some metadata about the access token for the consumption of the application. This information includes the expiry time of the access token and the scopes for which it's valid. This data allows the application to do intelligent caching of access tokens without having to parse the access token itself.
+Clients should use the token response data that's returned with the access token for details on what's inside it. When the client requests an access token, the Microsoft identity platform also returns some metadata about the access token for the consumption of the application. This information includes the expiry time of the access token and the scopes for which it's valid. This data allows the application to do intelligent caching of access tokens without having to parse the access token itself.
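For example, a v2.0 token response that carries this metadata looks roughly like the following (values are illustrative):

```json
{
  "token_type": "Bearer",
  "scope": "https://graph.microsoft.com/User.Read",
  "expires_in": 3599,
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIs..."
}
```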
See the following sections to learn how an API can validate and use the claims inside an access token.
There are two versions of access tokens available in the Microsoft identity plat
Web APIs have one of the following versions selected as a default during registration:
-- v1.0 for Azure AD-only applications. The following example shows a v1.0 token (this token example won't validate because the keys have rotated prior to publication and personal information has been removed):
+- v1.0 for Azure AD-only applications. The following example shows a v1.0 token (the keys are changed and personal information is removed, which prevents token validation):
``` eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSIsImtpZCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSJ9.eyJhdWQiOiJlZjFkYTlkNC1mZjc3LTRjM2UtYTAwNS04NDBjM2Y4MzA3NDUiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC9mYTE1ZDY5Mi1lOWM3LTQ0NjAtYTc0My0yOWYyOTUyMjIyOS8iLCJpYXQiOjE1MzcyMzMxMDYsIm5iZiI6MTUzNzIzMzEwNiwiZXhwIjoxNTM3MjM3MDA2LCJhY3IiOiIxIiwiYWlvIjoiQVhRQWkvOElBQUFBRm0rRS9RVEcrZ0ZuVnhMaldkdzhLKzYxQUdyU091TU1GNmViYU1qN1hPM0libUQzZkdtck95RCtOdlp5R24yVmFUL2tES1h3NE1JaHJnR1ZxNkJuOHdMWG9UMUxrSVorRnpRVmtKUFBMUU9WNEtjWHFTbENWUERTL0RpQ0RnRTIyMlRJbU12V05hRU1hVU9Uc0lHdlRRPT0iLCJhbXIiOlsid2lhIl0sImFwcGlkIjoiNzVkYmU3N2YtMTBhMy00ZTU5LTg1ZmQtOGMxMjc1NDRmMTdjIiwiYXBwaWRhY3IiOiIwIiwiZW1haWwiOiJBYmVMaUBtaWNyb3NvZnQuY29tIiwiZmFtaWx5X25hbWUiOiJMaW5jb2xuIiwiZ2l2ZW5fbmFtZSI6IkFiZSAoTVNGVCkiLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC83MmY5ODhiZi04NmYxLTQxYWYtOTFhYi0yZDdjZDAxMjIyNDcvIiwiaXBhZGRyIjoiMjIyLjIyMi4yMjIuMjIiLCJuYW1lIjoiYWJlbGkiLCJvaWQiOiIwMjIyM2I2Yi1hYTFkLTQyZDQtOWVjMC0xYjJiYjkxOTQ0MzgiLCJyaCI6IkkiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJzdWIiOiJsM19yb0lTUVUyMjJiVUxTOXlpMmswWHBxcE9pTXo1SDNaQUNvMUdlWEEiLCJ0aWQiOiJmYTE1ZDY5Mi1lOWM3LTQ0NjAtYTc0My0yOWYyOTU2ZmQ0MjkiLCJ1bmlxdWVfbmFtZSI6ImFiZWxpQG1pY3Jvc29mdC5jb20iLCJ1dGkiOiJGVnNHeFlYSTMwLVR1aWt1dVVvRkFBIiwidmVyIjoiMS4wIn0.D3H6pMUtQnoJAGq6AHd ``` -- v2.0 for applications that support consumer accounts. The following example shows a v2.0 token (this token example won't validate because the keys have rotated prior to publication and personal information has been removed):
+- v2.0 for applications that support consumer accounts. The following example shows a v2.0 token (the keys are changed and personal information is removed, which prevents token validation):
``` eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSJ9.eyJhdWQiOiI2ZTc0MTcyYi1iZTU2LTQ4NDMtOWZmNC1lNjZhMzliYjEyZTMiLCJpc3MiOiJodHRwczovL2xvZ2luLm1pY3Jvc29mdG9ubGluZS5jb20vNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3L3YyLjAiLCJpYXQiOjE1MzcyMzEwNDgsIm5iZiI6MTUzNzIzMTA0OCwiZXhwIjoxNTM3MjM0OTQ4LCJhaW8iOiJBWFFBaS84SUFBQUF0QWFaTG8zQ2hNaWY2S09udHRSQjdlQnE0L0RjY1F6amNKR3hQWXkvQzNqRGFOR3hYZDZ3TklJVkdSZ2hOUm53SjFsT2NBbk5aY2p2a295ckZ4Q3R0djMzMTQwUmlvT0ZKNGJDQ0dWdW9DYWcxdU9UVDIyMjIyZ0h3TFBZUS91Zjc5UVgrMEtJaWpkcm1wNjlSY3R6bVE9PSIsImF6cCI6IjZlNzQxNzJiLWJlNTYtNDg0My05ZmY0LWU2NmEzOWJiMTJlMyIsImF6cGFjciI6IjAiLCJuYW1lIjoiQWJlIExpbmNvbG4iLCJvaWQiOiI2OTAyMjJiZS1mZjFhLTRkNTYtYWJkMS03ZTRmN2QzOGU0NzQiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhYmVsaUBtaWNyb3NvZnQuY29tIiwicmgiOiJJIiwic2NwIjoiYWNjZXNzX2FzX3VzZXIiLCJzdWIiOiJIS1pwZmFIeVdhZGVPb3VZbGl0anJJLUtmZlRtMjIyWDVyclYzeERxZktRIiwidGlkIjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3IiwidXRpIjoiZnFpQnFYTFBqMGVRYTgyUy1JWUZBQSIsInZlciI6IjIuMCJ9.pj4N-w_3Us9DrBLfpCt ```
-The version can be set for applications by providing the appropriate value to the `accessTokenAcceptedVersion` setting in the [app manifest](reference-app-manifest.md#manifest-reference). The values of `null` and `1` result in v1.0 tokens, and the value of `2` results in v2.0 tokens.
+Set the version for applications by providing the appropriate value to the `accessTokenAcceptedVersion` setting in the [app manifest](reference-app-manifest.md#manifest-reference). The values of `null` and `1` result in v1.0 tokens, and the value of `2` results in v2.0 tokens.
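For example, a fragment of the app manifest that opts an API into v2.0 access tokens:

```json
{
  "accessTokenAcceptedVersion": 2
}
```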
## Token ownership
-Two parties are involved in an access token request: the client, who requests the token, and the resource (Web API) that accepts the token. The `aud` claim in a token indicates the resource that the token is intended for (its *audience*). Clients use the token but shouldn't understand or attempt to parse it. Resources accept the token.
+An access token request involves two parties: the client, who requests the token, and the resource (Web API) that accepts the token. The resource that the token is intended for (its *audience*) is defined in the `aud` claim in a token. Clients use the token but shouldn't understand or attempt to parse it. Resources accept the token.
-The Microsoft identity platform supports issuing any token version from any version endpoint - they aren't related. When `accessTokenAcceptedVersion` is set to `2`, a client calling the v1.0 endpoint to get a token for that resource receives a v2.0 access token.
+The Microsoft identity platform supports issuing any token version from any version endpoint. For example, when the value of `accessTokenAcceptedVersion` is `2`, a client calling the v1.0 endpoint to get a token for that resource receives a v2.0 access token.
Resources always own their tokens using the `aud` claim and are the only applications that can change their token details.
## Claims in access tokens
-JWTs are split into three pieces:
+JWTs contain the following pieces:
-- **Header** - Provides information about how to validate the token including information about the type of token and how it was signed.
+- **Header** - Provides information about how to validate the token including information about the type of token and its signing method.
- **Payload** - Contains all of the important data about the user or application that's attempting to call the service.
- **Signature** - Is the raw material used to validate the token.
Each piece is separated by a period (`.`) and separately Base64 encoded.
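For example, Base64-decoding the header of the v1.0 sample token shown earlier yields:

```json
{
  "typ": "JWT",
  "alg": "RS256",
  "x5t": "i6lGk3FZzxRcUb2C3nEQ7syHJlY",
  "kid": "i6lGk3FZzxRcUb2C3nEQ7syHJlY"
}
```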
Claims are present only if a value exists to fill them. An application shouldn't take a dependency on a claim being present. Examples include `pwd_exp` (not every tenant requires passwords to expire) and `family_name` ([client credential](v2-oauth2-client-creds-grant-flow.md) flows are on behalf of applications that don't have names). Claims used for access token validation are always present.
-Some claims are used to help the Microsoft identity platform secure tokens for reuse. These claims are marked as not being for public consumption in the description as `Opaque`. These claims may or may not appear in a token, and new ones may be added without notice.
+The Microsoft identity platform uses some claims to help secure tokens for reuse. The description of `Opaque` marks these claims as not being for public consumption. These claims may or may not appear in a token, and new ones may be added without notice.
### Header claims
| Claim | Format | Description |
|-|--|-|
| `typ` | String - always `JWT` | Indicates that the token is a JWT.|
-| `alg` | String | Indicates the algorithm that was used to sign the token, for example, `RS256`. |
-| `kid` | String | Specifies the thumbprint for the public key that can be used to validate this signature of the token. Emitted in both v1.0 and v2.0 access tokens. |
+| `alg` | String | Indicates the algorithm used to sign the token, for example, `RS256`. |
+| `kid` | String | Specifies the thumbprint for the public key used for validating the signature of the token. Emitted in both v1.0 and v2.0 access tokens. |
| `x5t` | String | Functions the same (in use and value) as `kid`. `x5t` is a legacy claim emitted only in v1.0 access tokens for compatibility purposes. |
### Payload claims
-| Claim | Format | Description | Authorization considerations |
-|-|--|-||
-| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. | This value must be validated, reject the token if the value doesn't match the intended audience. |
-| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant in which the user was authenticated. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. | The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
-|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. If the claim isn't present, the value of `iss` can be used instead. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. | |
-| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. | |
-| `nbf` | int, a Unix timestamp | Specifies the time before which the JWT must not be accepted for processing. | |
-| `exp` | int, a Unix timestamp | Specifies the expiration time on or after which the JWT must not be accepted for processing. A resource may reject the token before this time as well. The rejection can occur when a change in authentication is required or a token revocation has been detected. | |
-| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. | |
-| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. | |
-| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies how the subject of the token was authenticated. | |
-| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `appid` may be used in authorization decisions. |
-| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. | `azp` may be used in authorization decisions. |
-| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates how the client was authenticated. For a public client, the value is `0`. If client ID and client secret are used, the value is `1`. If a client certificate was used for authentication, the value is `2`. | |
-| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates how the client was authenticated. For a public client, the value is `0`. If client ID and client secret are used, the value is `1`. If a client certificate was used for authentication, the value is `2`. | |
-| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. The value is mutable and might change over time. The value can be used for username hints, however, and in human-readable UI as a username. The `profile` scope is required in order to receive this claim. | Since this value is mutable, it must not be used to make authorization decisions. |
-| `name` | String | Provides a human-readable value that identifies the subject of the token. The value isn't guaranteed to be unique, it's mutable, and is only used for display purposes. The `profile` scope is required in order to receive this claim. | This value must not be used to make authorization decisions. |
-| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. Only included for user tokens. | The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. |
-| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. For application tokens, this set of permissions is used during the [client credential flow](v2-oauth2-client-creds-grant-flow.md) in place of user scopes. For user tokens, this set of values is populated with the roles the user was assigned to on the target application. | These values can be used for managing access, such as enforcing authorization to access a resource. |
-| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). This claim is configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). Setting it to `All` or `DirectoryRole` is required. May not be present in tokens obtained through the implicit flow due to token length concerns. | These values can be used for managing access, such as enforcing authorization to access a resource. |
-| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. The groups included in the groups claim are configured on a per-application basis, through the `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md). A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. | These values can be used for managing access, such as enforcing authorization to access a resource. |
-| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. | |
-| `groups:src1` | JSON object | For token requests that aren't length limited (see `hasgroups`) but still too large for the token, a link to the full groups list for the user is included. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources`: `"src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` | |
-| `sub` | String | The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. The subject is a pairwise identifier that is unique to a particular application ID. If a single user signs into two different applications using two different client IDs, those applications receive two different values for the subject claim. Two different values may or may not be desired depending on architecture and privacy requirements. See also the `oid` claim (which does remain the same across applications within a tenant). | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
-| `oid` | String, a GUID | The immutable identifier for the requestor, which is the user or service principal whose identity has been verified. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, the `profile` scope is required in order to receive this claim for users. If a single user exists in multiple tenants, the user contains a different object ID in each tenant. The accounts are considered different, even though the user logs into each account with the same credentials. | This value can be used to perform authorization checks, such as when the token is used to access a resource, and can be used as a key in database tables. |
-| `tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. | This value should be considered in combination with other claims in authorization decisions. |
-| `unique_name` | String, only present in v1.0 tokens | Provides a human readable value that identifies the subject of the token. | This value isn't guaranteed to be unique within a tenant and should be used only for display purposes. |
-| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. | |
-| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. | |
-| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. | |
+| Claim | Format | Description |
+|-|--|-|
+| `aud` | String, an Application ID URI or GUID | Identifies the intended audience of the token. The API must validate this value and reject the token if the value doesn't match. In v2.0 tokens, this value is always the client ID of the API. In v1.0 tokens, it can be the client ID or the resource URI used in the request. The value can depend on how the client requested the token. |
+| `iss` | String, a security token service (STS) URI | Identifies the STS that constructs and returns the token, and the Azure AD tenant of the authenticated user. If the token issued is a v2.0 token (see the `ver` claim), the URI ends in `/v2.0`. The GUID that indicates that the user is a consumer user from a Microsoft account is `9188040d-6c67-4c5b-b112-36a304b66dad`. The application can use the GUID portion of the claim to restrict the set of tenants that can sign in to the application, if applicable. |
+|`idp`| String, usually an STS URI | Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account isn't in the same tenant as the issuer, such as guests. Use the value of `iss` if the claim isn't present. For personal accounts being used in an organizational context (for instance, a personal account invited to an Azure AD tenant), the `idp` claim may be 'live.com' or an STS URI containing the Microsoft account tenant `9188040d-6c67-4c5b-b112-36a304b66dad`. |
+| `iat` | int, a Unix timestamp | Specifies when the authentication for this token occurred. |
+| `nbf` | int, a Unix timestamp | Specifies the time after which the JWT can be processed. |
+| `exp` | int, a Unix timestamp | Specifies the expiration time on or after which the JWT must not be accepted for processing. A resource may reject the token before this time as well. The rejection can occur for a required change in authentication or when a token is revoked. |
+| `aio` | Opaque String | An internal claim used by Azure AD to record data for token reuse. Resources shouldn't use this claim. |
+| `acr` | String, a `0` or `1`, only present in v1.0 tokens | A value of `0` for the "Authentication context class" claim indicates the end-user authentication didn't meet the requirements of ISO/IEC 29115. |
+| `amr` | JSON array of strings, only present in v1.0 tokens | Identifies the authentication method of the subject of the token. |
+| `appid` | String, a GUID, only present in v1.0 tokens | The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. |
+| `azp` | String, a GUID, only present in v2.0 tokens | A replacement for `appid`. The application ID of the client using the token. The application can act as itself or on behalf of a user. The application ID typically represents an application object, but it can also represent a service principal object in Azure AD. |
+| `appidacr` | String, a `0`, `1`, or `2`, only present in v1.0 tokens | Indicates the authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. |
+| `azpacr` | String, a `0`, `1`, or `2`, only present in v2.0 tokens | A replacement for `appidacr`. Indicates the authentication method of the client. For a public client, the value is `0`. When you use the client ID and client secret, the value is `1`. When you use a client certificate for authentication, the value is `2`. |
+| `preferred_username` | String, only present in v2.0 tokens. | The primary username that represents the user. The value could be an email address, phone number, or a generic username without a specified format. The value is mutable and might change over time. Since the value is mutable, don't use it to make authorization decisions. Use the value for username hints and in human-readable UI as a username. To receive this claim, use the `profile` scope. |
+| `name` | String | Provides a human-readable value that identifies the subject of the token. The value can vary, it's mutable, and is for display purposes only. To receive this claim, use the `profile` scope. |
+| `scp` | String, a space separated list of scopes | The set of scopes exposed by the application for which the client application has requested (and received) consent. The application should verify that these scopes are valid ones exposed by the application, and make authorization decisions based on the value of these scopes. Only included for user tokens. |
+| `roles` | Array of strings, a list of permissions | The set of permissions exposed by the application that the requesting application or user has been given permission to call. The [client credential flow](v2-oauth2-client-creds-grant-flow.md) uses this set of permissions in place of user scopes for application tokens. For user tokens, this set of values contains the assigned roles of the user on the target application. |
+| `wids` | Array of [RoleTemplateID](../roles/permissions-reference.md#all-roles) GUIDs | Denotes the tenant-wide roles assigned to this user, from the section of roles present in [Azure AD built-in roles](../roles/permissions-reference.md#all-roles). The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures this claim on a per-application basis. Set the claim to `All` or `DirectoryRole`. May not be present in tokens obtained through the implicit flow due to token length concerns. |
+| `groups` | JSON array of GUIDs | Provides object IDs that represent the group memberships of the subject. Safely use these unique values for managing access, such as enforcing authorization to access a resource. The `groupMembershipClaims` property of the [application manifest](reference-app-manifest.md) configures the groups claim on a per-application basis. A value of `null` excludes all groups, a value of `SecurityGroup` includes only Active Directory Security Group memberships, and a value of `All` includes both Security Groups and Microsoft 365 Distribution Lists. <br><br>See the `hasgroups` claim for details on using the `groups` claim with the implicit grant. For other flows, if the number of groups the user is in goes over 150 for SAML and 200 for JWT, then Azure AD adds an overage claim to the claim sources. The claim sources point to the Microsoft Graph endpoint that contains the list of groups for the user. |
+| `hasgroups` | Boolean | If present, always `true`, indicates whether the user is in at least one group. Used in place of the `groups` claim for JWTs in implicit grant flows if the full groups claim would extend the URI fragment beyond the URL length limits (currently six or more groups). Indicates that the client should use the Microsoft Graph API to determine the groups (`https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects`) of the user. |
+| `groups:src1` | JSON object | Includes a link to the full groups list for the user when token requests are too large for the token. For JWTs as a distributed claim, for SAML as a new claim in place of the `groups` claim. <br><br>**Example JWT Value**: <br> `"groups":"src1"` <br> `"_claim_sources`: `"src1" : { "endpoint" : "https://graph.microsoft.com/v1.0/users/{userID}/getMemberObjects" }` |
+| `sub` | String | The principal associated with the token. For example, the user of an application. This value is immutable; don't reassign or reuse it. Use it to perform authorization checks safely, such as when the token is used to access a resource, and as a key in database tables. Because the subject is always present in the tokens that Azure AD issues, use this value in a general-purpose authorization system. The subject is a pairwise identifier that's unique to a particular application ID. If a single user signs into two different applications using two different client IDs, those applications receive two different values for the subject claim. Whether the two different values are desired depends on architecture and privacy requirements. See also the `oid` claim, which does remain the same across applications within a tenant. |
+| `oid` | String, a GUID | The immutable identifier for the requestor, which is the verified identity of the user or service principal. Use this value to also perform authorization checks safely and as a key in database tables. This ID uniquely identifies the requestor across applications. Two different applications signing in the same user receive the same value in the `oid` claim. The `oid` can be used when making queries to Microsoft online services, such as the Microsoft Graph. The Microsoft Graph returns this ID as the `id` property for a given user account. Because the `oid` allows multiple applications to correlate principals, the `profile` scope is required to receive this claim for users. If a single user exists in multiple tenants, the user contains a different object ID in each tenant. Even though the user logs into each account with the same credentials, the accounts are different. |
+|`tid` | String, a GUID | Represents the tenant that the user is signing in to. For work and school accounts, the GUID is the immutable tenant ID of the organization that the user is signing in to. For sign-ins to the personal Microsoft account tenant (services like Xbox, Teams for Life, or Outlook), the value is `9188040d-6c67-4c5b-b112-36a304b66dad`. To receive this claim, the application must request the `profile` scope. |
+| `unique_name` | String, only present in v1.0 tokens | Provides a human readable value that identifies the subject of the token. This value isn't guaranteed to be unique within a tenant; use it only for display purposes. |
+| `uti` | String | Token identifier claim, equivalent to `jti` in the JWT specification. Unique, per-token identifier that is case-sensitive. |
+| `rh` | Opaque String | An internal claim used by Azure to revalidate tokens. Resources shouldn't use this claim. |
+| `ver` | String, either `1.0` or `2.0` | Indicates the version of the access token. |
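As a worked example, Base64-decoding the payload of the v1.0 sample token shown earlier yields an excerpt like the following:

```json
{
  "aud": "ef1da9d4-ff77-4c3e-a005-840c3f830745",
  "iss": "https://sts.windows.net/fa15d692-e9c7-4460-a743-29f29522229/",
  "iat": 1537233106,
  "nbf": 1537233106,
  "exp": 1537237006,
  "appid": "75dbe77f-10a3-4e59-85fd-8c127544f17c",
  "scp": "user_impersonation",
  "tid": "fa15d692-e9c7-4460-a743-29f2956fd429",
  "ver": "1.0"
}
```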
#### Groups overage claim
Use the `BulkCreateGroups.ps1` provided in the [App Creation Scripts](https://gi
#### v1.0 basic claims
-The following claims are included in v1.0 tokens if applicable, but aren't included in v2.0 tokens by default. To use these claims for v2.0, the application requests them using [optional claims](active-directory-optional-claims.md).
+v1.0 tokens include the following claims if applicable, but v2.0 tokens don't include them by default. To use these claims for v2.0, the application requests them using [optional claims](active-directory-optional-claims.md).
| Claim | Format | Description | |-|--|-| | `ipaddr`| String | The IP address the user authenticated from. | | `onprem_sid`| String, in [SID format](/windows/desktop/SecAuthZ/sid-components) | In cases where the user has an on-premises authentication, this claim provides their SID. Use this claim for authorization in legacy applications. | | `pwd_exp`| int, a Unix timestamp | Indicates when the user's password expires. |
-| `pwd_url`| String | A URL where users can be sent to reset their password. |
-| `in_corp`| boolean | Signals if the client is signing in from the corporate network. If they aren't, the claim isn't included. |
+| `pwd_url`| String | A URL where users can reset their password. |
+| `in_corp`| boolean | Signals if the client is signing in from the corporate network. |
| `nickname`| String | Another name for the user, separate from first or last name.| | `family_name` | String | Provides the last name, surname, or family name of the user as defined on the user object. | | `given_name` | String | Provides the first or given name of the user, as set on the user object. |
-| `upn` | String | The username of the user. May be a phone number, email address, or unformatted string. Should only be used for display purposes and providing username hints in reauthentication scenarios. |
+| `upn` | String | The username of the user. May be a phone number, email address, or unformatted string. Use it only for display purposes and for providing username hints in reauthentication scenarios. |
#### amr claim
Identities can authenticate in different ways, which may be relevant to the appl
| Value | Description | |--|-| | `pwd` | Password authentication, either a user's Microsoft password or a client secret of an application. |
-| `rsa` | Authentication was based on the proof of an RSA key, for example with the [Microsoft Authenticator app](https://aka.ms/AA2kvvu). This value also indicates whether authentication was done by a self-signed JWT with a service owned X509 certificate. |
+| `rsa` | Authentication was based on the proof of an RSA key, for example with the [Microsoft Authenticator app](https://aka.ms/AA2kvvu). This value also indicates the use of a self-signed JWT with a service-owned X509 certificate for authentication. |
| `otp` | One-time passcode using an email or a text message. |
-| `fed` | A federated authentication assertion (such as JWT or SAML) was used. |
+| `fed` | Indicates the use of a federated authentication assertion (such as JWT or SAML). |
| `wia` | Windows Integrated Authentication |
-| `mfa` | [Multi-factor authentication](../authentication/concept-mfa-howitworks.md) was used. When this claim is present, the other authentication methods are included. |
+| `mfa` | Indicates the use of [multi-factor authentication](../authentication/concept-mfa-howitworks.md). When this claim is present, the other authentication methods are also included. |
| `ngcmfa` | Equivalent to `mfa`, used for provisioning of certain advanced credential types. | | `wiaormfa`| The user used Windows or an MFA credential to authenticate. |
-| `none` | No authentication was done. |
+| `none` | Indicates no completed authentication. |
## Access token lifetime
-The default lifetime of an access token is variable. When issued, the default lifetime of an access token is assigned a random value ranging between 60-90 minutes (75 minutes on average). The variation improves service resilience by spreading access token demand over a time, which prevents hourly spikes in traffic to Azure AD.
+The default lifetime of an access token is variable. When issued, the Microsoft identity platform assigns a random value ranging between 60 and 90 minutes (75 minutes on average) as the default lifetime of an access token. The variation improves service resilience by spreading access token demand over time, which prevents hourly spikes in traffic to Azure AD.
-Tenants that donΓÇÖt use Conditional Access have a default access token lifetime of two hours for clients such as Microsoft Teams and Microsoft 365.
+Tenants that don't use Conditional Access have a default access token lifetime of two hours for clients such as Microsoft Teams and Microsoft 365.
-The lifetime of an access token can be adjusted to control how often the client application expires the application session, and how often it requires the user to reauthenticate (either silently or interactively). To override the default access token lifetime variation, set a static default access token lifetime by using [Configurable token lifetime (CTL)](active-directory-configurable-token-lifetimes.md).
+Adjust the lifetime of an access token to control how often the client application expires the application session, and how often it requires the user to reauthenticate (either silently or interactively). To override the default access token lifetime variation, use [Configurable token lifetime (CTL)](active-directory-configurable-token-lifetimes.md).
-Default token lifetime variation is applied to organizations that have Continuous Access Evaluation (CAE) enabled. Default token lifetime variation is applied even if the organizations use CTL policies. The default token lifetime for long lived token lifetime ranges from 20 to 28 hours. When the access token expires, the client must use the refresh token to silently acquire a new refresh token and access token.
+The Microsoft identity platform applies default token lifetime variation to organizations that have Continuous Access Evaluation (CAE) enabled, even if those organizations use CTL policies. The default lifetime for long-lived tokens ranges from 20 to 28 hours. When the access token expires, the client must use the refresh token to silently acquire a new refresh token and access token.
Organizations that use [Conditional Access sign-in frequency (SIF)](../conditional-access/howto-conditional-access-session-lifetime.md#user-sign-in-frequency) to enforce how frequently sign-ins occur can't override default access token lifetime variation. When organizations use SIF, the time between credential prompts for a client is the token lifetime that ranges from 60 - 90 minutes plus the sign-in frequency interval.
-Here's an example of how default token lifetime variation works with sign-in frequency. Let's say an organization sets sign-in frequency to occur every hour. The actual sign-in interval occurs anywhere between 1 hour to 2.5 hours because the token is issued with lifetime ranging from 60-90 minutes (due to token lifetime variation).
+Here's an example of how default token lifetime variation works with sign-in frequency. Let's say an organization sets sign-in frequency to occur every hour. Because the token is issued with a lifetime ranging from 60 to 90 minutes due to token lifetime variation, the actual sign-in interval occurs anywhere between 1 and 2.5 hours.
-If a user with a token with a one hour lifetime performs an interactive sign-in at 59 minutes (just before the sign-in frequency being exceeded), there's no credential prompt because the sign-in is below the SIF threshold. If a new token is issued with a lifetime of 90 minutes, the user wouldn't see a credential prompt for another hour and a half. When a silent renewal attempted of the 90-minute token lifetime is made, Azure AD requires a credential prompt because the total session length has exceeded the sign-in frequency setting of 1 hour. In this example, the time difference between credential prompts due to the SIF interval and token lifetime variation would be 2.5 hours.
+If a user with a one-hour token lifetime performs an interactive sign-in at 59 minutes, there's no credential prompt because the sign-in is below the SIF threshold. If a new token has a lifetime of 90 minutes, the user wouldn't see a credential prompt for another hour and a half. During a silent renewal attempt of that 90-minute token, Azure AD requires a credential prompt because the total session length has exceeded the sign-in frequency setting of 1 hour. In this example, the time difference between credential prompts due to the SIF interval and token lifetime variation would be 2.5 hours.
## Validate tokens
Not all applications should validate tokens. Only in specific scenarios should a
- Web APIs must validate access tokens sent to them by a client. They must only accept tokens containing their `aud` claim. - Confidential web applications like ASP.NET Core must validate ID tokens sent to them by using the user's browser in the hybrid flow, before allowing access to a user's data or establishing a session.
-If none of the above scenarios apply, the application won't benefit from validating the token, and may present a security and reliability risk if decisions are made based on the validity of the token. Public clients like native or single-page applications don't benefit from validating tokens because the application communicates directly with the IDP where SSL protection ensures the tokens are valid.
+If none of the above scenarios apply, there's no need to validate the token, and doing so may present a security and reliability risk when decisions are based on the validity of the token. Public clients like native or single-page applications don't benefit from validating tokens because the application communicates directly with the IDP, where SSL protection ensures the tokens are valid.
-APIs and web applications must only validate tokens that have an `aud` claim that matches the application. Other resources may have custom token validation rules. For example, tokens for Microsoft Graph won't validate according to these rules due to their proprietary format. Validating and accepting tokens meant for another resource is an example of the [confused deputy](https://cwe.mitre.org/data/definitions/441.html) problem.
+APIs and web applications must only validate tokens that have an `aud` claim that matches the application. Other resources may have custom token validation rules. For example, you can't validate tokens for Microsoft Graph according to these rules due to their proprietary format. Validating and accepting tokens meant for another resource is an example of the [confused deputy](https://cwe.mitre.org/data/definitions/441.html) problem.
If the application needs to validate an ID token or an access token, it should first validate the signature of the token and the issuer against the values in the OpenID discovery document. For example, the tenant-independent version of the document is located at [https://login.microsoftonline.com/common/.well-known/openid-configuration](https://login.microsoftonline.com/common/.well-known/openid-configuration).
The Azure AD middleware has built-in capabilities for validating access tokens,
### Validating the signature
-A JWT contains three segments, which are separated by the `.` character. The first segment is known as the **header**, the second as the **body**, and the third as the **signature**. The signature segment can be used to validate the authenticity of the token so that it can be trusted by the application.
+A JWT contains three segments separated by the `.` character. The first segment is the **header**, the second is the **body**, and the third is the **signature**. Use the signature segment to evaluate the authenticity of the token.
-Tokens issued by Azure AD are signed using industry standard asymmetric encryption algorithms, such as RS256. The header of the JWT contains information about the key and encryption method used to sign the token:
+Azure AD issues tokens signed using industry-standard asymmetric encryption algorithms, such as RS256. The header of the JWT contains information about the key and encryption method used to sign the token:
```json {
Tokens issued by Azure AD are signed using industry standard asymmetric encrypti
} ```
-The `alg` claim indicates the algorithm that was used to sign the token, while the `kid` claim indicates the particular public key that was used to validate the token.
+The `alg` claim indicates the algorithm used to sign the token, while the `kid` claim indicates the particular public key that can be used to validate the token.
-At any given point in time, Azure AD may sign an ID token using any one of a certain set of public-private key pairs. Azure AD rotates the possible set of keys on a periodic basis, so the application should be written to handle those key changes automatically. A reasonable frequency to check for updates to the public keys used by Azure AD is every 24 hours.
+At any given point in time, Azure AD may sign an ID token using any one of a certain set of public-private key pairs. Azure AD rotates the possible set of keys on a periodic basis, so write the application to handle those key changes automatically. A reasonable frequency to check for updates to the public keys used by Azure AD is every 24 hours.
Acquire the signing key data necessary to validate the signature by using the [OpenID Connect metadata document](v2-protocols-oidc.md#fetch-the-openid-configuration-document) located at:
https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration
The following information describes the metadata document: - Is a JSON object that contains several useful pieces of information, such as the location of the various endpoints required for doing OpenID Connect authentication.-- Includes a `jwks_uri`, which gives the location of the set of public keys that correspond to the private keys used to sign tokens. The JSON Web Key (JWK) located at the `jwks_uri` contains all of the public key information in use at that particular moment in time. The JWK format is described in [RFC 7517](https://tools.ietf.org/html/rfc7517). The application can use the `kid` claim in the JWT header to select the public key, from this document, which corresponds to the private key that has been used to sign a particular token. It can then do signature validation using the correct public key and the indicated algorithm.
+- Includes a `jwks_uri`, which gives the location of the set of public keys that correspond to the private keys used to sign tokens. The JSON Web Key (JWK) located at the `jwks_uri` contains all of the public key information in use at that particular moment in time. [RFC 7517](https://tools.ietf.org/html/rfc7517) describes the JWK format. The application can use the `kid` claim in the JWT header to select the public key in this document that corresponds to the private key used to sign a particular token. It can then do signature validation using the correct public key and the indicated algorithm.
> [!NOTE] > Use the `kid` claim to validate the token. Though v1.0 tokens contain both the `x5t` and `kid` claims, v2.0 tokens contain only the `kid` claim. Doing signature validation is outside the scope of this document. There are many open-source libraries available for helping with signature validation if necessary. However, the Microsoft identity platform has one token signing extension to the standards, which is custom signing keys.
-If the application has custom signing keys as a result of using the [claims-mapping](active-directory-claims-mapping.md) feature, append an `appid` query parameter that contains the application ID to get a `jwks_uri` that points to the signing key information of the application, which should be used for validation. For example: `https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` contains a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`.
+If the application has custom signing keys as a result of using the [claims-mapping](active-directory-claims-mapping.md) feature, append an `appid` query parameter that contains the application ID. For validation, use the `jwks_uri` that points to the signing key information of the application. For example: `https://login.microsoftonline.com/{tenant}/.well-known/openid-configuration?appid=6731de76-14a6-49ae-97bc-6eba6914391e` contains a `jwks_uri` of `https://login.microsoftonline.com/{tenant}/discovery/keys?appid=6731de76-14a6-49ae-97bc-6eba6914391e`.
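To make the flow concrete, here's a minimal sketch in Python using the PyJWT library (an assumption; any JOSE library with JWKS support works similarly). The tenant and audience values are placeholders, and an app with custom signing keys would point the client at the `appid`-qualified `jwks_uri` described above:

```python
import requests
import jwt                      # PyJWT, assumed installed via `pip install pyjwt[crypto]`
from jwt import PyJWKClient

# Placeholder; use a specific tenant, because the `common` document
# returns a templated issuer value that won't match a real token.
TENANT = "{tenant-id}"
config = requests.get(
    f"https://login.microsoftonline.com/{TENANT}/v2.0/.well-known/openid-configuration"
).json()

def validate(token: str, audience: str) -> dict:
    # Select the public key from the JWK set whose ID matches the
    # token's `kid` header, as described above.
    signing_key = PyJWKClient(config["jwks_uri"]).get_signing_key_from_jwt(token)
    # decode() verifies the signature, expiry, audience, and issuer together.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=audience,
        issuer=config["issuer"],
    )
```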
### Claims based authorization
-The business logic of an application determines how authorization should be handled. The general approach to authorization based on token claims, and which claims should be used, is described below.
+The business logic of an application determines how authorization should be handled. The general approach to authorization based on token claims, and which claims should be used, is described in the following sections.
After a token is validated with the correct `aud` claim, the token tenant, subject, actor must be authorized.
First, always check that the `tid` in a token matches the tenant ID used to stor
#### Subject
-Next, to determine if the token subject, such as the user (or app itself in the case of an app-only token), is authorized, either check for specific `sub` or `oid` claims, or check that the subject belongs to an appropriate role or group with the `roles`, `groups`, `wids` claims.
+Next, to determine if the token subject, such as the user (or app itself for an app-only token), is authorized, either check for specific `sub` or `oid` claims, or check that the subject belongs to an appropriate role or group with the `roles`, `groups`, `wids` claims.
For example, use the immutable claim values `tid` and `oid` as a combined key for application data and determining whether a user should be granted access.
The `roles`, `groups` or `wids` claims can also be used to determine if the subj
Lastly, when an app is acting for a user, this client app (the actor), must also be authorized. Use the `scp` claim (scope) to validate that the app has permission to perform an operation.
-Scopes are defined by the application, and the absence of `scp` claim means full actor permissions.
+The application defines the scopes, and the absence of the `scp` claim means full actor permissions.
> [!NOTE] > An application may handle app-only tokens (requests from applications without users, such as daemon apps) and want to authorize a specific application across multiple tenants, rather than individual service principal IDs. In that case, check for an app-only token using the `idtyp` optional claim and use the `appid` claim (for v1.0 tokens) or the `azp` claim (for v2.0 tokens) along with `tid` to determine authorization based on tenant and application ID.
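Putting the tenant, subject, and actor checks together, here's a hedged sketch in Python over the claims dictionary produced by token validation; the function and parameter names are illustrative, not part of any library:

```python
def authorize(claims: dict, expected_tid: str,
              allowed_subjects: set, required_scope: str) -> bool:
    # Tenant: the token must come from the tenant that owns the data.
    if claims.get("tid") != expected_tid:
        return False
    # Subject: tid + oid form a stable, immutable key for the principal.
    if f"{claims['tid']}:{claims.get('oid')}" not in allowed_subjects:
        return False
    # Actor: an absent scp claim means full actor permissions;
    # otherwise the required scope must be granted.
    scopes = claims.get("scp", "").split()
    return not scopes or required_scope in scopes
```

The `tid`:`oid` pair gives a lookup key that survives username changes, per the claim descriptions above.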
Scopes are defined by the application, and the absence of `scp` claim means full
## Token revocation
-Refresh tokens can be invalidated or revoked at any time, for different reasons. The reasons fall into the categories of timeouts and revocations.
+Refresh tokens can be invalidated or revoked at any time, for different reasons. The reasons fall into the categories of timeouts and revocations.
### Token timeouts
-When an organization uses [token lifetime configuration](active-directory-configurable-token-lifetimes.md), the lifetime of refresh tokens can be altered. It's expected that some tokens can go without use. For example, the user doesn't open the application for three months and then the token expires. Applications can encounter scenarios where the login server rejects a refresh token due to its age.
+Organizations can use [token lifetime configuration](active-directory-configurable-token-lifetimes.md) to alter the lifetime of refresh tokens. Some tokens can go unused. For example, the user doesn't open the application for three months and then the token expires. Applications can encounter scenarios where the login server rejects a refresh token due to its age.
-- MaxInactiveTime: If the refresh token hasn't been used within the time dictated by the MaxInactiveTime, the refresh token is no longer valid.-- MaxSessionAge: If MaxAgeSessionMultiFactor or MaxAgeSessionSingleFactor have been set to something other than their default (Until-revoked), then reauthentication is required after the time set in the MaxAgeSession* elapses. Examples:
+- MaxInactiveTime: Specifies the amount of time that a refresh token can go unused before it's no longer valid.
+- MaxSessionAge: If MaxAgeSessionMultiFactor or MaxAgeSessionSingleFactor is set to something other than the default (Until-revoked), the user must reauthenticate after the time set in the MaxAgeSession* value elapses. Examples:
- The tenant has a MaxInactiveTime of five days, and the user went on vacation for a week, and so Azure AD hasn't seen a new token request from the user in seven days. The next time the user requests a new token, they'll find their refresh token has been revoked, and they must enter their credentials again.
- - A sensitive application has a MaxAgeSessionSingleFactor of one day. If a user logs in on Monday, and on Tuesday (after 25 hours have elapsed), they'll be required to reauthenticate.
+ - A sensitive application has a MaxAgeSessionSingleFactor of one day. If a user logs in on Monday, and on Tuesday (after 25 hours have elapsed), they must reauthenticate.
### Token revocations
-Refresh tokens can be revoked by the server due to a change in credentials, or due to use or administrative action. Refresh tokens are in the classes of confidential clients and public clients.
+The server can revoke refresh tokens because of a change in credentials, or because of use or administrative action. Refresh tokens fall into two classes: those issued to confidential clients and those issued to public clients.
| Change | Password-based cookie | Password-based token | Non-password-based cookie | Non-password-based token | Confidential client token | ||--|-||--||
For more information, see [Primary Refresh Tokens](../devices/concept-primary-re
## Next steps -- Learn about [`id_tokens` in Azure AD](id-tokens.md).-- Learn about [permission and consent](permissions-consent-overview.md).
+- Learn more about the [security tokens used in Azure AD](security-tokens.md).
active-directory Custom Claims Provider Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-claims-provider-overview.md
Previously updated : 03/13/2023 Last updated : 03/31/2023
For an example using a custom claims provider with the **token issuance start**
- Learn how to [create and register a custom claims provider](custom-extension-get-started.md) with a sample Open ID Connect application. - If you already have a custom claims provider registered, you can configure a [SAML application](custom-extension-configure-saml-app.md) to receive tokens with claims sourced from an external store.
+- Learn more about custom claims providers with the [custom claims provider reference](custom-claims-provider-reference.md) article.
active-directory Custom Extension Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md
Previously updated : 03/13/2023 Last updated : 03/31/2023
The following screenshot demonstrates how to configure the Azure HTTP trigger fu
} ```
- The code starts with reading the incoming JSON object. Azure AD sends the JSON object to your API. In this example, it reads the correlation ID value. Then, the code returns a collection of claims, including the original correlation ID, the version of your Azure Function, date of birth and custom role that is returned to Azure AD.
+ The code starts with reading the incoming JSON object. Azure AD sends the [JSON object](./custom-claims-provider-reference.md) to your API. In this example, it reads the correlation ID value. Then, the code returns a collection of claims, including the original correlation ID, the version of your Azure Function, date of birth and custom role that is returned to Azure AD.
1. From the top menu, select **Get Function Url**, and copy the URL. In the next step, the function URL will be used and referred to as `{Function_Url}`.
active-directory Migrate Python Adal Msal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/migrate-python-adal-msal.md
Python Previously updated : 11/11/2019 Last updated : 03/30/2023
You can learn more about MSAL and get started with an [overview of the Microsoft
ADAL works with the Azure Active Directory (Azure AD) v1.0 endpoint. The Microsoft Authentication Library (MSAL) works with the Microsoft identity platform--formerly known as the Azure Active Directory v2.0 endpoint. The Microsoft identity platform differs from Azure AD v1.0 in that it: Supports:
- - Work and school accounts (Azure AD provisioned accounts)
- - Personal accounts (such as Outlook.com or Hotmail.com)
- - Your customers who bring their own email or social identity (such as LinkedIn, Facebook, Google) via the Azure AD B2C offering
+
+- Work and school accounts (Azure AD provisioned accounts)
+- Personal accounts (such as Outlook.com or Hotmail.com)
+- Your customers who bring their own email or social identity (such as LinkedIn, Facebook, Google) via the Azure AD B2C offering
- Is standards compatible with: - OAuth v2.0
For more information about MSAL, see [MSAL overview](./msal-overview.md).
### Scopes not resources
-ADAL Python acquires tokens for resources, but MSAL Python acquires tokens for scopes. The API surface in MSAL Python does not have resource parameter anymore. You would need to provide scopes as a list of strings that declare the desired permissions and resources that are requested. To see some example of scopes, see [Microsoft Graph's scopes](/graph/permissions-reference).
+ADAL Python acquires tokens for resources, but MSAL Python acquires tokens for scopes. The API surface in MSAL Python doesn't have a resource parameter anymore. You need to provide scopes as a list of strings that declare the desired permissions and resources that are requested. To see some examples of scopes, see [Microsoft Graph's scopes](/graph/permissions-reference).
-You can add the `/.default` scope suffix to the resource to help migrate your apps from the v1.0 endpoint (ADAL) to the Microsoft identity platform (MSAL). For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource is not in the URL form, but a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value as `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
+You can add the `/.default` scope suffix to the resource to help migrate your apps from the v1.0 endpoint (ADAL) to the Microsoft identity platform (MSAL). For example, for the resource value of `https://graph.microsoft.com`, the equivalent scope value is `https://graph.microsoft.com/.default`. If the resource isn't in URL form but is a resource ID of the form `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX`, you can still use the scope value `XXXXXXXX-XXXX-XXXX-XXXXXXXXXXXX/.default`.
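As a sketch of the mapping, assuming the app previously acquired a client-credentials token for a resource with ADAL; the client ID, secret, and tenant are placeholders:

```python
import msal  # assumed installed via `pip install msal`

# ADAL took a *resource*; MSAL takes *scopes*. The /.default suffix
# maps a v1.0 resource onto its equivalent scope.
resource = "https://graph.microsoft.com"
scopes = [f"{resource}/.default"]

app = msal.ConfidentialClientApplication(
    client_id="{client-id}",                                 # placeholder
    client_credential="{client-secret}",                     # placeholder
    authority="https://login.microsoftonline.com/{tenant}",  # placeholder
)
result = app.acquire_token_for_client(scopes=scopes)
```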
For more details about the different types of scopes, refer to [Permissions and consent in the Microsoft identity platform](./v2-permissions-and-consent.md) and the [Scopes for a Web API accepting v1.0 tokens](./msal-v1-app-scopes.md) articles. ### Error handling
-Azure Active Directory Authentication Library (ADAL) for Python uses the exception `AdalError` to indicate that there's been a problem. MSAL for Python typically uses error codes, instead. For more information, see [MSAL for Python error handling](msal-error-handling-python.md).
+ADAL for Python uses the exception `AdalError` to indicate that there's been a problem. MSAL for Python typically uses error codes, instead. For more information, see [MSAL for Python error handling](msal-error-handling-python.md).
### API changes The following table lists an API in ADAL for Python, and the one to use in its place in MSAL for Python:
-| ADAL for Python API | MSAL for Python API |
-| - | - |
-| [AuthenticationContext](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext) | [PublicClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.__init__) or [ConfidentialClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.__init__) |
-| N/A | [PublicClientApplication.acquire_token_interactive()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_interactive) |
-| N/A | [ConfidentialClientApplication.initiate_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.initiate_auth_code_flow) |
-| [acquire_token_with_authorization_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_authorization_code) | [ConfidentialClientApplication.acquire_token_by_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_auth_code_flow) |
-| [acquire_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token) | [PublicClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_silent) or [ConfidentialClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_silent) |
-| [acquire_token_with_refresh_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_refresh_token) | These two helpers are intended to be used during [migration](#migrate-existing-refresh-tokens-for-msal-python) only: [PublicClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_refresh_token) or [ConfidentialClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_refresh_token) |
-| [acquire_user_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_user_code) | [initiate_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.initiate_device_flow) |
-| [acquire_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_device_code) and [cancel_request_to_get_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.cancel_request_to_get_token_with_device_code) | [acquire_token_by_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_device_flow) |
-| [acquire_token_with_username_password()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_username_password) | [acquire_token_by_username_password()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_username_password) |
-| [acquire_token_with_client_credentials()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_credentials) and [acquire_token_with_client_certificate()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_certificate) | [acquire_token_for_client()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client) |
-| N/A | [acquire_token_on_behalf_of()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) |
-| [TokenCache()](https://adal-python.readthedocs.io/en/latest/#adal.TokenCache) | [SerializableTokenCache()](https://msal-python.readthedocs.io/en/latest/#msal.SerializableTokenCache) |
-| N/A | Cache with persistence, available from [MSAL Extensions](https://github.com/marstr/original-microsoft-authentication-extensions-for-python) |
+| ADAL for Python API | MSAL for Python API |
+| -- | - |
+| [AuthenticationContext](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext) | [PublicClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.__init__) or [ConfidentialClientApplication](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.__init__) |
+| N/A | [PublicClientApplication.acquire_token_interactive()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_interactive) |
+| N/A | [ConfidentialClientApplication.initiate_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.initiate_auth_code_flow) |
+| [acquire_token_with_authorization_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_authorization_code) | [ConfidentialClientApplication.acquire_token_by_auth_code_flow()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_auth_code_flow) |
+| [acquire_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token) | [PublicClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_silent) or [ConfidentialClientApplication.acquire_token_silent()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_silent) |
+| [acquire_token_with_refresh_token()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_refresh_token) | These two helpers are intended to be used during [migration](#migrate-existing-refresh-tokens-for-msal-python) only: [PublicClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_refresh_token) or [ConfidentialClientApplication.acquire_token_by_refresh_token()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_by_refresh_token) |
+| [acquire_user_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_user_code) | [initiate_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.initiate_device_flow) |
+| [acquire_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_device_code) and [cancel_request_to_get_token_with_device_code()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.cancel_request_to_get_token_with_device_code) | [acquire_token_by_device_flow()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_device_flow) |
+| [acquire_token_with_username_password()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_username_password) | [acquire_token_by_username_password()](https://msal-python.readthedocs.io/en/latest/#msal.PublicClientApplication.acquire_token_by_username_password) |
+| [acquire_token_with_client_credentials()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_credentials) and [acquire_token_with_client_certificate()](https://adal-python.readthedocs.io/en/latest/#adal.AuthenticationContext.acquire_token_with_client_certificate) | [acquire_token_for_client()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client) |
+| N/A | [acquire_token_on_behalf_of()](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_on_behalf_of) |
+| [TokenCache()](https://adal-python.readthedocs.io/en/latest/#adal.TokenCache) | [SerializableTokenCache()](https://msal-python.readthedocs.io/en/latest/#msal.SerializableTokenCache) |
+| N/A | Cache with persistence, available from [MSAL Extensions](https://github.com/marstr/original-microsoft-authentication-extensions-for-python) |
## Migrate existing refresh tokens for MSAL Python
-The Microsoft Authentication Library (MSAL) abstracts the concept of refresh tokens. MSAL Python provides an in-memory token cache by default so that you don't need to store, lookup, or update refresh tokens. Users will also see fewer sign-in prompts because refresh tokens can usually be updated without user intervention. For more information about the token cache, see [Custom token cache serialization in MSAL for Python](msal-python-token-cache-serialization.md).
+MSAL abstracts the concept of refresh tokens. MSAL Python provides an in-memory token cache by default so that you don't need to store, lookup, or update refresh tokens. Users will also see fewer sign-in prompts because refresh tokens can usually be updated without user intervention. For more information about the token cache, see [Custom token cache serialization in MSAL for Python](msal-python-token-cache-serialization.md).
The following code will help you migrate your refresh tokens managed by another OAuth2 library (including but not limited to ADAL Python) to be managed by MSAL for Python. One reason for migrating those refresh tokens is to prevent existing users from needing to sign in again when you migrate your app to MSAL for Python.
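As a minimal sketch of that idea, using the migration-only `acquire_token_by_refresh_token` helper from the table above; the client ID, tenant, and refresh token are placeholders:

```python
import msal

app = msal.PublicClientApplication(
    client_id="{client-id}",                                 # placeholder
    authority="https://login.microsoftonline.com/{tenant}",  # placeholder
)
legacy_refresh_token = "{refresh-token-from-old-cache}"      # placeholder

# Redeem the legacy refresh token once; MSAL caches the resulting
# tokens and manages renewal from here on.
result = app.acquire_token_by_refresh_token(
    legacy_refresh_token,
    scopes=["https://graph.microsoft.com/.default"],
)
```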
active-directory Registration Config Change Token Lifetime How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-change-token-lifetime-how-to.md
This article shows how to use Azure AD PowerShell to set an access token lifetim
To set an access token lifetime policy, download the [Azure AD PowerShell Module](https://www.powershellgallery.com/packages/AzureADPreview). Run the **Connect-AzureAD -Confirm** command.
-HereΓÇÖs an example policy that requires users to authenticate more frequently in your web app. This policy sets the lifetime of the access to the service principal of your web app. Create the policy and assign it to your service principal. You also need to get the ObjectId of your service principal.
+Here's an example policy that requires users to authenticate less frequently in your web app. This policy sets the lifetime of the access token for the service principal of your web app. Create the policy and assign it to your service principal. You also need to get the ObjectId of your service principal.
```powershell $policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"02:00:00"}}') -DisplayName "WebPolicyScenario" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"
active-directory Scenario Protected Web Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-configuration.md
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddMicrosoftIdentityWebApi(Configuration); services.Configure<JwtBearerOptions>(JwtBearerDefaults.AuthenticationScheme, options => {
- var existingOnTokenValidatedHandler = options.Events.OnTokenValidated;
- options.Events.OnTokenValidated = async context =>
- {
- await existingOnTokenValidatedHandler(context);
- // Your code to add extra configuration that will be executed after the current event implementation.
- options.TokenValidationParameters.ValidIssuers = new[] { /* list of valid issuers */ };
- options.TokenValidationParameters.ValidAudiences = new[] { /* list of valid audiences */};
- };
+ options.TokenValidationParameters.ValidAudiences = new[] { /* list of valid audiences */};
}); ```
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
As an administrator in Azure Active Directory, open PowerShell, run ``Connect-Az
```PowerShell Get-AzureADUserRegisteredDevice -ObjectId johndoe@contoso.com | Set-AzureADDevice -AccountEnabled $false ```+
+>[!NOTE]
+> For information on specific roles that can perform these steps, review [Azure AD built-in roles](../roles/permissions-reference.md).
## When access is revoked Once admins have taken the above steps, the user can't gain new tokens for any application tied to Azure Active Directory. The elapsed time between revocation and the user losing their access depends on how the application is granting access:
Once admins have taken the above steps, the user can't gain new tokens for any a
- Use [Azure AD SaaS App Provisioning](../app-provisioning/user-provisioning.md). Azure AD SaaS App Provisioning typically runs automatically every 20-40 minutes. [Configure Azure AD provisioning](../saas-apps/tutorial-list.md) to deprovision or deactivate disabled users in applications.
- - For applications that don't use Azure AD SaaS App Provisioning, use [Identity Manager (MIM)](/microsoft-identity-manager/mim-how-provision-users-adds) or a 3rd party solution to automate the deprovisioning of users.
+ - For applications that don't use Azure AD SaaS App Provisioning, use [Identity Manager (MIM)](/microsoft-identity-manager/mim-how-provision-users-adds) or a third-party solution to automate the deprovisioning of users.
- Identify and develop a process for applications that requires manual deprovisioning. Ensure admins can quickly run the required manual tasks to deprovision the user from these apps when needed. - [Manage your devices and applications with Microsoft Intune](/mem/intune/remote-actions/device-management). Intune-managed [devices can be reset to factory settings](/mem/intune/remote-actions/devices-wipe). If the device is unmanaged, you can [wipe the corporate data from managed apps](/mem/intune/apps/apps-selective-wipe). These processes are effective for removing potentially sensitive data from end users' devices. However, for either process to be triggered, the device must be connected to the internet. If the device is offline, the device will still have access to any locally stored data.
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 03/01/2023 Last updated : 03/31/2023
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## March 2023
+
+### Updated articles
+
+- [Invite internal users to B2B collaboration](invite-internal-users.md)
+- [Federation with SAML/WS-Fed identity providers for guest users](direct-federation.md)
+- [Add Azure Active Directory (Azure AD) as an identity provider for External Identities](azure-ad-account.md)
+- [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)
+- [Billing model for Azure AD External Identities](external-identities-pricing.md)
+- [Tutorial: Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md)
+ ## February 2023 ### Updated articles
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Add Facebook as an identity provider for External Identities](facebook-federation.md) - [Leave an organization as an external user](leave-the-organization.md) - [External Identities in Azure Active Directory](external-identities-overview.md)-- [External Identities documentation](index.yml)-
-## December 2022
-
-### Updated articles
--- [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)-- [Azure Active Directory B2B collaboration API and customization](customize-invitation-api.md)-- [Azure Active Directory External Identities: What's new](whats-new-docs.md)-- [Auditing and reporting a B2B collaboration user](auditing-and-reporting.md)
+- [External Identities documentation](index.yml)
active-directory Customize Workflow Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md
Emails sent out using Lifecycle workflows can be customized to have your own com
- A verified domain. To add a custom domain, see: [Managing custom domain names in your Azure Active Directory](../enterprise-users/domains-manage.md) - Custom Branding set within Azure AD if you want to have your custom branding used in emails. To set organizational branding within your Azure tenant, see: [Configure your company branding (preview)](../fundamentals/how-to-customize-branding.md).
+> [!NOTE]
+> The recommendation is to use a domain that has the appropriate DNS records to facilitate email validation, like SPF, DKIM, DMARC, and MX, because this complies with [RFC 2142](https://www.ietf.org/rfc/rfc2142.txt) for sending and receiving email. For more information, see [Learn more about Exchange Online Email Routing](/exchange/mail-flow-best-practices/mail-flow-best-practices).
+ After these prerequisites are satisfied, you'd follow these steps: 1. On the Lifecycle workflows page, select **Workflow settings (Preview)**.
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
To make use of workload identity risk, including the new **Risky workload identi
- Security Administrator - Security Operator - Security Reader- Users assigned the Conditional Access administrator role can create policies that use risk as a condition. ## Workload identity risk detections
We detect risk on workload identities across sign-in behavior and offline indica
| | | | | Azure AD threat intelligence | Offline | This risk detection indicates some activity that is consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources. | | Suspicious Sign-ins | Offline | This risk detection indicates sign-in properties or patterns that are unusual for this service principal. <br><br> The detection learns the baselines sign-in behavior for workload identities in your tenant in between 2 and 60 days, and fires if one or more of the following unfamiliar properties appear during a later sign-in: IP address / ASN, target resource, user agent, hosting/non-hosting IP change, IP country, credential type. <br><br> Because of the programmatic nature of workload identity sign-ins, we provide a timestamp for the suspicious activity instead of flagging a specific sign-in event. <br><br> Sign-ins that are initiated after an authorized configuration change may trigger this detection. |
-| Admin confirmed account compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the accountΓÇÖs risk history (via UI or API). |
+| Admin confirmed service principal compromised | Offline | This detection indicates an admin has selected 'Confirm compromised' in the Risky Workload Identities UI or using riskyServicePrincipals API. To see which admin has confirmed this account compromised, check the account's risk history (via UI or API). |
| Leaked Credentials | Offline | This risk detection indicates that the account's valid credentials have been leaked. This leak can occur when someone checks in the credentials in public code artifact on GitHub, or when the credentials are leaked through a data breach. <br><br> When the Microsoft leaked credentials service acquires credentials from GitHub, the dark web, paste sites, or other sources, they're checked against current valid credentials in Azure AD to find valid matches. |
-| Malicious application | Offline | This detection indicates that Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application. Note: These applications will show `DisabledDueToViolationOfServicesAgreement` on the `disabledByMicrosoftStatus` property on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph. To prevent them from being instantiated in your organization again in the future, you cannot delete these objects. |
-| Suspicious application | Offline | This detection indicates that Microsoft has identified an application that may be violating our terms of service, but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
+| Malicious application | Offline | This detection combines alerts from Identity Protection and Microsoft Defender for Cloud Apps to indicate when Microsoft has disabled an application for violating our terms of service. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application. Note: These applications will show `DisabledDueToViolationOfServicesAgreement` on the `disabledByMicrosoftStatus` property on the related [application](/graph/api/resources/application) and [service principal](/graph/api/resources/serviceprincipal) resource types in Microsoft Graph. To prevent them from being instantiated in your organization again in the future, you cannot delete these objects. |
+| Suspicious application | Offline | This detection indicates that Identity Protection or Microsoft Defender for Cloud Apps have identified an application that may be violating our terms of service but hasn't disabled it. We recommend [conducting an investigation](https://go.microsoft.com/fwlink/?linkid=2208429) of the application.|
| Anomalous service principal activity | Offline | This risk detection baselines normal administrative service principal behavior in Azure AD, and spots anomalous patterns of behavior like suspicious changes to the directory. The detection is triggered against the administrative service principal making the change or the object that was changed. | ## Identify risky workload identities
active-directory Assign User Or Group Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md
$assignments | ForEach-Object {
1. Get the enterprise application. Filter by DisplayName. ```http
- GET servicePrincipal?$filter=DisplayName eq '{appDisplayName}'
+ GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq '{appDisplayName}'
``` Record the following values from the response body:
$assignments | ForEach-Object {
1. Get the user by filtering by the user's principal name. Record the object ID of the user. ```http
- GET /users/{userPrincipalName}
+ GET https://graph.microsoft.com/v1.0/users/{userPrincipalName}
``` 1. Assign the user to the application. ```http
- POST /servicePrincipals/resource-servicePrincipal-id/appRoleAssignedTo
+ POST https://graph.microsoft.com/v1.0/servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo
{ "principalId": "33ad69f9-da99-4bed-acd0-3f24235cb296",
$assignments | ForEach-Object {
## Unassign users, and groups, from an application To unassign user and groups from the application, run the following query.
-1. Get the enterprise application. Filter by DisplayName.
+1. Get the enterprise application. Filter by displayName.
```http
- GET servicePrincipal?$filter=DisplayName eq '{appDisplayName}'
+ GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=displayName eq '{appDisplayName}'
``` 1. Get the list of appRoleAssignments for the application.
- ```http
- GET /servicePrincipals/{id}/appRoleAssignedTo
- ```
+ ```http
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}/appRoleAssignedTo
+ ```
1. Remove the appRoleAssignments by specifying the appRoleAssignment ID. ```http
- DELETE /servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
``` :::zone-end
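The same unassignment flow can be scripted. Here's a hedged Python sketch with the `requests` library against the endpoints above, assuming a suitably privileged Graph access token; the token and IDs are placeholders:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": "Bearer {access-token}"}   # placeholder token
sp_id = "{resource-servicePrincipal-id}"               # placeholder

# List every user and group assignment on the enterprise application
# (pagination via @odata.nextLink omitted for brevity).
assignments = requests.get(
    f"{GRAPH}/servicePrincipals/{sp_id}/appRoleAssignedTo", headers=headers
).json()["value"]

# Remove each assignment by its ID.
for assignment in assignments:
    requests.delete(
        f"{GRAPH}/servicePrincipals/{sp_id}/appRoleAssignedTo/{assignment['id']}",
        headers=headers,
    ).raise_for_status()
```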
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
To delete an enterprise application, you need:
Delete an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). 1. To get the list of service principals in your tenant, run the following query.
-
+ # [HTTP](#tab/http)
```http GET https://graph.microsoft.com/v1.0/servicePrincipals ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/list-serviceprincipal-csharp-snippets.md)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/list-serviceprincipal-javascript-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/list-serviceprincipal-go-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/list-serviceprincipal-powershell-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/list-serviceprincipal-php-snippets.md)]
+
+
+ 1. Record the ID of the enterprise app you want to delete. 1. Delete the enterprise application.-
+
+ # [HTTP](#tab/http)
```http DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id} ```
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/delete-serviceprincipal-csharp-snippets.md)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/delete-serviceprincipal-javascript-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/delete-serviceprincipal-go-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/delete-serviceprincipal-powershell-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/delete-serviceprincipal-php-snippets.md)]
+
+
:::zone-end ## Next steps - [Restore a deleted enterprise application](restore-application.md)-
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
You need to consent to the following permissions:
Run the following queries to review delegated permissions granted to an application.
-1. Get Service Principal using objectID
+1. Get service principal using the object ID.
```http
- GET /servicePrincipals/{id}
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}
``` Example: ```http
- GET /servicePrincipals/57443554-98f5-4435-9002-852986eea510
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/00063ffc-54e9-405d-b8f3-56124728e051
``` 1. Get all delegated permissions for the service principal ```http
- GET /servicePrincipals/{id}/oauth2PermissionGrants
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}/oauth2PermissionGrants
``` 1. Remove delegated permissions using oAuth2PermissionGrants ID. ```http
- DELETE /oAuth2PermissionGrants/{id}
+ DELETE https://graph.microsoft.com/v1.0/oAuth2PermissionGrants/{id}
``` ### Application permissions
Run the following queries to review application permissions granted to an applic
1. Get all application permissions for the service principal ```http
- GET /servicePrincipals/{servicePrincipal-id}/appRoleAssignments
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}/appRoleAssignments
``` 1. Remove application permissions using appRoleAssignment ID ```http
- DELETE /servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{resource-servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
``` ## Invalidate the refresh tokens
Run the following queries to remove appRoleAssignments of users or groups to the
1. Get Service Principal using objectID. ```http
- GET /servicePrincipals/{id}
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{id}
```

Example:

```http
- GET /servicePrincipals/57443554-98f5-4435-9002-852986eea510
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/57443554-98f5-4435-9002-852986eea510
```

1. Get Azure AD app role assignments using the object ID of the service principal.

```http
- GET /servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo
+ GET https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo
```

1. Revoke refresh tokens for users and groups assigned to the application using the appRoleAssignment ID.

```http
- DELETE /servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}/appRoleAssignedTo/{appRoleAssignment-id}
```

:::zone-end
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
To recover your enterprise application with its previous configurations, first d
Get-AzureADMSDeletedDirectoryObject -Id <id>
```
-Replace id with the object ID of the service principal that you want to restore.
+Replace ID with the object ID of the service principal that you want to restore.
:::zone-end
Replace id with the object ID of the service principal that you want to restore.
```powershell
Get-MgDirectoryDeletedItem -DirectoryObjectId <id>
```
-Replace id with the object ID of the service principal that you want to restore.
+Replace ID with the object ID of the service principal that you want to restore.
:::zone-end
Alternatively, if you want to get the specific enterprise application that was d
Restore-AzureADMSDeletedDirectoryObject -Id <id>
```
-Replace id with the object ID of the service principal that you want to restore.
+Replace ID with the object ID of the service principal that you want to restore.
:::zone-end
Replace id with the object ID of the service principal that you want to restore.
Restore-MgDirectoryObject -DirectoryObjectId <id>
```
-Replace id with the object ID of the service principal that you want to restore.
+Replace ID with the object ID of the service principal that you want to restore.
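+
+As a quick end-to-end check, assuming you already stored the object ID in `$id`, a sketch:
+
+```powershell
+# Confirm the object is still in the deleted items container
+Get-MgDirectoryDeletedItem -DirectoryObjectId $id
+
+# Restore it, then verify the service principal is active again
+Restore-MgDirectoryObject -DirectoryObjectId $id
+Get-MgServicePrincipal -ServicePrincipalId $id
+```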
:::zone-end
Replace id with the object ID of the service principal that you want to restore.
1. To restore the enterprise application, run the following query:
+ # [HTTP](#tab/http)
```http
POST https://graph.microsoft.com/v1.0/directory/deletedItems/{id}/restore
```
-Replace id with the object ID of the service principal that you want to restore.
+ # [C#](#tab/csharp)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/restore-directory-deleteditem-csharp-snippets.md)]
+
+ # [JavaScript](#tab/javascript)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/restore-directory-deleteditem-javascript-snippets.md)]
+
+ # [Java](#tab/java)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/java/restore-directory-deleteditem-java-snippets.md)]
+
+ # [Go](#tab/go)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/restore-directory-deleteditem-go-snippets.md)]
+
+ # [PowerShell](#tab/powershell)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/restore-directory-deleteditem-powershell-snippets.md)]
+
+ # [PHP](#tab/php)
+ [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/restore-directory-deleteditem-php-snippets.md)]
+
+
+
+Replace ID with the object ID of the service principal that you want to restore.
:::zone-end
Remove-AzureADMSDeletedDirectoryObject -Id <id>
To permanently delete a soft-deleted enterprise application, run the following query in Microsoft Graph Explorer.
+# [HTTP](#tab/http)
```http
DELETE https://graph.microsoft.com/v1.0/directory/deletedItems/{object-id}
```
+# [C#](#tab/csharp)
+
+# [JavaScript](#tab/javascript)
+
+# [Java](#tab/java)
+
+# [Go](#tab/go)
+
+# [PowerShell](#tab/powershell)
+
+# [PHP](#tab/php)
:::zone-end

## Next steps
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Previously updated : 06/24/2022 Last updated : 03/31/2023 ms.tool: azure-cli, azure-powershell
In this article, we set up a virtual machine to use managed identities to connec
## Create a resource group
-Create a resource group called **mi-test**. We'll use this resource group for all resources used in this tutorial.
+Create a resource group called **mi-test**. We use this resource group for all resources used in this tutorial.
- [Create a resource group using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) - [Create a resource group using the CLI](../../azure-resource-manager/management/manage-resource-groups-cli.md#create-resource-groups)
az vm create --resource-group <MyResourceGroup> --name <myVM> --image UbuntuLTS
# [Resource Manager Template](#tab/azure-resource-manager)
-Depending on your API version, you have to take [different steps](qs-configure-template-windows-vm.md#user-assigned-managed-identity). If your apiVersion is 2018-06-01, your user-assigned managed identities are stored in the userAssignedIdentities dictionary format and the ```<identityName>``` value is the name of a variable that you define in the variables section of your template. In the variable, you point to the user assigned managed identity that you want to assign.
+Depending on your API version, you have to take [different steps](qs-configure-template-windows-vm.md#user-assigned-managed-identity). If your apiVersion is 2018-06-01, your user-assigned managed identities are stored in the userAssignedIdentities dictionary format. The ```<identityName>``` value is the name of a variable that you define in the variables section of your template. In the variable, you point to the user assigned managed identity that you want to assign.
```json "variables": {
To use the sample below, you need to have the following NuGet packages:
- Microsoft.Azure.Cosmos - Microsoft.Azure.Management.CosmosDB
-In addition to the NuGet packages above, you also need to enable **Include prerelease** and then add **Azure.ResourceManager.CosmosDB**.
+In addition to the NuGet packages above, you also need to enable **Include prerelease** and then add **Azure.ResourceManager.CosmosDB**.
```csharp
using Azure.Identity;
namespace MITest
{
    static async Task Main(string[] args)
    {
+ // Replace the placeholders with your own values
        var subscriptionId = "Your subscription ID";
        var resourceGroupName = "Your resource group";
        var accountName = "Cosmos DB account name";
        var databaseName = "mi-test";
        var containerName = "container01";
+ // Authenticate to Azure using Managed Identity (system-assigned or user-assigned)
var tokenCredential = new DefaultAzureCredential();
- // create the management clientSS
- var managementClient = new CosmosDBManagementClient(subscriptionId, tokenCredential);
+ // Create the Cosmos DB management client using the subscription ID and token credential
+ var managementClient = new CosmosDBManagementClient(tokenCredential)
+ {
+ SubscriptionId = subscriptionId
+ };
- // create the data client
- var dataClient = new CosmosClient("https://[Account].documents.azure.com:443/", tokenCredential);
+ // Create the Cosmos DB data client using the account URL and token credential
+ var dataClient = new CosmosClient($"https://{accountName}.documents.azure.com:443/", tokenCredential);
- // create a new database
- var createDatabaseOperation = await managementClient.SqlResources.StartCreateUpdateSqlDatabaseAsync(resourceGroupName, accountName, databaseName,
+ // Create a new database using the management client
+ var createDatabaseOperation = await managementClient.SqlResources.StartCreateUpdateSqlDatabaseAsync(
+ resourceGroupName,
+ accountName,
+ databaseName,
            new SqlDatabaseCreateUpdateParameters(new SqlDatabaseResource(databaseName), new CreateUpdateOptions()));
        await createDatabaseOperation.WaitForCompletionAsync();
- // create a new container
- var createContainerOperation = await managementClient.SqlResources.StartCreateUpdateSqlContainerAsync(resourceGroupName, accountName, databaseName, containerName,
+ // Create a new container using the management client
+ var createContainerOperation = await managementClient.SqlResources.StartCreateUpdateSqlContainerAsync(
+ resourceGroupName,
+ accountName,
+ databaseName,
+ containerName,
            new SqlContainerCreateUpdateParameters(new SqlContainerResource(containerName), new CreateUpdateOptions()));
        await createContainerOperation.WaitForCompletionAsync();
- // create a new item
+ // Create a new item in the container using the data client
        var partitionKey = "pkey";
        var id = Guid.NewGuid().ToString();
        await dataClient.GetContainer(databaseName, containerName)
            .CreateItemAsync(new { id = id, _partitionKey = partitionKey }, new PartitionKey(partitionKey));
- // read back the item
+ // Read back the item from the container using the data client
        var pointReadResult = await dataClient.GetContainer(databaseName, containerName)
            .ReadItemAsync<dynamic>(id, new PartitionKey(partitionKey));
- // run a query
+ // Run a query to get all items from the container using the data client
        await dataClient.GetContainer(databaseName, containerName)
            .GetItemQueryIterator<dynamic>("SELECT * FROM c")
            .ReadNextAsync();
    }
}
```
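Before this code can read and write data, the VM's managed identity needs data-plane access to the Cosmos DB account. A hedged Azure PowerShell sketch follows; it assumes the built-in **Cosmos DB Built-in Data Contributor** role definition (`00000000-0000-0000-0000-000000000002`), and the account name, resource group, and `$principalId` values are placeholders:

```powershell
# Grant the managed identity data-plane access to the Cosmos DB account
New-AzCosmosDBSqlRoleAssignment -AccountName "myCosmosAccount" `
    -ResourceGroupName "mi-test" `
    -RoleDefinitionId "00000000-0000-0000-0000-000000000002" `
    -PrincipalId $principalId `
    -Scope "/"
```

The management-plane calls in the sample (creating the database and container) additionally require an Azure RBAC role, such as **DocumentDB Account Contributor**, on the account.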
active-directory Cross Tenant Synchronization Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-configure.md
Previously updated : 03/08/2023 Last updated : 03/31/2023
Currently, there isn't a way to delete a configuration on the **Configurations**
:::image type="content" source="./media/cross-tenant-synchronization-configure/enterprise-applications-configuration-delete.png" alt-text="Screenshot of the Enterprise applications Properties page showing how to delete a configuration." lightbox="./media/cross-tenant-synchronization-configure/enterprise-applications-configuration-delete.png":::
+#### Symptom - Users are skipped because SMS sign-in is enabled on the user
+Users are skipped from synchronization. The scoping step includes the following filter with status false: "Filter external users.alternativeSecurityIds EQUALS 'None'"
+
+**Cause**
+
+If SMS sign-in is enabled for a user, they will be skipped by the provisioning service.
+
+**Solution**
+
+Disable SMS sign-in for the users. The following script shows how you can disable SMS sign-in using PowerShell.
+
+```powershell
+##### Disable SMS Sign-in options for the users
+
+#### Import module
+Install-Module Microsoft.Graph.Users.Actions
+Install-Module Microsoft.Graph.Identity.SignIns
+Import-Module Microsoft.Graph.Users.Actions
+
+Connect-MgGraph -Scopes "User.Read.All", "Group.ReadWrite.All", "UserAuthenticationMethod.Read.All","UserAuthenticationMethod.ReadWrite","UserAuthenticationMethod.ReadWrite.All"
+
+##### The value for phoneAuthenticationMethodId is 3179e48a-750b-4051-897c-87b9720928f7
+
+$phoneAuthenticationMethodId = "3179e48a-750b-4051-897c-87b9720928f7"
+
+#### Get the User Details
+
+$userId = "objectid_of_the_user_in_Azure_AD"
+
+#### validate the value for SmsSignInState
+
+$smssignin = Get-MgUserAuthenticationPhoneMethod -UserId $userId
+
+{
+ if($smssignin.SmsSignInState -eq "ready"){
+ #### Disable Sms Sign-In for the user is set to ready
+
+ Disable-MgUserAuthenticationPhoneMethodSmSign -UserId $userId -PhoneAuthenticationMethodId $phoneAuthenticationMethodId
+ Write-Host "SMS sign-in disabled for the user" -ForegroundColor Green
+ }
+ else{
+ Write-Host "SMS sign-in status not set or found for the user " -ForegroundColor Yellow
+ }
+
+}
+
+##### End the script
+```
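+
+To confirm the change, read the phone method back; note that the exact `SmsSignInState` string you see may vary:
+
+```powershell
+# Re-read the phone authentication method and inspect SmsSignInState
+Get-MgUserAuthenticationPhoneMethod -UserId $userId | Select-Object Id, SmsSignInState
+```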
## Next steps

- [Tutorial: Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Previously updated : 03/24/2023 Last updated : 03/31/2023
Use the following table to better understand how to resolve errors that you find
|SystemForCrossDomainIdentity<br>ManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).|
|SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Ensure that you either map a single-valued attribute to the property that is throwing an error, update the value in the source to be single-valued, or remove the attribute from the mappings.|

+## Error codes for cross-tenant synchronization

Use the following table to better understand how to resolve errors that you find in the provisioning logs for [cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md). For any error codes that are missing, provide feedback by using the link at the bottom of this page.
Use the following table to better understand how to resolve errors that you find
> | AzureDirectoryB2BManagementPolicyCheckFailure | The cross-tenant synchronization policy allowing automatic redemption failed.<br/><br/>The synchronization engine checks to ensure that the administrator of the target tenant has created an inbound cross-tenant synchronization policy allowing automatic redemption. The synchronization engine also checks if the administrator of the source tenant has enabled an outbound policy for automatic redemption. | Ensure that the automatic redemption setting has been enabled for both the source and target tenants. For more information, see [Automatic redemption setting](../multi-tenant-organizations/cross-tenant-synchronization-overview.md#automatic-redemption-setting). |
> | AzureActiveDirectoryQuotaLimitExceeded | The number of objects in the tenant exceeds the directory limit.<br/><br/>Azure AD has limits for the number of objects that can be created in a tenant. | Check whether the quota can be increased. For information about the directory limits and steps to increase the quota, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). |
> |InvitationCreationFailure| The Azure AD provisioning service attempted to invite the user in the target tenant. That invitation failed.| Navigate to the user settings page in Azure AD > external users > collaboration restrictions and ensure that collaboration with that tenant is enabled.|
-> |AzureActiveDirectoryInsufficientRights|When a B2B user in the target tenant has a role other than User, Helpdesk Admin, or User Account Admin, they cannot be deleted.| Please remove the role(s) on the user in the target tenant in order to successfully delete the user in the target tenant.|
+> |AzureActiveDirectoryInsufficientRights|When a B2B user in the target tenant has a role other than User, Helpdesk Admin, or User Account Admin, they cannot be deleted.| Remove the role(s) on the user in the target tenant in order to successfully delete the user in the target tenant.|
+> |AzureActiveDirectoryForbidden|External collaboration settings have blocked invitations.|Navigate to user settings and ensure that [external collaboration settings](../external-identities/external-collaboration-settings-configure.md) are permitted.|
+> |InvitationCreationFailureInvalidPropertyValue|Potential causes:<br/>* The Primary SMTP Address is an invalid value.<br/>* UserType is neither guest nor member<br/>* Group email address isn't supported | Potential solutions:<br/>* The Primary SMTP Address has an invalid value. Resolving this issue will likely require updating the mail property of the source user. For more information, see [Prepare for directory synchronization to Microsoft 365](https://aka.ms/DirectoryAttributeValidations)<br/>* Ensure that the userType property is provisioned as type guest or member. This can be fixed by checking your attribute mappings to understand how the userType attribute is mapped.<br/>* The email address of the user matches with the email address of a group in the tenant. Update the email address for one of the two objects.|
+> |InvitationCreationFailureAmbiguousUser| The invited user has a proxy address that matches an internal user in the target tenant. The proxy address must be unique. | To resolve this error, delete the existing internal user in the target tenant or remove this user from sync scope.|
## Next steps
active-directory Easy Metrics Auth0 Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/easy-metrics-auth0-connector-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Easy Metrics Auth0 Connector
+description: Learn how to configure single sign-on between Azure Active Directory and Easy Metrics Auth0 Connector.
+Last updated : 03/31/2023
+# Azure Active Directory SSO integration with Easy Metrics Auth0 Connector
+
+In this article, you learn how to integrate Easy Metrics Auth0 Connector with Azure Active Directory (Azure AD). This application is a bridge between Azure AD and Auth0, federating authentication to Azure AD for Easy Metrics customers. When you integrate Easy Metrics Auth0 Connector with Azure AD, you can:
+
+* Control in Azure AD who has access to Easy Metrics Auth0 Connector.
+* Enable your users to be automatically signed-in to Easy Metrics Auth0 Connector with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Easy Metrics Auth0 Connector in a test environment. Easy Metrics Auth0 Connector supports only **SP** initiated single sign-on.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Easy Metrics Auth0 Connector, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Easy Metrics Auth0 Connector single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Easy Metrics Auth0 Connector application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Easy Metrics Auth0 Connector from the Azure AD gallery
+
+Add Easy Metrics Auth0 Connector from the Azure AD application gallery to configure single sign-on with Easy Metrics Auth0 Connector. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
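+If you prefer to script the test account instead, here's a minimal Microsoft Graph PowerShell sketch; the UPN domain and password value are placeholders, not values from this article:
+
+```powershell
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+
+# Create the B.Simon test user with a temporary password
+$passwordProfile = @{ Password = "<strong-temporary-password>"; ForceChangePasswordNextSignIn = $true }
+New-MgUser -DisplayName "B.Simon" `
+    -UserPrincipalName "b.simon@contoso.onmicrosoft.com" `
+    -MailNickname "b.simon" `
+    -PasswordProfile $passwordProfile `
+    -AccountEnabled
+```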
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Easy Metrics Auth0 Connector** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the value:
+ `urn:auth0:easymetrics:ups-saml-sso`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://easymetrics.auth0.com/login/callback?connection=ups-saml-sso&organization=org_T8ro1Kth3Gleygg5`
+
+ c. In the **Sign on URL** textbox, type the URL:
+ `https://azureapp.gcp-easymetrics.com`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificate-base64-download.png "Certificate")
+
+## Configure Easy Metrics Auth0 Connector SSO
+
+To configure single sign-on on **Easy Metrics Auth0 Connector** side, you need to send the **Certificate (PEM)** to [Easy Metrics Auth0 Connector support team](mailto:support@easymetrics.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Easy Metrics Auth0 Connector test user
+
+In this section, you create a user called Britta Simon in Easy Metrics Auth0 Connector. Work with [Easy Metrics Auth0 Connector support team](mailto:support@easymetrics.com) to add the users in the Easy Metrics Auth0 Connector platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Easy Metrics Auth0 Connector Sign-on URL where you can initiate the login flow.
+
+* Go to Easy Metrics Auth0 Connector Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Easy Metrics Auth0 Connector tile in the My Apps, this will redirect to Easy Metrics Auth0 Connector Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Easy Metrics Auth0 Connector you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Mymobilityhq Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mymobilityhq-tutorial.md
+
+ Title: Azure Active Directory SSO integration with myMobilityHQ
+description: Learn how to configure single sign-on between Azure Active Directory and myMobilityHQ.
+Last updated : 03/31/2023
+# Azure Active Directory SSO integration with myMobilityHQ
+
+In this article, you learn how to integrate myMobilityHQ with Azure Active Directory (Azure AD). myMobilityHQ is the secure portal that allows your company mobility managers to see a real-time dashboard of the status of their expatriate tax program. When you integrate myMobilityHQ with Azure AD, you can:
+
+* Control in Azure AD who has access to myMobilityHQ.
+* Enable your users to be automatically signed-in to myMobilityHQ with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for myMobilityHQ in a test environment. myMobilityHQ supports only **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with myMobilityHQ, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* myMobilityHQ single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the myMobilityHQ application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add myMobilityHQ from the Azure AD gallery
+
+Add myMobilityHQ from the Azure AD application gallery to configure single sign-on with myMobilityHQ. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
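+To assign the test user to the application from PowerShell instead of the portal, a sketch follows; the display-name filters and the default-access app role ID (`00000000-0000-0000-0000-000000000000`) are assumptions to adapt to your tenant:
+
+```powershell
+$user = Get-MgUser -Filter "displayName eq 'B.Simon'"
+$sp = Get-MgServicePrincipal -Filter "displayName eq 'myMobilityHQ'"
+
+# Assign the user to the app's default access role
+New-MgUserAppRoleAssignment -UserId $user.Id `
+    -PrincipalId $user.Id `
+    -ResourceId $sp.Id `
+    -AppRoleId "00000000-0000-0000-0000-000000000000"
+```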
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **myMobilityHQ** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using one of the following patterns:
+
+ | **Identifier** |
+ ||
+ | `urn:auth0:prod:s<COMPANYNAME>` |
+ | `urn:auth0:stage:s<COMPANYNAME>` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://stage.vialto.auth0app.com/login/callback?connection=s<COMPANYNAME>` |
+ | `https://prod.vialto.auth0app.com/login/callback?connection=s<COMPANYNAME>` |
+ | `https://auth-stage.vialto.com/login/callback?connection=s<COMPANYNAME>` |
+ | `https://auth.vialto.com/login/callback?connection=s<COMPANYNAME>` |
+
+ c. In the **Sign on URL** textbox, type one of the following URLs:
+
+ | **Sign on URL** |
+ |-|
+ | `https://mymobilityhq-stage.vialto.com`|
+ | `https://mymobilityhq.vialto.com` |
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [myMobilityHQ support team](mailto:gbl_vialto_iam_engineering_support@vialto.com) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure myMobilityHQ SSO
+
+To configure single sign-on on **myMobilityHQ** side, you need to send the **App Federation Metadata Url** to [myMobilityHQ support team](mailto:gbl_vialto_iam_engineering_support@vialto.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create myMobilityHQ test user
+
+In this section, you create a user called Britta Simon in myMobilityHQ. Work with [myMobilityHQ support team](mailto:gbl_vialto_iam_engineering_support@vialto.com) to add the users in the myMobilityHQ platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to myMobilityHQ Sign-on URL where you can initiate the login flow.
+
+* Go to myMobilityHQ Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the myMobilityHQ tile in the My Apps, this will redirect to myMobilityHQ Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure myMobilityHQ you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Proofpoint Security Awareness Training Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/proofpoint-security-awareness-training-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Proofpoint Security Awareness Training
+description: Learn how to configure single sign-on between Azure Active Directory and Proofpoint Security Awareness Training.
+Last updated : 03/31/2023
+# Azure Active Directory SSO integration with Proofpoint Security Awareness Training
+
+In this article, you learn how to integrate Proofpoint Security Awareness Training with Azure Active Directory (Azure AD). This application allows Azure AD to act as SAML IdP for authenticating users to Proofpoint Security Awareness Training. When you integrate Proofpoint Security Awareness Training with Azure AD, you can:
+
+* Control in Azure AD who has access to Proofpoint Security Awareness Training.
+* Enable your users to be automatically signed-in to Proofpoint Security Awareness Training with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Proofpoint Security Awareness Training in a test environment. Proofpoint Security Awareness Training supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Proofpoint Security Awareness Training, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Proofpoint Security Awareness Training single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Proofpoint Security Awareness Training application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Proofpoint Security Awareness Training from the Azure AD gallery
+
+Add Proofpoint Security Awareness Training from the Azure AD application gallery to configure single sign-on with Proofpoint Security Awareness Training. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Proofpoint Security Awareness Training** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ [ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")](common/edit-urls.png#lightbox)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>/api/auth/saml/metadata`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>/api/auth/saml/SSO`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following steps:
+
+ a. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>`
+
+ b. In the **Relay State** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>`
+
+ c. In the **Logout Url** textbox, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.<ENVIRONMENT>/api/auth/saml/SingleLogout`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL, Sign on URL, Relay State and Logout Url. Contact [Proofpoint Security Awareness Training Client support team](mailto:wst-support@proofpoint.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ [ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")](common/copy-metadataurl.png#lightbox)
+
+## Configure Proofpoint Security Awareness Training SSO
+
+To configure single sign-on on **Proofpoint Security Awareness Training** side, you need to send the **App Federation Metadata Url** to [Proofpoint Security Awareness Training support team](mailto:wst-support@proofpoint.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Proofpoint Security Awareness Training test user
+
+In this section, a user called B.Simon is created in Proofpoint Security Awareness Training. Proofpoint Security Awareness Training supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Proofpoint Security Awareness Training, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Proofpoint Security Awareness Training Sign-on URL where you can initiate the login flow.
+
+* Go to Proofpoint Security Awareness Training Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Proofpoint Security Awareness Training for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Proofpoint Security Awareness Training tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Proofpoint Security Awareness Training for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Proofpoint Security Awareness Training you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Seattletimessso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/seattletimessso-tutorial.md
+
+ Title: Azure Active Directory SSO integration with SeattleTimesSSO
+description: Learn how to configure single sign-on between Azure Active Directory and SeattleTimesSSO.
+Last updated : 03/31/2023
+# Azure Active Directory SSO integration with SeattleTimesSSO
+
+In this article, you learn how to integrate SeattleTimesSSO with Azure Active Directory (Azure AD). This is the Institutional Subscription SSO for The Seattle Times. When you integrate SeattleTimesSSO with Azure AD, you can:
+
+* Control in Azure AD who has access to SeattleTimesSSO.
+* Enable your users to be automatically signed-in to SeattleTimesSSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for SeattleTimesSSO in a test environment. SeattleTimesSSO supports **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with SeattleTimesSSO, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* SeattleTimesSSO single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the SeattleTimesSSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add SeattleTimesSSO from the Azure AD gallery
+
+Add SeattleTimesSSO from the Azure AD application gallery to configure single sign-on with SeattleTimesSSO. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **SeattleTimesSSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate")
+
+1. On the **Set up SeattleTimesSSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure SeattleTimesSSO SSO
+
+To configure single sign-on on **SeattleTimesSSO** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [SeattleTimesSSO support team](mailto:it-hostingadmin@seattletimes.com). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create SeattleTimesSSO test user
+
+In this section, you create a user called Britta Simon in SeattleTimesSSO. Work with [SeattleTimesSSO support team](mailto:it-hostingadmin@seattletimes.com) to add the users in the SeattleTimesSSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the SeattleTimesSSO for which you set up the SSO.
+
+* You can use Microsoft My Apps. When you click the SeattleTimesSSO tile in the My Apps, you should be automatically signed in to the SeattleTimesSSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure SeattleTimesSSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Vera Suite Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vera-suite-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Vera Suite
+description: Learn how to configure single sign-on between Azure Active Directory and Vera Suite.
+Last updated : 03/31/2023
+# Azure Active Directory SSO integration with Vera Suite
+
+In this article, you learn how to integrate Vera Suite with Azure Active Directory (Azure AD). Vera Suite helps auto dealers maintain cultures of safety, streamline operations and manage risk. Vera Suite offers dealership workforce and workplace compliance solutions for EHS, HR and F&I managers. When you integrate Vera Suite with Azure AD, you can:
+
+* Control in Azure AD who has access to Vera Suite.
+* Enable your users to be automatically signed-in to Vera Suite with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You configure and test Azure AD single sign-on for Vera Suite in a test environment. Vera Suite supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Vera Suite, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Vera Suite single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Vera Suite application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Vera Suite from the Azure AD gallery
+
+Add Vera Suite from the Azure AD application gallery to configure single sign-on with Vera Suite. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Vera Suite** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type the URL:
+ `https://logon.mykpa.com/identity/Saml2/`
+
+ b. In the **Reply URL** textbox, type the URL:
+ `https://logon.mykpa.com/identity/Saml2/Acs`
+
+ c. In the **Sign on URL** textbox, type one of the following URLs:
+
+ | **Sign on URL** |
+ |-|
+ | `https://www.verasuite.com` |
+ | `https://logon.mykpa.com` |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+## Configure Vera Suite SSO
+
+To configure single sign-on on **Vera Suite** side, you need to send the **App Federation Metadata Url** to [Vera Suite support team](mailto:support@kpa.io). They set this setting to have the SAML SSO connection set properly on both sides.
+
+### Create Vera Suite test user
+
+In this section, a user called B.Simon is created in Vera Suite. Vera Suite supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Vera Suite, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to Vera Suite Sign-on URL where you can initiate the login flow.
+
+* Go to Vera Suite Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Vera Suite tile in the My Apps, this will redirect to Vera Suite Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Vera Suite you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS) description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster Previously updated : 01/19/2023 Last updated : 03/30/2023
The CSI storage driver support on AKS allows you to natively use:
- [**Azure Blob storage**](azure-blob-csi.md) can be used to mount Blob storage (or object storage) as a file system into a container or pod. Using Blob storage enables your cluster to support applications that work with large unstructured datasets like log file data, images or documents, HPC, and others. Additionally, if you ingest data into [Azure Data Lake storage](../storage/blobs/data-lake-storage-introduction.md), you can directly mount and use it in AKS without configuring another interim filesystem.

> [!IMPORTANT]
-> Starting with Kubernetes version 1.26, in-tree persistent volume types *kubernetes.io/azure-disk* and *kubernetes.io/azure-file* are deprecated and will no longer be supported. Removing these drivers following their deprecation is not planned, however you should migrate to the corresponding CSI drivers *disks.csi.azure.com* and *file.csi.azure.com*. To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-to-csi-drivers].
+> Starting with Kubernetes version 1.26, in-tree persistent volume types *kubernetes.io/azure-disk* and *kubernetes.io/azure-file* are deprecated and will no longer be supported. Removing these drivers following their deprecation is not planned, however you should migrate to the corresponding CSI drivers *disks.csi.azure.com* and *file.csi.azure.com*. To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-csi-drivers].
>
> *In-tree drivers* refers to the storage drivers that are part of the core Kubernetes code, as opposed to the CSI drivers, which are plug-ins.

> [!NOTE]
+> It is recommended to delete the corresponding PersistentVolumeClaim object instead of the PersistentVolume object when deleting a CSI volume. The external provisioner in the CSI driver will react to the deletion of the PersistentVolumeClaim and based on its reclamation policy, it will issue the DeleteVolume call against the CSI volume driver commands to delete the volume. The PersistentVolume object will then be deleted.
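+>
+> For example, a minimal sketch, assuming a claim named `pvc-azuredisk` (a placeholder):
+>
+> ```powershell
+> # Delete the claim; the CSI driver reclaims the underlying volume per its reclamation policy
+> kubectl delete pvc pvc-azuredisk
+>
+> # The bound PersistentVolume is removed once reclamation completes
+> kubectl get pv
+> ```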
+>
> Azure Disks CSI driver v2 (preview) improves scalability and reduces pod failover latency. It uses shared disks to provision attachment replicas on multiple cluster nodes and integrates with the pod scheduler to ensure a node with an attachment replica is chosen on pod failover. Azure Disks CSI driver v2 (preview) also provides the ability to fine tune performance. If you're interested in participating in the preview, submit a request: [https://aka.ms/DiskCSIv2Preview](https://aka.ms/DiskCSIv2Preview). This preview version is provided without a service level agreement, and you can occasionally expect breaking changes while in preview. The preview version isn't recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

## Prerequisites
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
spec:
  runtimeClassName: wasmtime-slight-v1
  containers:
  - name: hello-slight
- image: ghcr.io/deislabs/containerd-wasm-shims/examples/slight-rust-hello:latest
+ image: ghcr.io/deislabs/containerd-wasm-shims/examples/slight-rust-hello:v0.3.3
command: ["/"] resources: requests:
api-management Api Management Debug Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-debug-policies.md
This article describes how to debug API Management policies using the [Azure API
* This feature is only available in the **Developer** tier of API Management. Each API Management instance supports only one concurrent debugging session.
-* This feature uses the built-in (service-level) all-access subscription for debugging. The [**Allow tracing**](api-management-howto-api-inspector.md#verify-allow-tracing-setting) setting must be enabled in this subscription.
+* This feature uses the built-in (service-level) all-access subscription (display name "Built-in all-access subscription") for debugging. The [**Allow tracing**](api-management-howto-api-inspector.md#verify-allow-tracing-setting) setting must be enabled in this subscription.
[!INCLUDE [api-management-tracing-alert](../../includes/api-management-tracing-alert.md)]
api-management Api Management Howto Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-autoscale.md
Previously updated : 02/02/2022 Last updated : 03/30/2023

+# Automatically scale an Azure API Management instance
-Azure API Management service instance can scale automatically based on a set of rules. This behavior can be enabled and configured through [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale) and is supported only in **Standard** and **Premium** tiers of the Azure API Management service.
+An Azure API Management service instance can scale automatically based on a set of rules. This behavior can be enabled and configured through [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale) and is currently supported only in the **Standard** and **Premium** tiers of the Azure API Management service.
The article walks through the process of configuring autoscale and suggests optimal configuration of autoscale rules.

> [!NOTE]
-> API Management service in the **Consumption** tier scales automatically based on the traffic - without any additional configuration needed.
+> * In service tiers that support multiple scale units, you can also [manually scale](upgrade-and-scale.md) your API Management instance.
+> * An API Management service in the **Consumption** tier scales automatically based on the traffic - without any additional configuration needed.
## Prerequisites
To follow the steps from this article, you must:
+ Have an active Azure subscription.
+ Have an Azure API Management instance. For more information, see [Create an Azure API Management instance](get-started-create-service-instance.md).
-+ Understand the concept of [Capacity of an Azure API Management instance](api-management-capacity.md).
-+ Understand [manual scaling process of an Azure API Management instance](upgrade-and-scale.md), including cost consequences.
++ Understand the concept of [capacity](api-management-capacity.md) of an API Management instance.
++ Understand [manual scaling](upgrade-and-scale.md) of an API Management instance, including cost consequences.

[!INCLUDE [premium-standard.md](../../includes/api-management-availability-premium-standard.md)]
To follow the steps from this article, you must:
Certain limitations and consequences of scaling decisions need to be considered before configuring autoscale behavior.
-+ The pricing tier of your API Management instance determines the [maximum number of units](upgrade-and-scale.md#upgrade-and-scale) you may scale to. The **Standard tier** can be scaled to 4 units. You can add any number of units to the **Premium** tier.
-+ The scaling process will take at least 20 minutes.
++ The pricing tier of your API Management instance determines the [maximum number of units](upgrade-and-scale.md#upgrade-and-scale) you may scale to. For example, the **Standard tier** can be scaled to 4 units. You can add any number of units to the **Premium** tier.
++ The scaling process takes at least 20 minutes.
+ If the service is locked by another operation, the scaling request will fail and retry automatically.
+ If your service instance is deployed in multiple regions (locations), only units in the **Primary location** can be autoscaled with Azure Monitor autoscale. Units in other locations can only be scaled manually.
+ If your service instance is configured with [availability zones](zone-redundancy.md) in the **Primary location**, be aware of the number of zones when configuring autoscaling. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones.
-## Enable and configure autoscale for Azure API Management service
+## Enable and configure autoscale for an API Management instance
-Follow the steps below to configure autoscale for an Azure API Management service:
+Follow these steps to configure autoscale for an Azure API Management service:
-1. Navigate to **Monitor** instance in the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your API Management instance.
+1. In the left menu, select **Scale out (auto-scale)**, and then select **Custom autoscale**.
- ![Azure Monitor](media/api-management-howto-autoscale/01.png)
+ :::image type="content" source="media/api-management-howto-autoscale/01.png" alt-text="Screenshot of scale-out options in the portal.":::
-2. Select **Autoscale** from the menu on the left.
+1. In the **Default** scale condition, select **Scale based on a metric**, and then select **Add a rule**.
- ![Azure Monitor autoscale resource](media/api-management-howto-autoscale/02.png)
+ :::image type="content" source="media/api-management-howto-autoscale/04.png" alt-text="Screenshot of configuring the default scale condition in the portal.":::
-3. Locate your Azure API Management service based on the filters in dropdown menus.
-4. Select the desired Azure API Management service instance.
-5. In the newly opened section, click the **Enable autoscale** button.
+1. Define a new scale-out rule.
- ![Azure Monitor autoscale enable](media/api-management-howto-autoscale/03.png)
-
-6. In the **Rules** section, click **+ Add a rule**.
-
- ![Azure Monitor autoscale add rule](media/api-management-howto-autoscale/04.png)
-
-7. Define a new scale out rule.
-
- For example, a scale out rule could trigger an addition of an Azure API Management unit, when the average capacity metric over the last 30 minutes exceeds 80%. The table below provides configuration for such a rule.
+ For example, a scale-out rule could trigger the addition of 1 API Management unit when the average capacity metric over the previous 30 minutes exceeds 80%. The following table provides the configuration for such a rule.
| Parameter | Value | Notes |
|-----------|-------|-------|
- | Metric source | Current resource | Define the rule based on the current Azure API Management resource metrics. |
+ | Metric source | Current resource | Define the rule based on the current API Management resource metrics. |
| *Criteria* | | |
- | Time aggregation | Average | |
- | Metric name | Capacity | Capacity metric is an Azure API Management metric reflecting usage of resources of an Azure API Management instance. |
- | Time grain statistic | Average | |
+ | Metric name | Capacity | Capacity metric is an API Management metric reflecting usage of resources by an Azure API Management instance. |
+ | Location | Select the primary location of the API Management instance | |
| Operator | Greater than | |
- | Threshold | 80% | The threshold for the averaged capacity metric. |
- | Duration (in minutes) | 30 | The timespan to average the capacity metric over is specific to usage patterns. The longer the time period is, the smoother the reaction will be - intermittent spikes will have less effect on the scale-out decision. However, it will also delay the scale-out trigger. |
- | *Action* | | |
+ | Metric threshold | 80% | The threshold for the averaged capacity metric. |
+ | Duration (in minutes) | 30 | The timespan to average the capacity metric over is specific to usage patterns. The longer the duration, the smoother the reaction will be. Intermittent spikes will have less effect on the scale-out decision. However, it will also delay the scale-out trigger. |
+ | Time grain statistic | Average | |
+ | *Action* | | |
| Operation | Increase count by | |
| Instance count | 1 | Scale out the Azure API Management instance by 1 unit. |
- | Cool down (minutes) | 60 | It takes at least 20 minutes for the Azure API Management service to scale out. In most cases, the cool down period of 60 minutes prevents from triggering many scale-outs. |
-
-8. Click **Add** to save the rule.
+ | Cool down (minutes) | 60 | It takes at least 20 minutes for the API Management service to scale out. In most cases, a cool down period of 60 minutes prevents triggering too many scale-outs. |
- ![Azure Monitor scale out rule](media/api-management-howto-autoscale/05.png)
+1. Select **Add** to save the rule.
+1. To add another rule, select **Add a rule**.
-9. Click again on **+ Add a rule**.
+ This time, a scale-in rule needs to be defined. It ensures resources aren't wasted when the usage of APIs decreases.
- This time, a scale in rule needs to be defined. It will ensure resources are not being wasted, when the usage of APIs decreases.
+1. Define a new scale-in rule.
-10. Define a new scale in rule.
-
- For example, a scale in rule could trigger a removal of an Azure API Management unit, when the average capacity metric over the last 30 minutes has been lower than 35%. The table below provides configuration for such a rule.
+ For example, a scale-in rule could trigger a removal of 1 API Management unit when the average capacity metric over the previous 30 minutes has been lower than 35%. The following table provides configuration for such a rule.
| Parameter | Value | Notes |
|-----------|-------|-------|
- | Metric source | Current resource | Define the rule based on the current Azure API Management resource metrics. |
+ | Metric source | Current resource | Define the rule based on the current API Management resource metrics. |
| *Criteria* | | |
| Time aggregation | Average | |
- | Metric name | Capacity | Same metric as the one used for the scale out rule. |
- | Time grain statistic | Average | |
+ | Metric name | Capacity | Same metric as the one used for the scale-out rule. |
+ | Location | Select the primary location of the API Management instance | |
| Operator | Less than | |
- | Threshold | 35% | Similarly to the scale out rule, this value heavily depends on the usage patterns of the Azure API Management. |
- | Duration (in minutes) | 30 | Same value as the one used for the scale out rule. |
+ | Threshold | 35% | As with the scale-out rule, this value heavily depends on the usage patterns of the API Management instance. |
+ | Duration (in minutes) | 30 | Same value as the one used for the scale-out rule. |
+ | Time grain statistic | Average | |
| *Action* | | |
- | Operation | Decrease count by | Opposite to what was used for the scale out rule. |
- | Instance count | 1 | Same value as the one used for the scale out rule. |
- | Cool down (minutes) | 90 | Scale in should be more conservative than a scale out, so the cool down period should be longer. |
-
-11. Click **Add** to save the rule.
-
- ![Azure Monitor scale in rule](media/api-management-howto-autoscale/06.png)
-
-12. Set the **maximum** number of Azure API Management units.
+ | Operation | Decrease count by | Opposite to what was used for the scale-out rule. |
+ | Instance count | 1 | Same value as the one used for the scale-out rule. |
+ | Cool down (minutes) | 90 | Scale-in should be more conservative than a scale-out, so the cool down period should be longer. |
- > [!NOTE]
- > Azure API Management has a limit of units an instance can scale out to. The limit depends on a service tier.
+1. Select **Add** to save the rule.
- ![Screenshot that highlights where to set the maximum number of Azure API Management units.](media/api-management-howto-autoscale/07.png)
+1. In **Instance limits**, select the **Minimum**, **Maximum**, and **Default** number of API Management units.
+ > [!NOTE]
+ > API Management has a limit on the number of units an instance can scale out to. The limit depends on the service tier.
+
+ :::image type="content" source="media/api-management-howto-autoscale/07.png" alt-text="Screenshot showing how to set instance limits in the portal.":::
-13. Click **Save**. Your autoscale has been configured.
+1. Select **Save**. Your autoscale has been configured.
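If you prefer to script this configuration rather than use the portal, the same rules can be expressed with the classic Az.Monitor autoscale cmdlets. The following is a hedged sketch, not part of the original article: the resource group, service name, and autoscale setting name are placeholders, and newer Az.Monitor releases deprecate these cmdlets in favor of the `*Object`-based equivalents.

```powershell
# Sketch: scale out at >80% average Capacity over 30 minutes, scale in at <35%,
# with instance limits of 1-4 units. All names are placeholders.
$apimId = (Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "apim-hello-world").Id

$scaleOut = New-AzAutoscaleRule -MetricName "Capacity" -MetricResourceId $apimId `
    -TimeGrain ([TimeSpan]::FromMinutes(1)) -MetricStatistic Average `
    -TimeWindow ([TimeSpan]::FromMinutes(30)) -Operator GreaterThan -Threshold 80 `
    -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount `
    -ScaleActionValue 1 -ScaleActionCooldown ([TimeSpan]::FromMinutes(60))

$scaleIn = New-AzAutoscaleRule -MetricName "Capacity" -MetricResourceId $apimId `
    -TimeGrain ([TimeSpan]::FromMinutes(1)) -MetricStatistic Average `
    -TimeWindow ([TimeSpan]::FromMinutes(30)) -Operator LessThan -Threshold 35 `
    -ScaleActionDirection Decrease -ScaleActionScaleType ChangeCount `
    -ScaleActionValue 1 -ScaleActionCooldown ([TimeSpan]::FromMinutes(90))

$autoscaleProfile = New-AzAutoscaleProfile -Name "Default" -DefaultCapacity 1 `
    -MinimumCapacity 1 -MaximumCapacity 4 -Rule $scaleOut, $scaleIn

# Attach the profile to the API Management instance.
Add-AzAutoscaleSetting -Name "apim-autoscale" -ResourceGroupName "myResourceGroup" `
    -Location "East US" -TargetResourceId $apimId -AutoscaleProfile $autoscaleProfile
```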
## Next steps
api-management Api Management Howto Log Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md
Title: How to log events to Azure Event Hubs in Azure API Management | Microsoft Docs
description: Learn how to log events to Azure Event Hubs in Azure API Management. Event Hubs is a highly scalable data ingress service.
Previously updated : 01/29/2018 Last updated : 03/31/2023
# How to log events to Azure Event Hubs in Azure API Management
-Azure Event Hubs is a highly scalable data ingress service that can ingest millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Event Hubs acts as the "front door" for an event pipeline, and once data is collected into an event hub, it can be transformed and stored using any real-time analytics provider or batching/storage adapters. Event Hubs decouples the production of a stream of events from the consumption of those events, so that event consumers can access the events on their own schedule.
This article describes how to log API Management events using Azure Event Hubs.
-## Create an Azure Event Hub
+Azure Event Hubs is a highly scalable data ingress service that can ingest millions of events per second so that you can process and analyze the massive amounts of data produced by your connected devices and applications. Event Hubs acts as the "front door" for an event pipeline, and once data is collected into an event hub, it can be transformed and stored using any real-time analytics provider or batching/storage adapters. Event Hubs decouples the production of a stream of events from the consumption of those events, so that event consumers can access the events on their own schedule.
+
+## Prerequisites
+
+* An API Management service instance. If you don't have one, see [Create an API Management service instance](get-started-create-service-instance.md).
+* An Azure Event Hubs namespace and event hub. For detailed steps, see [Create an Event Hubs namespace and an event hub using the Azure portal](../event-hubs/event-hubs-create.md).
+ > [!NOTE]
+ > The Event Hubs resource **can be** in a different subscription or even a different tenant than the API Management resource.
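If you don't have these resources yet, a minimal sketch of creating them with the Az.EventHub module follows; the names, location, and SKU are placeholders, and parameter names assume a recent module version.

```powershell
# Sketch: create an Event Hubs namespace and an event hub to receive
# API Management events. Names, location, and SKU are placeholders.
$resourceGroupName = "myResourceGroup"
$namespaceName     = "apim-logging-ns"
$eventHubName      = "apim-events"

New-AzEventHubNamespace -ResourceGroupName $resourceGroupName -Name $namespaceName `
    -Location "East US" -SkuName "Standard"

New-AzEventHub -ResourceGroupName $resourceGroupName -NamespaceName $namespaceName `
    -Name $eventHubName -PartitionCount 2
```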
+
+## Configure access to the event hub
+
+To log events to the event hub, you need to configure credentials for access from API Management. API Management supports either of the following two access mechanisms:
+
+* An Event Hubs connection string
+* A managed identity for your API Management instance.
+
+### Option 1: Configure Event Hubs connection string
-For detailed steps on how to create an event hub and get connection strings that you need to send and receive events to and from the Event Hub, see [Create an Event Hubs namespace and an event hub using the Azure portal](../event-hubs/event-hubs-create.md).
+To create an Event Hubs connection string, see [Get an Event Hubs connection string](../event-hubs/event-hubs-get-connection-string.md).
+
+* You can use a connection string for the Event Hubs namespace or for the specific event hub you use for logging from API Management.
+* The shared access policy for the connection string must enable at least **Send** permissions.
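As a hedged sketch (reusing the placeholder names above, and assuming the Az.EventHub module's `-Name` parameter identifies the authorization rule), you can create a Send-only policy and read its connection string like this:

```powershell
# Sketch: create a shared access policy with only Send rights on the event hub,
# then retrieve its primary connection string for the API Management logger.
New-AzEventHubAuthorizationRule -ResourceGroupName $resourceGroupName `
    -NamespaceName $namespaceName -EventHubName $eventHubName `
    -Name "ApimSendOnly" -Rights @("Send")

$keys = Get-AzEventHubKey -ResourceGroupName $resourceGroupName `
    -NamespaceName $namespaceName -EventHubName $eventHubName -Name "ApimSendOnly"
$keys.PrimaryConnectionString
```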
+
+### Option 2: Configure API Management managed identity
> [!NOTE]
-> The Event Hub resource **can be** in a different subscription or even a different tenant than the API Management resource
+> Using an API Management managed identity for logging events to an event hub is supported in API Management REST API version `2022-04-01-preview` or later.
+
+1. Enable a system-assigned or user-assigned [managed identity for API Management](api-management-howto-use-managed-service-identity.md) in your API Management instance.
+
+ * If you enable a user-assigned managed identity, take note of the identity's **Client ID**.
+
+1. Assign the identity the **Azure Event Hubs Data Sender** role, scoped to the Event Hubs namespace or to the event hub used for logging. To assign the role, use the [Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) or other Azure tools.
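For example, a hedged sketch of the role assignment for a system-assigned identity with Azure PowerShell; the resource names are placeholders, and the `Identity.PrincipalId` property assumes the identity is already enabled:

```powershell
# Sketch: grant the API Management system-assigned identity the
# "Azure Event Hubs Data Sender" role on the Event Hubs namespace.
$apim = Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "apim-hello-world"
$namespaceId = (Get-AzEventHubNamespace -ResourceGroupName "myResourceGroup" `
    -Name "apim-logging-ns").Id

New-AzRoleAssignment -ObjectId $apim.Identity.PrincipalId `
    -RoleDefinitionName "Azure Event Hubs Data Sender" -Scope $namespaceId
```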
## Create an API Management logger
-Now that you have an Event Hub, the next step is to configure a [Logger](/rest/api/apimanagement/current-ga/logger) in your API Management service so that it can log events to the Event Hub.
-
-API Management loggers are configured using the [API Management REST API](/rest/api/apimanagement/ApiManagementREST/API-Management-REST). For detailed request examples, see [how to create Loggers](/rest/api/apimanagement/current-ga/logger/create-or-update).
-
-## Configure log-to-eventhub policies
-
-Once your logger is configured in API Management, you can configure your log-to-eventhub policy to log the desired events. The log-to-eventhub policy can be used in either the inbound policy section or the outbound policy section.
-
-1. Browse to your APIM instance.
-2. Select the API tab.
-3. Select the API to which you want to add the policy. In this example, we're adding a policy to the **Echo API** in the **Unlimited** product.
-4. Select **All operations**.
-5. On the top of the screen, select the Design tab.
-6. In the Inbound or Outbound processing window, click the triangle (next to the pencil).
-7. Select the Code editor. For more information, see [How to set or edit policies](set-edit-policies.md).
-8. Position your cursor in the `inbound` or `outbound` policy section.
-9. In the window on the right, select **Advanced policies** > **Log to EventHub**. This inserts the `log-to-eventhub` policy statement template.
-
-```xml
-<log-to-eventhub logger-id="logger-id">
- @{
- return new JObject(
- new JProperty("EventTime", DateTime.UtcNow.ToString()),
- new JProperty("ServiceName", context.Deployment.ServiceName),
- new JProperty("RequestId", context.RequestId),
- new JProperty("RequestIp", context.Request.IpAddress),
- new JProperty("OperationName", context.Operation.Name)
- ).ToString();
+The next step is to configure a [logger](/rest/api/apimanagement/current-ga/logger) in your API Management service so that it can log events to the event hub.
+
+Create and manage API Management loggers by using the [API Management REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) directly or by using tools including [Azure PowerShell](/powershell/module/az.apimanagement/new-azapimanagementlogger), a Bicep template, or an Azure Resource Manager template.
+
+### Logger with connection string credentials
+
+For prerequisites, see [Configure Event Hubs connection string](#option-1-configure-event-hubs-connection-string).
+
+#### [PowerShell](#tab/PowerShell)
+
+The following example uses the [New-AzApiManagementLogger](/powershell/module/az.apimanagement/new-azapimanagementlogger) cmdlet to create a logger to an event hub by configuring a connection string.
+
+```powershell
+# API Management service-specific details
+$apimServiceName = "apim-hello-world"
+$resourceGroupName = "myResourceGroup"
+
+# Create logger
+$context = New-AzApiManagementContext -ResourceGroupName $resourceGroupName -ServiceName $apimServiceName
+New-AzApiManagementLogger -Context $context -LoggerId "ContosoLogger1" -Name "ApimEventHub" -ConnectionString "Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>" -Description "Event hub logger with connection string"
+```
+
+#### [Bicep](#tab/bicep)
+
+Include a snippet similar to the following in your Bicep template.
+
+```Bicep
+resource ehLoggerWithConnectionString 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+ name: 'ContosoLogger1'
+ parent: '<APIManagementInstanceName>'
+ properties: {
+ loggerType: 'azureEventHub'
+ description: 'Event hub logger with connection string'
+ credentials: {
+ connectionString: 'Endpoint=sb://<EventHubsNamespace>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>'
+ name: 'ApimEventHub'
}
-</log-to-eventhub>
+ }
+}
```
-Replace `logger-id` with the value you used for `{loggerId}` in the request URL to create the logger in the previous step.
-You can use any expression that returns a string as the value for the `log-to-eventhub` element. In this example, a string in JSON format containing the date and time, service name, request ID, request IP address, and operation name is logged.
+#### [ARM](#tab/arm)
+
+Include a JSON snippet similar to the following in your Azure Resource Manager template.
+
+```JSON
+{
+ "type": "Microsoft.ApiManagement/service/loggers",
+ "apiVersion": "2022-04-01-preview",
+ "name": "ContosoLogger1",
+ "properties": {
+ "loggerType": "azureEventHub",
+ "description": "Event hub logger with connection string",
+ "resourceId": "<EventHubsResourceID>"
+ "credentials": {
+ "connectionString": "Endpoint=sb://<EventHubsNamespace>/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<key>",
+ "name": "ApimEventHub"
+ }
+ }
+}
+```
++
+### Logger with system-assigned managed identity credentials
+
+For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity).
+
+#### [PowerShell](#tab/PowerShell)
-Click **Save** to save the updated policy configuration. As soon as it is saved the policy is active and events are logged to the designated Event Hub.
+Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with system-assigned managed identity credentials.
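For instance, here's a hedged sketch of that REST call using `Invoke-AzRestMethod` from the Az PowerShell module; the subscription, resource group, and service name segments are placeholders:

```powershell
# Sketch: create the logger with system-assigned managed identity credentials
# by calling the API Management REST API directly.
$body = @'
{
  "properties": {
    "loggerType": "azureEventHub",
    "description": "Event hub logger with system-assigned managed identity",
    "credentials": {
      "endpointAddress": "https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
      "identityClientId": "systemAssigned",
      "name": "ApimEventHub"
    }
  }
}
'@

Invoke-AzRestMethod -Method PUT -Payload $body `
    -Path "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.ApiManagement/service/<APIManagementInstanceName>/loggers/ContosoLogger1?api-version=2022-04-01-preview"
```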
+
+#### [Bicep](#tab/bicep)
+
+Include a snippet similar to the following in your Bicep template.
+
+```Bicep
+resource ehLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+ name: 'ContosoLogger1'
+ parent: '<APIManagementInstanceName>'
+ properties: {
+ loggerType: 'azureEventHub'
+ description: 'Event hub logger with system-assigned managed identity'
+ credentials: {
+ endpointAddress: 'https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
+ identityClientId: 'systemAssigned'
+ name: 'ApimEventHub'
+ }
+ }
+}
+```
+
+#### [ARM](#tab/arm)
+
+Include a JSON snippet similar to the following in your Azure Resource Manager template.
+
+```JSON
+{
+ "type": "Microsoft.ApiManagement/service/loggers",
+ "apiVersion": "2022-04-01-preview",
+ "name": "ContosoLogger1",
+ "properties": {
+ "loggerType": "azureEventHub",
+ "description": "Event hub logger with system-assigned managed identity",
+ "resourceId": "<EventHubsResourceID>",
+ "credentials": {
+ "endpointAddress": "https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "identityClientId": "SystemAssigned",
+ "name": "ApimEventHub"
+ }
+ }
+}
+```
+
+### Logger with user-assigned managed identity credentials
+
+For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity).
+
+#### [PowerShell](#tab/PowerShell)
+
+Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with user-assigned managed identity credentials.
+
+#### [Bicep](#tab/bicep)
+
+Include a snippet similar to the following in your Bicep template.
+
+```Bicep
+resource ehLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/loggers@2022-04-01-preview' = {
+ name: 'ContosoLogger1'
+ parent: '<APIManagementInstanceName>'
+ properties: {
+ loggerType: 'azureEventHub'
+ description: 'Event hub logger with user-assigned managed identity'
+ credentials: {
+ endpointAddress: 'https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>'
+ identityClientId: '<ClientID>'
+ name: 'ApimEventHub'
+ }
+ }
+}
+```
+
+#### [ARM](#tab/arm)
+
+Include a JSON snippet similar to the following in your Azure Resource Manager template.
+
+```JSON
+{
+ "type": "Microsoft.ApiManagement/service/loggers",
+ "apiVersion": "2022-04-01-preview",
+ "name": "ContosoLogger1",
+ "properties": {
+ "loggerType": "azureEventHub",
+ "description": "Event hub logger with user-assigned managed identity",
+ "resourceId": "<EventHubsResourceID>",
+ "credentials": {
+ "endpointAddress": "https://<EventHubsNamespace>.servicebus.windows.net/<EventHubName>",
+ "identityClientId": "<ClientID>",
+ "name": "ApimEventHub"
+ }
+ }
+}
+```
++
+## Configure log-to-eventhub policy
+
+Once your logger is configured in API Management, you can configure your [log-to-eventhub](log-to-eventhub-policy.md) policy to log the desired events. For example, use the `log-to-eventhub` policy in the inbound policy section to log requests, or in the outbound policy section to log responses.
+
+1. Browse to your API Management instance.
+1. Select **APIs**, and then select the API to which you want to add the policy. In this example, we're adding a policy to the **Echo API** in the **Unlimited** product.
+1. Select **All operations**.
+1. On the top of the screen, select the **Design** tab.
+1. In the Inbound processing or Outbound processing window, select the `</>` (code editor) icon. For more information, see [How to set or edit policies](set-edit-policies.md).
+1. Position your cursor in the `inbound` or `outbound` policy section.
+1. In the window on the right, select **Advanced policies** > **Log to EventHub**. This inserts the `log-to-eventhub` policy statement template.
+
+ ```xml
+ <log-to-eventhub logger-id="logger-id">
+ @{
+ return new JObject(
+ new JProperty("EventTime", DateTime.UtcNow.ToString()),
+ new JProperty("ServiceName", context.Deployment.ServiceName),
+ new JProperty("RequestId", context.RequestId),
+ new JProperty("RequestIp", context.Request.IpAddress),
+ new JProperty("OperationName", context.Operation.Name)
+ ).ToString();
+ }
+ </log-to-eventhub>
+ ```
+
+ 1. Replace `logger-id` with the name of the logger that you created in the previous step.
+ 1. You can use any expression that returns a string as the value for the `log-to-eventhub` element. In this example, a string in JSON format containing the date and time, service name, request ID, request IP address, and operation name is logged.
+
+1. Select **Save** to save the updated policy configuration. As soon as it's saved, the policy is active and events are logged to the designated event hub.
> [!NOTE]
-> The maximum supported message size that can be sent to an event hub from this API Management policy is 200 kilobytes (KB). If a message that is sent to an event hub is larger than 200 KB, it will be automatically truncated, and the truncated message will be transferred to event hubs.
+> The maximum supported message size that can be sent to an event hub from this API Management policy is 200 kilobytes (KB). If a message that is sent to an event hub is larger than 200 KB, it will be automatically truncated, and the truncated message will be transferred to the event hub.
## Preview the log in Event Hubs by using Azure Stream Analytics
You can preview the log in Event Hubs by using [Azure Stream Analytics queries](
1. In the Azure portal, browse to the event hub that the logger sends events to.
2. Under **Features**, select the **Process data** tab.
-3. On the **Enable real time insights from events** card, select **Explore**.
+3. On the **Enable real time insights from events** card, select **Start**.
4. You should be able to preview the log on the **Input preview** tab. If the data shown isn't current, select **Refresh** to see the latest events.

## Next steps
You can preview the log in Event Hubs by using [Azure Stream Analytics queries](
* [Receive messages with EventProcessorHost](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
* [Event Hubs programming guide](../event-hubs/event-hubs-programming-guide.md)
* Learn more about API Management and Event Hubs integration
- * [Logger entity reference](/rest/api/apimanagement/current-ga/logger)
- * [log-to-eventhub policy reference](log-to-eventhub-policy.md)
- * [Monitor your APIs with Azure API Management, Event Hubs, and Moesif](api-management-log-to-eventhub-sample.md)
+ * [Logger entity reference](/rest/api/apimanagement/current-preview/logger)
+ * [log-to-eventhub](log-to-eventhub-policy.md) policy reference
* Learn more about [integration with Azure Application Insights](api-management-howto-app-insights.md)-
-[publisher-portal]: ./media/api-management-howto-log-event-hubs/publisher-portal.png
-[create-event-hub]: ./media/api-management-howto-log-event-hubs/create-event-hub.png
-[event-hub-connection-string]: ./media/api-management-howto-log-event-hubs/event-hub-connection-string.png
-[event-hub-dashboard]: ./media/api-management-howto-log-event-hubs/event-hub-dashboard.png
-[receiving-policy]: ./media/api-management-howto-log-event-hubs/receiving-policy.png
-[sending-policy]: ./media/api-management-howto-log-event-hubs/sending-policy.png
-[event-hub-policy]: ./media/api-management-howto-log-event-hubs/event-hub-policy.png
-[add-policy]: ./media/api-management-howto-log-event-hubs/add-policy.png
api-management Api Management Howto Use Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-use-managed-service-identity.md
Title: Use managed identities in Azure API Management | Microsoft Docs
-description: Learn how to create system-assigned and user-assigned identities in API Management by using the Azure portal, PowerShell, and a Resource Manager template.
+description: Learn how to create system-assigned and user-assigned identities in API Management by using the Azure portal, PowerShell, and a Resource Manager template. Learn about supported scenarios with managed identities.
documentationcenter: '' Previously updated : 04/05/2022 Last updated : 03/31/2023
API Management is a trusted Microsoft service to the following resources. This a
|Azure Service Bus | [Trusted-access-to-azure-service-bus](../service-bus-messaging/service-bus-ip-filtering.md#trusted-microsoft-services)|
|Azure Event Hubs | [Trusted-access-to-azure-event-hub](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services)|
+### Log events to an event hub
+
+You can configure and use a system-assigned managed identity to access an event hub for logging events from an API Management instance. For more information, see [How to log events to Azure Event Hubs in Azure API Management](api-management-howto-log-event-hubs.md).
+
## Create a user-assigned managed identity

> [!NOTE]
You can use a user-assigned managed identity to access Azure Key Vault to store
You can use the user-assigned identity to authenticate to a backend service through the [authentication-managed-identity](authentication-managed-identity-policy.md) policy.
+### Log events to an event hub
+
+You can configure and use a user-assigned managed identity to access an event hub for logging events from an API Management instance. For more information, see [How to log events to Azure Event Hubs in Azure API Management](api-management-howto-log-event-hubs.md).
+
## <a name="remove"></a>Remove an identity

You can remove a system-assigned identity by disabling the feature through the portal or the Azure Resource Manager template in the same way that it was created. User-assigned identities can be removed individually. To remove all identities, set the identity type to `"None"`.
api-management Upgrade And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/upgrade-and-scale.md
Previously updated : 09/14/2022 Last updated : 03/30/2023
# Upgrade and scale an Azure API Management instance
Customers can scale an Azure API Management instance in a dedicated service tier
[!INCLUDE [premium-dev-standard-basic.md](../../includes/api-management-availability-premium-dev-standard-basic.md)]

> [!NOTE]
-> API Management instances in the **Consumption** tier scale automatically based on the traffic. Currently, you cannot upgrade from or downgrade to the Consumption tier.
+> * In the **Standard** and **Premium** tiers of the API Management service, you can configure an instance to [scale automatically](api-management-howto-autoscale.md) based on a set of rules.
+> * API Management instances in the **Consumption** tier scale automatically based on the traffic. Currently, you cannot upgrade from or downgrade to the Consumption tier.
The throughput and price of each unit depend on the [service tier](api-management-features.md) in which the unit exists. If you need to increase capacity for a service within a tier, you should add a unit. If the tier that is currently selected in your API Management instance doesn't allow adding more units, you need to upgrade to a higher-level tier.
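For example, a hedged sketch of adding one unit with Azure PowerShell follows; the resource names are placeholders, and the `Capacity` property and `-InputObject` pattern assume the Az.ApiManagement module:

```powershell
# Sketch: add a unit to the primary location of an API Management instance.
$apim = Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "apim-hello-world"
$apim.Capacity = $apim.Capacity + 1   # one more unit in the primary location
Set-AzApiManagement -InputObject $apim -PassThru
```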
You can choose between four dedicated tiers: **Developer**, **Basic**, **Standa
1. Navigate to your API Management instance in the [Azure portal](https://portal.azure.com/).
1. Select **Locations** from the menu.
1. Select the row with the location you want to scale.
-1. Specify the new number of **Units** - use the slider if available, or type the number.
+1. Specify the new number of **Units** - use the slider if available, or select or type the number.
1. Select **Apply**.

> [!NOTE]
api-management Validate Azure Ad Token Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-azure-ad-token-policy.md
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
> [!NOTE]
> To validate a JWT that was provided by another identity provider, API Management also provides the generic [`validate-jwt`](validate-jwt-policy.md) policy.

[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
| Attribute | Description | Required | Default |
| --------- | ----------- | -------- | ------- |
-| tenant-id | Tenant ID or URL of the Azure Active Directory service. | Yes | N/A |
-| header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | `Authorization` |
-| query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| token-value | Expression returning a string containing the token. You must not return `Bearer` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. | No | 401 |
-| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. | No | Default error message depends on validation issue, for example "JWT not present." |
-| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A |
+| tenant-id | Tenant ID or URL of the Azure Active Directory service. Policy expressions are allowed. | Yes | N/A |
+| header-name | The name of the HTTP header holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | `Authorization` |
+| query-parameter-name | The name of the query parameter holding the token. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer` as part of the token value. Policy expressions are allowed. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| failed-validation-httpcode | HTTP status code to return if the JWT doesn't pass validation. Policy expressions are allowed. | No | 401 |
+| failed-validation-error-message | Error message to return in the HTTP response body if the JWT doesn't pass validation. This message must have any special characters properly escaped. Policy expressions are allowed. | No | Default error message depends on validation issue, for example "JWT not present." |
+| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation. Policy expressions aren't allowed. | No | N/A |
++
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
| Attribute | Description | Required | Default |
| --------- | ----------- | -------- | ------- |
-| name | Name of the claim as it is expected to appear in the token. | Yes | N/A |
-| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
-| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
+| name | Name of the claim as it is expected to appear in the token. Policy expressions are allowed. | Yes | N/A |
+| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed.<br/><br/>Policy expressions are allowed. | No | all |
+| separator | String. Specifies a separator (for example, ",") to be used for extracting a set of values from a multi-valued claim. Policy expressions are allowed. | No | N/A |
## Usage
The `validate-azure-ad-token` policy enforces the existence and validity of a JS
### Usage notes
-* This policy can only be used with an Azure Active Directory tenant in the global Azure cloud. It doesn't support tenants configured in regional clouds or Azure clouds with restricted access.
-* Currently, this policy can only validate "v1" tokens from Azure Active Directory. Support for "v2" tokens will be added in a future release.
* You can use access restriction policies in different scopes for different purposes. For example, you can secure the whole API with Azure AD authentication by applying the `validate-azure-ad-token` policy on the API level, or you can apply it on the API operation level and use `claims` for more granular control.

## Examples
For more details on optional claims, read [Provide optional claims to your app](
```

## Related policies

* [API Management access restriction policies](api-management-access-restriction-policies.md)

[!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
app-service Manage Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-disaster-recovery.md
Title: Recover from region-wide failure
description: Learn how Azure App Service helps you maintain business continuity and disaster recovery (BCDR) capabilities. Recover your app from a region-wide failure in Azure.
Previously updated : 06/09/2020 Last updated : 03/31/2023
#Customer intent: As an Azure service administrator, I want to recover my App Service app from a region-wide failure in Azure.
# Move an App Service app to another region
+> [!IMPORTANT]
+> **Beginning 31 March 2025, we'll no longer place Azure App Service web applications in disaster recovery mode in the event of a disaster in an Azure region.** We strongly encourage you to implement [commonly used disaster recovery techniques](./overview-disaster-recovery.md) to prevent loss of functionality or data for your web apps if there's a regional disaster.
This article describes how to bring App Service resources back online in a different Azure region during a disaster that impacts an entire Azure region. When a disaster brings an entire Azure region offline, all App Service apps hosted in that region are placed in disaster recovery mode. Features are available to help you restore the app to a different region or recover files from the impacted app.

App Service resources are region-specific and can't be moved across regions. You must restore the app to a new app in a different region, and then create mirroring configurations or resources for the new app.

## Prerequisites

- None. [Restoring an automatic backup](manage-backup.md#restore-a-backup) usually requires **Standard** or **Premium** tier, but in disaster recovery mode, it's automatically enabled for your impacted app, regardless of which tier the impacted app is in.
If you only want to recover the files from the impacted app without restoring it
![Screenshot of a FileZilla file hierarchy. The wwwroot folder is highlighted, and its shortcut menu is visible. In that menu, Download is highlighted.](media/manage-disaster-recovery/download-content.png)

## Next steps
-[Backup and restore](manage-backup.md)
+[Backup and restore](manage-backup.md)
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Learn more about migrating from Bing Maps to Azure Maps.
<!End Links--> [road tiles]: /rest/api/maps/render/getmaptile [satellite tiles]: /rest/api/maps/render/getmapimagerytile
-[Cesium]: https://www.cesium.com/?azure-portal=true
-<!--[Cesium code samples]: https://samples.azuremaps.com/?search=Cesium&azure-portal=true-->
+[Cesium]: https://www.cesium.com/
+<!--[Cesium code samples]: https://samples.azuremaps.com/?search=Cesium-->
[Cesium plugin]: /samples/azure-samples/azure-maps-cesium/azure-maps-cesium-js-plugin
-[Leaflet]: https://leafletjs.com/?azure-portal=true
-[Leaflet code samples]: https://samples.azuremaps.com/?search=leaflet&azure-portal=true
+[Leaflet]: https://leafletjs.com/
+[Leaflet code samples]: https://samples.azuremaps.com/?search=leaflet
[Leaflet plugin]: /samples/azure-samples/azure-maps-leaflet/azure-maps-leaflet-plugin
-[OpenLayers]: https://openlayers.org/?azure-portal=true
-<!--[OpenLayers code samples]: https://samples.azuremaps.com/?search=openlayers&azure-portal=true-->
-[OpenLayers plugin]: /samples/azure-samples/azure-maps-OpenLayers/azure-maps-OpenLayers-plugin?azure-portal=true
+[OpenLayers]: https://openlayers.org/
+<!--[OpenLayers code samples]: https://samples.azuremaps.com/?search=openlayers-->
+[OpenLayers plugin]: /samples/azure-samples/azure-maps-OpenLayers/azure-maps-OpenLayers-plugin
<! If developing using a JavaScript framework, one of the following open-source projects may be useful ->
-[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps?azure-portal=true
-[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components?azure-portal=true
-[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps?azure-portal=true
-[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps?azure-portal=true
+[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
+[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components
+[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
+[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
<!-- Key features support ->
-[Contour layer code samples]: https://samples.azuremaps.com/?search=contour&azure-portal=true
-[Gridded Data Source module]: https://github.com/Azure-Samples/azure-maps-gridded-data-source?azure-portal=true
-[Animation module]: https://github.com/Azure-Samples/azure-maps-animations?azure-portal=true
+[Contour layer code samples]: https://samples.azuremaps.com/?search=contour
+[Gridded Data Source module]: https://github.com/Azure-Samples/azure-maps-gridded-data-source
+[Animation module]: https://github.com/Azure-Samples/azure-maps-animations
[Spatial IO module]: how-to-use-spatial-io-module.md [open-source modules for the web SDK]: open-source-projects.md#open-web-sdk-modules
Learn more about migrating from Bing Maps to Azure Maps.
[Polygon layer options]: /javascript/api/azure-maps-control/atlas.polygonlayeroptions [Add a popup]: map-add-popup.md
-[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content&azure-portal=true
-[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes&azure-portal=true
-[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins&azure-portal=true
+[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content
+[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes
+[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
[Popup class]: /javascript/api/azure-maps-control/atlas.popup [Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
Learn more about migrating from Bing Maps to Azure Maps.
[Tile layer options]: /javascript/api/azure-maps-control/atlas.tilelayeroptions [Show traffic on the map]: map-show-traffic.md
-[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options&azure-portal=true
-[Traffic control]: https://samples.azuremaps.com/?sample=traffic-controls&azure-portal=true
+[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options
+[Traffic control]: https://samples.azuremaps.com/?sample=traffic-controls
[Overlay an image]: map-add-image-layer.md [Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
Learn more about migrating from Bing Maps to Azure Maps.
[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions [Use the drawing tools module]: set-drawing-options.md
-[Drawing tools module code samples]: https://samples.azuremaps.com?azure-portal=true#drawing-tools-module
+[Drawing tools module code samples]: https://samples.azuremaps.com#drawing-tools-module
<!>
-[free account]: https://azure.microsoft.com/free/?azure-portal=true
+[free account]: https://azure.microsoft.com/free/
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
Learn more about migrating from Bing Maps to Azure Maps.
[atlas.data namespace]: /javascript/api/azure-maps-control/atlas.data [atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape [atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
-[turf js]: https://turfjs.org?azure-portal=true
+[turf js]: https://turfjs.org
[Azure Maps Glossary]: glossary.md [Add controls to a map]: map-add-controls.md [Localization support in Azure Maps]: supported-languages.md
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
The following table provides the Azure Maps service APIs that provide similar fu
| Traffic Incidents | [Traffic Incident Details] |
| Elevations | <sup>1</sup> |
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
The following service APIs aren't currently available in Azure Maps:

* Optimized Itinerary Routes - Planned. The Azure Maps Route API does support traveling salesman optimization for a single vehicle.
* Imagery Metadata – Primarily used for getting tile URLs in Bing Maps. Azure Maps has a standalone service for directly accessing map tiles.
-* Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md)
+* Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
Azure Maps also has these REST web
Learn more about the Azure Maps REST services.
[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md [Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md
-[free account]: https://azure.microsoft.com/free/?azure-portal=true
+[free account]: https://azure.microsoft.com/free/
[manage authentication in Azure Maps]: how-to-manage-authentication.md [Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
Learn more about the Azure Maps REST services.
[Calculate route]: /rest/api/maps/route/getroutedirections [Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
-[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path?azure-portal=true
-[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic?azure-portal=true
+[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path
+[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
[quadtree tile pyramid math]: zoom-levels-and-tile-grid.md
-[turf js]: https://turfjs.org?azure-portal=true
-[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite?azure-portal=true
+[turf js]: https://turfjs.org
+[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite
[Map image render]: /rest/api/maps/render/getmapimagerytile [Supported map styles]: supported-map-styles.md
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
The following table provides a high-level list of Bing Maps features and the rel
| Traffic Incidents | ✓ |
| Configuration driven maps | N/A |
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
Bing Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and highly secure, Azure Active Directory authentication.
Learn the details of how to migrate your Bing Maps application with these articl
[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md [azure.com]: https://azure.com [Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
- [Azure Maps Q&A]: /answers/topics/azure-maps.html
+[Azure Maps Q&A]: /answers/topics/azure-maps.html
[Azure support options]: https://azure.microsoft.com/support/options/ [Azure Maps product page]: https://azure.com/maps [Azure Maps product documentation]: https://aka.ms/AzureMapsDocs [Azure Maps code samples]: https://aka.ms/AzureMapsSamples [Azure Maps developer forums]: https://aka.ms/AzureMapsForums [Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
-[Azure Maps Blog]: https://aka.ms/AzureMapsBlog
+[Azure Maps Blog]: https://aka.ms/AzureMapsTechBlog
[Azure Maps Feedback (UserVoice)]: https://aka.ms/AzureMapsFeedback
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
# Tutorial: Migrate a web app from Google Maps
-Most web apps, which use Google Maps, are using the Google Maps V3 JavaScript SDK. The Azure Maps Web SDK is the suitable Azure-based SDK to migrate to. The Azure Maps Web SDK lets you customize interactive maps with your own content and imagery. You can run your app on both web or mobile applications. This control makes use of WebGL, allowing you to render large data sets with high performance. Develop with this SDK using JavaScript or TypeScript. In this tutorial, you will learn how to:
+Most web apps, which use Google Maps, are using the Google Maps V3 JavaScript SDK. The Azure Maps Web SDK is the suitable Azure-based SDK to migrate to. The Azure Maps Web SDK lets you customize interactive maps with your own content and imagery. You can run your app in both web and mobile applications. This control makes use of WebGL, allowing you to render large data sets with high performance. Develop with this SDK using JavaScript or TypeScript. This tutorial demonstrates:
> [!div class="checklist"]
> * Load a map
Most web apps, which use Google Maps, are using the Google Maps V3 JavaScript SD
> * Show traffic data
> * Add a ground overlay
-You will also learn:
+Also:
> [!div class="checklist"]
> * How to accomplish common mapping tasks using the Azure Maps Web SDK.
> * Best practices to improve performance and user experience.
-> * Tips on how to make your application using more advance features available in Azure Maps.
+> * Tips on how to use more advanced features available in Azure Maps in your application.
-If migrating an existing web application, check to see if it is using an open-source map control library. Examples of open-source map control library are: Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library, and you do not want to use the Azure Maps Web SDK. In such case, connect your application to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile)
-\| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The following points detail on how to use Azure Maps in some commonly used open-source map control libraries.
+If migrating an existing web application, check to see if it's using an open-source map control library. Examples of open-source map control libraries are Cesium, Leaflet, and OpenLayers. You can still migrate your application, even if it uses an open-source map control library, and you don't want to use the Azure Maps Web SDK. In that case, connect your application to the Azure Maps tile services ([road tiles]
+\| [satellite tiles]). The following points detail how to use Azure Maps in some commonly used open-source map control libraries.
-* Cesium - A 3D map control for the web. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-cesium) \| [Documentation](https://www.cesium.com/)
-* Leaflet ΓÇô Lightweight 2D map control for the web. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-leaflet) \| [Documentation](https://leafletjs.com/)
-* OpenLayers - A 2D map control for the web that supports projections. [Code sample](https://samples.azuremaps.com/?sample=render-azure-maps-in-openlayers) \| [Documentation](https://openlayers.org/)
+* Cesium - A 3D map control for the web. [Cesium documentation].
+* Leaflet ΓÇô Lightweight 2D map control for the web. [Leaflet code sample] \| [Leaflet documentation].
+* OpenLayers - A 2D map control for the web that supports projections. [OpenLayers documentation].
If developing using a JavaScript framework, one of the following open-source projects may be useful:
-* [ng-azure-maps](https://github.com/arnaudleclerc/ng-azure-maps) - Angular 10 wrapper around Azure maps.
-* [AzureMapsControl.Components](https://github.com/arnaudleclerc/AzureMapsControl.Components) - An Azure Maps Blazor component.
-* [Azure Maps React Component](https://github.com/WiredSolutions/react-azure-maps) - A react wrapper for the Azure Maps control.
-* [Vue Azure Maps](https://github.com/rickyruiz/vue-azure-maps) - An Azure Maps component for Vue application.
+* [ng-azure-maps] - Angular 10 wrapper around Azure maps.
+* [AzureMapsControl.Components] - An Azure Maps Blazor component.
+* [Azure Maps React Component] - A react wrapper for the Azure Maps control.
+* [Vue Azure Maps] - An Azure Maps component for Vue application.
## Prerequisites
The table lists key API features in the Google Maps V3 JavaScript SDK and the su
| Distance Matrix service | ✓ |
| Elevation service | <sup>1</sup> |
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in Azure Maps, see [Create elevation data & services](elevation-data-services.md).
## Notable differences in the web SDKs

The following are some key differences between the Google Maps and Azure Maps Web SDKs, to be aware of:

-- In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available. Embed the Web SDK package into apps. For more information, see this [documentation](how-to-use-map-control.md). This package also includes TypeScript definitions.
-- You first need to create an instance of the Map class in Azure Maps. Wait for the maps `ready` or `load` event to fire before programmatically interacting with the map. This order will ensure that all the map resources have been loaded and are ready to be accessed.
-- Both platforms use a similar tiling system for the base maps. The tiles in Google Maps are 256 pixels in dimension; however, the tiles in Azure Maps are 512 pixels in dimension. To get the same map view in Azure Maps as Google Maps, subtract Google Maps zoom level by the number one in Azure Maps.
-- Coordinates in Google Maps are referred to as `latitude,longitude`, while Azure Maps uses `longitude,latitude`. The Azure Maps format is aligned with the standard `[x, y]`, which is followed by most GIS platforms.
-- Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [*atlas.data* namespace](/javascript/api/azure-maps-control/atlas.data). There's also the [*atlas.Shape*](/javascript/api/azure-maps-control/atlas.shape) class. Use this class to wrap GeoJSON objects, to make it easy to update and maintain the data bindable way.
-- Coordinates in Azure Maps are defined as Position objects. A coordinate is specified as a number array in the format `[longitude,latitude]`. Or, it's specified using new atlas.data.Position(longitude, latitude).
+* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available. For more information on how to embed the Web SDK package into apps, see [Use the Azure Maps map control]. This package also includes TypeScript definitions.
+* You first need to create an instance of the Map class in Azure Maps. Wait for the map's `ready` or `load` event to fire before programmatically interacting with the map. This order ensures that all the map resources have been loaded and are ready to be accessed.
+* Both platforms use a similar tiling system for the base maps. The tiles in Google Maps are 256 pixels in dimension; however, the tiles in Azure Maps are 512 pixels in dimension. To get the same map view in Azure Maps as Google Maps, subtract one from the Google Maps zoom level.
+* Coordinates in Google Maps are referred to as `latitude,longitude`, while Azure Maps uses `longitude,latitude`. The Azure Maps format is aligned with the standard `[x, y]`, which is followed by most GIS platforms.
+* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [*atlas.data* namespace]. There's also the [*atlas.Shape*] class. Use this class to wrap GeoJSON objects, making them easy to update and maintain in a data-bindable way.
+* Coordinates in Azure Maps are defined as Position objects. A coordinate is specified as a number array in the format `[longitude,latitude]`, or using `new atlas.data.Position(longitude, latitude)`.
> [!TIP]
- > The Position class has a static helper method for importing coordinates that are in "latitude, longitude" format. The [atlas.data.Position.fromLatLng](/javascript/api/azure-maps-control/atlas.data.position) method can often be replaced with the `new google.maps.LatLng` method in Google Maps code.
-- Rather than specifying styling information on each shape that is added to the map, Azure Maps separates styles from the data. Data is stored in a data source, and is connected to rendering layers. Azure Maps code uses data sources to render the data. This approach provides enhanced performance benefit. Additionally, many layers support data-driven styling where business logic can be added to layer style options. This support changes how individual shapes are rendered within a layer based on properties defined in the shape.
+ > The Position class has a static helper method for importing coordinates that are in "latitude, longitude" format. The [atlas.data.Position.fromLatLng] method can often be replaced with the `new google.maps.LatLng` method in Google Maps code.
+* Rather than specifying styling information on each shape that is added to the map, Azure Maps separates styles from the data. Data is stored in a data source and is connected to rendering layers. Azure Maps code uses data sources to render the data. This approach provides a performance benefit. Additionally, many layers support data-driven styling where business logic can be added to layer style options. This support changes how individual shapes are rendered within a layer based on properties defined in the shape, as shown in the sketch following this list.
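The following is a minimal, hedged sketch of the data source and layer pattern these bullets describe. It assumes a map instance whose `ready` event has already fired; the variable names and styling values are illustrative assumptions, not taken from the original samples.

```javascript
//Create a data source and add it to the map.
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);

//Wrap a GeoJSON point in a Shape class so it's easy to update later.
var point = new atlas.Shape(new atlas.data.Point(new atlas.data.Position(-73.985, 40.747)));
datasource.add(point);

//Render the data with a bubble layer that uses a data-driven radius:
//use the 'size' property of a shape when present, otherwise fall back to 5 pixels.
map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
    radius: ['case', ['has', 'size'], ['get', 'size'], 5]
}));
```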
## Web SDK side-by-side examples
-This collection has code samples for each platform, and each sample covers a common use case. It's intended to help you migrate your web application from Google Maps V3 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript. However, Azure Maps also provides TypeScript definitions as an additional option through an [npm module](how-to-use-map-control.md).
+This collection has code samples for each platform, and each sample covers a common use case. It's intended to help you migrate your web application from Google Maps V3 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript. However, Azure Maps also provides TypeScript definitions as another option through an [npm module].
**Topics**
-* [Load a map](#load-a-map)
-* [Localizing the map](#localizing-the-map)
-* [Setting the map view](#setting-the-map-view)
-* [Adding a marker](#adding-a-marker)
-* [Adding a custom marker](#adding-a-custom-marker)
-* [Adding a polyline](#adding-a-polyline)
-* [Adding a polygon](#adding-a-polygon)
-* [Display an info window](#display-an-info-window)
-* [Import a GeoJSON file](#import-a-geojson-file)*
-* [Marker clustering](#marker-clustering)
-* [Add a heat map](#add-a-heat-map)
-* [Overlay a tile layer](#overlay-a-tile-layer)
-* [Show traffic data](#show-traffic-data)
-* [Add a ground overlay](#add-a-ground-overlay)
-* [Add KML data to the map](#add-kml-data-to-the-map)
+* [Load a map]
+* [Localizing the map]
+* [Setting the map view]
+* [Adding a marker]
+* [Adding a custom marker]
+* [Adding a polyline]
+* [Adding a polygon]
+* [Display an info window]
+* [Import a GeoJSON file]
+* [Marker clustering]
+* [Add a heat map]
+* [Overlay a tile layer]
+* [Show traffic data]
+* [Add a ground overlay]
+* [Add KML data to the map]
### Load a map
Both SDKs have the same steps to load a map:
* Add a reference to the Map SDK.
-* Add a `div` tag to the body of the page, which will act as a placeholder for the map.
+* Add a `div` tag to the body of the page, which acts as a placeholder for the map.
* Create a JavaScript function that gets called when the page has loaded.
* Create an instance of the respective map class.
Both SDKs have the same steps to load a map:
* Google Maps requires an account key to be specified in the script reference of the API. Authentication credentials for Azure Maps are specified as options of the map class. This credential can be a subscription key or Azure Active Directory information.
* Google Maps accepts a callback function in the script reference of the API, which is used to call an initialization function to load the map. With Azure Maps, the onload event of the page should be used.
-* When referencing the `div` element in which the map will be rendered, the `Map` class in Azure Maps only requires the `id` value while Google Maps requires a `HTMLElement` object.
+* When referencing the `div` element in which the map renders, the `Map` class in Azure Maps only requires the `id` value while Google Maps requires an `HTMLElement` object.
* Coordinates in Azure Maps are defined as Position objects, which can be specified as a simple number array in the format `[longitude, latitude]`.
* The zoom level in Azure Maps is one level lower than the zoom level in Google Maps. This discrepancy is because of the difference in tile sizes between the two platforms.
* Azure Maps doesn't add any navigation controls to the map canvas. So, by default, a map doesn't have zoom buttons and map style buttons. But, there are control options for adding a map style picker, zoom buttons, compass or rotation control, and a pitch control.
-* An event handler is added in Azure Maps to monitor the `ready` event of the map instance. This event will fire when the map has finished loading the WebGL context and all the needed resources. Add any code you want to run after the map completes loading, to this event handler.
+* An event handler is added in Azure Maps to monitor the `ready` event of the map instance. This event fires when the map has finished loading the WebGL context and all the needed resources. Add any code that you want to run after the map finishes loading to this event handler, as in the sketch following this list.
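As a point of reference before the side-by-side examples, here's a minimal, hedged sketch of the Azure Maps load pattern these bullets describe. The element ID, camera values, and key placeholder are assumptions.

```javascript
//Create a map instance targeting a <div id="myMap"></div> element.
var map = new atlas.Map('myMap', {
    center: [-73.985, 40.747],  //[longitude, latitude]
    zoom: 11,                   //One level lower than the equivalent Google Maps zoom of 12.
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your-Azure-Maps-Subscription-key>'
    }
});

//Wait until the map resources are ready before interacting with the map.
map.events.add('ready', function () {
    //Add controls, data sources, and layers here.
});
```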
The following basic example uses Google Maps to load a map centered over New York at coordinates longitude: -73.985, latitude: 40.747, at zoom level 12.
Display a Google Map centered and zoomed over a location.
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Simple Google Maps](media/migrate-google-maps-web-app/simple-google-map.png)
Load a map with the same view in Azure Maps along with a map style control and z
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Simple Azure Maps](media/migrate-google-maps-web-app/simple-azure-maps.png)
-Find detailed documentation on how to set up and use the Azure Maps map control in a web app, by clicking [here](how-to-use-map-control.md).
+For more information on how to set up and use the Azure Maps map control in a web app, see [Use the Azure Maps map control].
> [!NOTE]
> Unlike Google Maps, Azure Maps does not require an initial center and a zoom level to load the map. If this information is not provided when loading the map, Azure Maps will try to determine the user's city and will center and zoom the map there.
-**Additional resources:**
+**More resources:**
-* Azure Maps also provides navigation controls for rotating and pitching the map view, as documented [here](map-add-controls.md).
+* For more information on navigation controls for rotating and pitching the map view, see [Add controls to a map].
### Localizing the map
To localize Google Maps, add language and region parameters.
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?callback=initMap&key={api-Key}&language={language-code}&region={region-code}" async defer></script> ```
-Here is an example of Google Maps with the language set to "fr-FR".
+Here's an example of Google Maps with the language set to "fr-FR".
![Google Maps localization](media/migrate-google-maps-web-app/google-maps-localization.png)
#### After: Azure Maps
-Azure Maps provides two different ways of setting the language and regional view of the map. The first option is to add this information to the global *atlas* namespace. It will result in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to "Auto":
+Azure Maps provides two different ways of setting the language and regional view of the map. The first option is to add this information to the global *atlas* namespace. It results in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to "Auto":
```javascript atlas.setLanguage('fr-FR');
map = new atlas.Map('myMap', {
> [!NOTE] > With Azure Maps, it is possible to load multiple map instances on the same page with different language and region settings. It is also possible to update these settings in the map after it has loaded.
-Find a detailed list of [supported languages](supported-languages.md) in Azure Maps.
+For more information on supported languages, see [Localization support in Azure Maps].
-Here is an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
+Here's an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
![Azure Maps localization](media/migrate-google-maps-web-app/azure-maps-localization.png)
map.setStyle({
![Azure Maps set view](media/migrate-google-maps-web-app/azure-maps-set-view.jpeg)
-**Additional resources:**
+**More resources:**
-* [Choose a map style](choose-map-style.md)
-* [Supported map styles](supported-map-styles.md)
+* [Choose a map style]
+* [Supported map styles]
### Adding a marker
var marker = new google.maps.Marker({
**After: Azure Maps using HTML Markers**
-In Azure Maps, use HTML markers to display a point on the map. HTML markers are recommended for apps that only need to display a small number of points on the map. To use an HTML marker, create an instance of the `atlas.HtmlMarker` class. Set the text and position options, and add the marker to the map using the `map.markers.add` method.
+In Azure Maps, use HTML markers to display a point on the map. HTML markers are recommended for apps that only need to display a few points on the map. To use an HTML marker, create an instance of the `atlas.HtmlMarker` class. Set the text and position options, and add the marker to the map using the `map.markers.add` method.
```javascript //Create a HTML marker and add it to the map.
For a Symbol layer, add the data to a data source. Attach the data source to the
![Azure Maps symbol layer](media/migrate-google-maps-web-app/azure-maps-symbol-layer.png)
-**Additional resources:**
+**More resources:**
-- [Create a data source](create-data-source-web-sdk.md)-- [Add a Symbol layer](map-add-pin.md)-- [Add a Bubble layer](map-add-bubble-layer.md)-- [Cluster point data](clustering-point-data-web-sdk.md)-- [Add HTML Markers](map-add-custom-html.md)-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)-- [Symbol layer icon options](/javascript/api/azure-maps-control/atlas.iconoptions)-- [Symbol layer text option](/javascript/api/azure-maps-control/atlas.textoptions)-- [HTML marker class](/javascript/api/azure-maps-control/atlas.htmlmarker)-- [HTML marker options](/javascript/api/azure-maps-control/atlas.htmlmarkeroptions)
+* [Create a data source]
+* [Add a Symbol layer]
+* [Add a Bubble layer]
+* [Clustering point data in the Web SDK]
+* [Add HTML Markers]
+* [Use data-driven style expressions]
+* [Symbol layer icon options]
+* [Symbol layer text option]
+* [HTML marker class]
+* [HTML marker options]
### Adding a custom marker
-You may use Custom images to represent points on a map. The map below uses a custom image to display a point on the map. The point is displayed at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
+You can use custom images to represent points on a map. The following map uses a custom image to display a point on the map. The point is displayed at latitude: 51.5 and longitude: -0.2. The anchor offsets the position of the marker, so that the point of the pushpin icon aligns with the correct position on the map.
<center>
Symbol layers in Azure Maps support custom images as well. First, load the image
> [!TIP]
> To render advanced custom points, use multiple rendering layers together. For example, let's say you want to have multiple pushpins that have the same icon on different colored circles. Instead of creating an image for each color overlay, add a symbol layer on top of a bubble layer and have the pushpins reference the same data source. This approach is more efficient than creating and maintaining many different images.
-**Additional resources:**
+**More resources:**
-- [Create a data source](create-data-source-web-sdk.md)-- [Add a Symbol layer](map-add-pin.md)-- [Add HTML Markers](map-add-custom-html.md)-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)-- [Symbol layer icon options](/javascript/api/azure-maps-control/atlas.iconoptions)-- [Symbol layer text option](/javascript/api/azure-maps-control/atlas.textoptions)-- [HTML marker class](/javascript/api/azure-maps-control/atlas.htmlmarker)-- [HTML marker options](/javascript/api/azure-maps-control/atlas.htmlmarkeroptions)
+* [Create a data source]
+* [Add a Symbol layer]
+* [Add HTML Markers]
+* [Use data-driven style expressions]
+* [Symbol layer icon options]
+* [Symbol layer text option]
+* [HTML marker class]
+* [HTML marker options]
### Adding a polyline
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
![Azure Maps polyline](media/migrate-google-maps-web-app/azure-maps-polyline.png)
-**Additional resources:**
+**More resources:**
-- [Add lines to the map](map-add-line-layer.md)-- [Line layer options](/javascript/api/azure-maps-control/atlas.linelayeroptions)-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add lines to the map]
+* [Line layer options]
+* [Use data-driven style expressions]
### Adding a polygon
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
![Azure Maps polygon](media/migrate-google-maps-web-app/azure-maps-polygon.png)
-**Additional resources:**
+**More resources:**
-- [Add a polygon to the map](map-add-shape.md)-- [Add a circle to the map](map-add-shape.md#add-a-circle-to-the-map)-- [Polygon layer options](/javascript/api/azure-maps-control/atlas.polygonlayeroptions)-- [Line layer options](/javascript/api/azure-maps-control/atlas.linelayeroptions)-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add a polygon to the map]
+* [Add a circle to the map]
+* [Polygon layer options]
+* [Line layer options]
+* [Use data-driven style expressions]
### Display an info window
map.events.add('click', marker, function () {
> [!NOTE]
> You may do the same thing with a symbol, bubble, line or polygon layer by passing the chosen layer to the map's event code instead of a marker.
-**Additional resources:**
+**More resources:**
-- [Add a popup](map-add-popup.md)-- [Popup with Media Content](https://samples.azuremaps.com/?sample=popup-with-media-content)-- [Popups on Shapes](https://samples.azuremaps.com/?sample=popups-on-shapes)-- [Reusing Popup with Multiple Pins](https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins)-- [Popup class](/javascript/api/azure-maps-control/atlas.popup)-- [Popup options](/javascript/api/azure-maps-control/atlas.popupoptions)
+* [Add a popup]
+* [Popup with Media Content]
+* [Popups on Shapes]
+* [Reusing Popup with Multiple Pins]
+* [Popup class]
+* [Popup options]
### Import a GeoJSON file
-Google Maps supports loading and dynamically styling GeoJSON data via the `google.maps.Data` class. The functionality of this class aligns much more with the data-driven styling of Azure Maps. But, there's a key difference. With Google Maps, you specify a callback function. The business logic for styling each feature it processed individually in the UI thread. But in Azure Maps, layers support specifying data-driven expressions as styling options. These expressions are processed at render time on a separate thread. The Azure Maps approach improves rendering performance. This advantage is noticed when larger data sets need to be rendered quickly.
+Google Maps supports loading and dynamically styling GeoJSON data via the `google.maps.Data` class. The functionality of this class aligns more with the data-driven styling of Azure Maps. But, there's a key difference. With Google Maps, you specify a callback function, and the business logic for styling each feature is processed individually in the UI thread. But in Azure Maps, layers support specifying data-driven expressions as styling options. These expressions are processed at render time on a separate thread. The Azure Maps approach improves rendering performance. This advantage is noticed when larger data sets need to be rendered quickly.
-The following examples load a GeoJSON feed of all earthquakes over the last seven days from the USGS. Earthquakes data renders as scaled circles on the map. The color and scale of each circle is based on the magnitude of each earthquake, which is stored in the `"mag"` property of each feature in the data set. If the magnitude is greater than or equal to five, the circle will be red. If it's greater or equal to three, but less than five, the circle will be orange. If it's less than three, the circle will be green. The radius of each circle will be the exponential of the magnitude multiplied by 0.1.
+The following examples load a GeoJSON feed of all earthquakes over the last seven days from the USGS. Earthquakes data renders as scaled circles on the map. The color and scale of each circle is based on the magnitude of each earthquake, which is stored in the `"mag"` property of each feature in the data set. If the magnitude is greater than or equal to five, the circle is red. If it's greater than or equal to three, but less than five, the circle is orange. If it's less than three, the circle is green. The radius of each circle is the exponential of the magnitude multiplied by 0.1.
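Before the two implementations, here's a hedged sketch of how that magnitude-based styling can be expressed as data-driven expressions in Azure Maps. It assumes a `datasource` already containing the earthquake GeoJSON; the exact expression syntax shown is an assumption, and the full working version appears in the After example.

```javascript
//Bubble layer with color and radius driven by the 'mag' property of each feature.
map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
    color: [
        'case',
        ['>=', ['get', 'mag'], 5], 'red',
        ['>=', ['get', 'mag'], 3], 'orange',
        'green'
    ],
    //Radius is the exponential of the magnitude multiplied by 0.1.
    radius: ['*', ['^', ['e'], ['get', 'mag']], 0.1]
}));
```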
#### Before: Google Maps
GeoJSON is the base data type in Azure Maps. Import it into a data source using
![Azure Maps GeoJSON](media/migrate-google-maps-web-app/azure-maps-geojson.png)
-**Additional resources:**
+**More resources:**
-* [Add a Symbol layer](map-add-pin.md)
-* [Add a Bubble layer](map-add-bubble-layer.md)
-* [Cluster point data](clustering-point-data-web-sdk.md)
-* [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add a Symbol layer]
+* [Add a Bubble layer]
+* [Clustering point data in the Web SDK]
+* [Use data-driven style expressions]
### Marker clustering
Add and manage data in a data source. Connect data sources and layers, then rend
* `clusterMaxZoom` - The maximum zoom level in which clustering occurs. If you zoom in more than this level, all points are rendered as symbols.
* `clusterProperties` - Defines custom properties that are calculated using expressions against all the points within each cluster and added to the properties of each cluster point.
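Here's a minimal, hedged sketch of enabling clustering when creating a data source; the option values are illustrative assumptions.

```javascript
//Create a data source with clustering enabled.
var datasource = new atlas.source.DataSource(null, {
    cluster: true,
    clusterRadius: 45,   //Cluster points that are within 45 pixels of each other.
    clusterMaxZoom: 15   //Stop clustering when zoomed in past level 15.
});
map.sources.add(datasource);
```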
-When clustering is enabled, the data source will send clustered and unclustered data points to layers for rendering. The data source is capable of clustering hundreds of thousands of data points. A clustered data point has the following properties:
+When clustering is enabled, the data source sends clustered and unclustered data points to layers for rendering. The data source is capable of clustering hundreds of thousands of data points. A clustered data point has the following properties:
| Property name | Type | Description |
|---------------|------|-------------|
| `cluster` | boolean | Indicates if feature represents a cluster. |
| `cluster_id` | string | A unique ID for the cluster that can be used with the DataSource `getClusterExpansionZoom`, `getClusterChildren`, and `getClusterLeaves` methods. |
| `point_count` | number | The number of points the cluster contains. |
-| `point_count_abbreviated` | string | A string that abbreviates the `point_count` value if it is long. (for example, 4,000 becomes 4K) |
+| `point_count_abbreviated` | string | A string that abbreviates the `point_count` value if it's long. (for example, 4,000 becomes 4K) |
The `DataSource` class has the following helper functions for accessing additional information about a cluster using the `cluster_id`.

| Method | Return type | Description |
|--------|-------------|-------------|
-| `getClusterChildren(clusterId: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters will be features with properties matching ClusteredProperties. |
-| `getClusterExpansionZoom(clusterId: number)` | Promise&lt;number&gt; | Calculates a zoom level at which the cluster will start expanding or break apart. |
+| `getClusterChildren(clusterId: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters are features with properties matching ClusteredProperties. |
+| `getClusterExpansionZoom(clusterId: number)` | Promise&lt;number&gt; | Calculates a zoom level at which the cluster starts expanding or breaking apart. |
| `getClusterLeaves(clusterId: number, limit: number, offset: number)` | Promise&lt;Array&lt;Feature&lt;Geometry, any&gt; \| Shape&gt;&gt; | Retrieves all points in a cluster. Set the `limit` to return a subset of the points, and use the `offset` to page through the points. |
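For example, here's a hedged sketch of a common pattern built on these methods: when a user selects a cluster, use `getClusterExpansionZoom` to zoom the map just far enough to break the cluster apart. The `clusterBubbleLayer` and `datasource` variables are assumptions.

```javascript
//When a cluster is clicked, zoom the map in so the cluster starts to break apart.
map.events.add('click', clusterBubbleLayer, function (e) {
    if (e && e.shapes && e.shapes[0].properties.cluster) {
        datasource.getClusterExpansionZoom(e.shapes[0].properties.cluster_id).then(function (zoom) {
            map.setCamera({
                center: e.shapes[0].geometry.coordinates,
                zoom: zoom
            });
        });
    }
});
```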
-When rendering clustered data on the map, it's often best to use two or more layers. The following example uses three layers. A bubble layer for drawing scaled colored circles based on the size of the clusters. A symbol layer to render the cluster size as text. And, it uses a second symbol layer for rendering the unclustered points. There are many other ways to render clustered data. For more information, see the [Cluster point data](clustering-point-data-web-sdk.md) documentation.
+When rendering clustered data on the map, it's often best to use two or more layers. The following example uses three layers: a bubble layer for drawing scaled, colored circles based on the size of the clusters; a symbol layer to render the cluster size as text; and a second symbol layer for rendering the unclustered points. For more information on other ways to render clustered data, see [Clustering point data in the Web SDK].
Directly import GeoJSON data using the `importDataFromUrl` function on the `DataSource` class, inside Azure Maps map.
Directly import GeoJSON data using the `importDataFromUrl` function on the `Data
![Azure Maps clustering](media/migrate-google-maps-web-app/azure-maps-clustering.png)
-**Additional resources:**
+**More resources:**
-* [Add a Symbol layer](map-add-pin.md)
-* [Add a Bubble layer](map-add-bubble-layer.md)
-* [Cluster point data](clustering-point-data-web-sdk.md)
-* [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add a Symbol layer]
+* [Add a Bubble layer]
+* [Clustering point data in the Web SDK]
+* [Use data-driven style expressions]
### Add a heat map
To create a heat map, load the "visualization" library by adding `&libraries=vis
#### After: Azure Maps
-Load the GeoJSON data into a data source and connect the data source to a heat map layer. The property that will be used for the weight can be passed into the `weight` option using an expression. Directly import GeoJSON data to Azure Maps using the `importDataFromUrl` function on the `DataSource` class.
+Load the GeoJSON data into a data source and connect the data source to a heat map layer. The property that is used for the weight can be passed into the `weight` option using an expression. Directly import GeoJSON data to Azure Maps using the `importDataFromUrl` function on the `DataSource` class.
```html <!DOCTYPE html>
Load the GeoJSON data into a data source and connect the data source to a heat m
![Azure Maps heat map](media/migrate-google-maps-web-app/azure-maps-heatmap.png)
-**Additional resources:**
+**More resources:**
-- [Add a heat map layer](map-add-heat-map-layer.md)-- [Heat map layer class](/javascript/api/azure-maps-control/atlas.layer.heatmaplayer)-- [Heat map layer options](/javascript/api/azure-maps-control/atlas.heatmaplayeroptions)-- [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
+* [Add a heat map layer]
+* [Heat map layer class]
+* [Heat map layer options]
+* [Use data-driven style expressions]
### Overlay a tile layer
map.layers.add(new atlas.layer.TileLayer({
> [!TIP]
> Tile requests can be captured using the `transformRequest` option of the map. This allows you to modify or add headers to the request if desired.
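A hedged sketch of the `transformRequest` pattern the tip describes; the header name and value are hypothetical.

```javascript
//Intercept tile requests and attach a custom header.
var map = new atlas.Map('myMap', {
    transformRequest: function (url, resourceType) {
        if (resourceType === 'Tile') {
            return {
                url: url,
                headers: { 'x-custom-header': 'my-value' }  //Hypothetical header.
            };
        }
        return { url: url };
    },
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your-Azure-Maps-Subscription-key>'
    }
});
```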
-**Additional resources:**
+**More resources:**
-- [Add tile layers](map-add-tile-layer.md)-- [Tile layer class](/javascript/api/azure-maps-control/atlas.layer.tilelayer)-- [Tile layer options](/javascript/api/azure-maps-control/atlas.tilelayeroptions)
+* [Add tile layers]
+* [Tile layer class]
+* [Tile layer options]
### Show traffic data
map.setTraffic({
![Azure Maps traffic](media/migrate-google-maps-web-app/azure-maps-traffic.png)
-If you click on one of the traffic icons in Azure Maps, additional information is displayed in a popup.
+If you select one of the traffic icons in Azure Maps, more information is displayed in a popup.
![Azure Maps traffic incident](media/migrate-google-maps-web-app/azure-maps-traffic-incident.png)
-**Additional resources:**
+**More resources:**
-* [Show traffic on the map](map-show-traffic.md)
-* [Traffic overlay options](https://samples.azuremaps.com/?sample=traffic-overlay-options)
+* [Show traffic on the map]
+* [Traffic overlay options]
### Add a ground overlay
-Both Azure and Google maps support overlaying georeferenced images on the map. Georeferenced images move and scale as you pan and zoom the map. In Google Maps, georeferenced images are known as ground overlays while in Azure Maps they're referred to as image layers. They are great for building floor plans, overlaying old maps, or imagery from a drone.
+Both Azure and Google maps support overlaying georeferenced images on the map. Georeferenced images move and scale as you pan and zoom the map. In Google Maps, georeferenced images are known as ground overlays while in Azure Maps they're referred to as image layers. They're great for overlaying building floor plans, old maps, or drone imagery.
#### Before: Google Maps
Specify the URL to the image you want to overlay and a bounding box to bind the
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Google Maps image overlay](media/migrate-google-maps-web-app/google-maps-image-overlay.png)
Running this code in a browser will display a map that looks like the following
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This class requires a URL to an image and a set of coordinates for the four corners of the image. The image must be hosted either on the same domain or have CORS enabled.
> [!TIP]
-> If you only have north, south, east, west and rotation information, and you do not have coordinates for each corner of the image, you can use the static [`atlas.layer.ImageLayer.getCoordinatesFromEdges`](/javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-) method.
+> If you only have north, south, east, west and rotation information, and you do not have coordinates for each corner of the image, you can use the static [`atlas.layer.ImageLayer.getCoordinatesFromEdges`] method.
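As a hedged sketch of that helper, the edge values and image URL below are illustrative assumptions:

```javascript
//Calculate the four corner coordinates from north, south, east, west edges and a rotation of 0 degrees.
var corners = atlas.layer.ImageLayer.getCoordinatesFromEdges(40.76, 40.74, -73.97, -73.99, 0);

map.layers.add(new atlas.layer.ImageLayer({
    url: 'myGeoreferencedImage.png',  //Hypothetical image URL.
    coordinates: corners
}));
```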
```html <!DOCTYPE html>
Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla
![Azure Maps image overlay](media/migrate-google-maps-web-app/azure-maps-image-overlay.png)
-**Additional resources:**
+**More resources:**
-- [Overlay an image](map-add-image-layer.md)-- [Image layer class](/javascript/api/azure-maps-control/atlas.layer.imagelayer)
+* [Overlay an image]
+* [Image layer class]
### Add KML data to the map
-Both Azure and Google maps can import and render KML, KMZ and GeoRSS data on the map. Azure Maps also supports GPX, GML, spatial CSV files, GeoJSON, Well Known Text (WKT), Web-Mapping Services (WMS), Web-Mapping Tile Services (WMTS), and Web Feature Services (WFS). Azure Maps reads the files locally into memory and in most cases can handle much larger KML files.
+Both Azure and Google maps can import and render KML, KMZ and GeoRSS data on the map. Azure Maps also supports GPX, GML, spatial CSV files, GeoJSON, Well Known Text (WKT), Web-Mapping Services (WMS), Web-Mapping Tile Services (WMTS), and Web Feature Services (WFS). Azure Maps reads the files locally into memory and in most cases can handle larger KML files.
#### Before: Google Maps
Both Azure and Google maps can import and render KML, KMZ and GeoRSS data on the
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Google Maps KML](media/migrate-google-maps-web-app/google-maps-kml.png)
#### After: Azure Maps
-In Azure Maps, GeoJSON is the main data format used in the web SDK, additional spatial data formats can be easily integrated in using the [spatial IO module](/javascript/api/azure-maps-spatial-io/). This module has functions for both reading and writing spatial data and also includes a simple data layer which can easily render data from any of these spatial data formats. To read the data in a spatial data file, pass in a URL, or raw data as string or blob into the `atlas.io.read` function. This will return all the parsed data from the file that can then be added to the map. KML is a bit more complex than most spatial data format as it includes a lot more styling information. The `SpatialDataLayer` class supports rendering majority of these styles, however icons images have to be loaded into the map before loading the feature data, and ground overlays have to be added as layers to the map separately. When loading data via a URL, it should be hosted on a CORs enabled endpoint, or a proxy service should be passed in as an option into the read function.
+In Azure Maps, GeoJSON is the main data format used in the web SDK. More spatial data formats can be easily integrated using the [spatial IO module]. This module has functions for both reading and writing spatial data and also includes a simple data layer that can easily render data from any of these spatial data formats. To read the data in a spatial data file, pass a URL, or raw data as a string or blob, into the `atlas.io.read` function. This returns all the parsed data from the file, which can then be added to the map. KML is a bit more complex than most spatial data formats as it includes a lot more styling information. The `SpatialDataLayer` class supports most of these styles; however, icon images have to be loaded into the map before loading the feature data, and ground overlays have to be added as layers to the map separately. When loading data via a URL, it should be hosted on a CORS enabled endpoint, or a proxy service should be passed in as an option into the read function.
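As a minimal, hedged sketch of that read pattern (the file URL is an assumption; the full working example follows):

```javascript
map.events.add('ready', function () {
    //Create a data source and a simple data layer to render the parsed shapes.
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);
    map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

    //Read a hypothetical KML file and add the parsed data to the data source.
    atlas.io.read('myFile.kml').then(function (r) {
        if (r) {
            datasource.add(r);
        }
    });
});
```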
```javascript <!DOCTYPE html>
In Azure Maps, GeoJSON is the main data format used in the web SDK, additional s
</html> ```
-![Azure Maps KML](media/migrate-google-maps-web-app/azure-maps-kml.png)</center>
+![Azure Maps KML](media/migrate-google-maps-web-app/azure-maps-kml.png)
-**Additional resources:**
+**More resources:**
-- [atlas.io.read function](/javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-)-- [SimpleDataLayer](/javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer)-- [SimpleDataLayerOptions](/javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions)
+* [atlas.io.read function]
+* [SimpleDataLayer]
+* [SimpleDataLayerOptions]
-## Additional code samples
+## More code samples
-The following are some additional code samples related to Google Maps migration:
+The following are some more code samples related to Google Maps migration:
-* [Drawing tools](map-add-drawing-toolbar.md)
-* [Limit Map to Two Finger Panning](https://samples.azuremaps.com/?sample=limit-map-to-two-finger-panning)
-* [Limit Scroll Wheel Zoom](https://samples.azuremaps.com/?sample=limit-scroll-wheel-zoom)
-* [Create a Fullscreen Control](https://samples.azuremaps.com/?sample=fullscreen-control)
+* [Drawing tools]
+* [Limit Map to Two Finger Panning]
+* [Limit Scroll Wheel Zoom]
+* [Create a Fullscreen Control]
-* [Using the Azure Maps services module](how-to-use-services-module.md)
-* [Search for points of interest](map-search-location.md)
-* [Get information from a coordinate (reverse geocode)](map-get-information-from-coordinate.md)
-* [Show directions from A to B](map-route.md)
-* [Search Autosuggest with JQuery UI](https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui)
+* [Using the Azure Maps services module]
+* [Search for points of interest]
+* [Get information from a coordinate (reverse geocode)]
+* [Show directions from A to B]
+* [Search Autosuggest with JQuery UI]
## Google Maps V3 to Azure Maps Web SDK class mapping
The following appendix provides a cross reference of the commonly used classes i
| `google.maps.PolygonOptions` | [atlas.layer.PolygonLayer](/javascript/api/azure-maps-control/atlas.layer.polygonlayer)<br/>[atlas.PolygonLayerOptions](/javascript/api/azure-maps-control/atlas.polygonlayeroptions)<br/>[atlas.layer.LineLayer](/javascript/api/azure-maps-control/atlas.layer.linelayer)<br/>[atlas.LineLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions) |
| `google.maps.Polyline` | [atlas.data.LineString](/javascript/api/azure-maps-control/atlas.data.linestring) |
| `google.maps.PolylineOptions` | [atlas.layer.LineLayer](/javascript/api/azure-maps-control/atlas.layer.linelayer)<br/>[atlas.LineLayerOptions](/javascript/api/azure-maps-control/atlas.linelayeroptions) |
-| `google.maps.Circle` | See [Add a circle to the map](map-add-shape.md#add-a-circle-to-the-map) |
+| `google.maps.Circle` | See [Add a circle to the map] |
| `google.maps.ImageMapType` | [atlas.TileLayer](/javascript/api/azure-maps-control/atlas.layer.tilelayer) |
| `google.maps.ImageMapTypeOptions` | [atlas.TileLayerOptions](/javascript/api/azure-maps-control/atlas.tilelayeroptions) |
| `google.maps.GroundOverlay` | [atlas.layer.ImageLayer](/javascript/api/azure-maps-control/atlas.layer.imagelayer)<br/>[atlas.ImageLayerOptions](/javascript/api/azure-maps-control/atlas.imagelayeroptions) |
The Azure Maps Web SDK includes a services module, which can be loaded separatel
| `google.maps.GeocoderRequest` | [atlas.SearchAddressOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressoptions)<br/>[atlas.SearchAddressReverseOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressreverseoptions)<br/>[atlas.SearchAddressReverseCrossStreetOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressreversecrossstreetoptions)<br/>[atlas.SearchAddressStructuredOptions](/javascript/api/azure-maps-rest/atlas.service.searchaddressstructuredoptions)<br/>[atlas.SearchAlongRouteOptions](/javascript/api/azure-maps-rest/atlas.service.searchalongrouteoptions)<br/>[atlas.SearchFuzzyOptions](/javascript/api/azure-maps-rest/atlas.service.searchfuzzyoptions)<br/>[atlas.SearchInsideGeometryOptions](/javascript/api/azure-maps-rest/atlas.service.searchinsidegeometryoptions)<br/>[atlas.SearchNearbyOptions](/javascript/api/azure-maps-rest/atlas.service.searchnearbyoptions)<br/>[atlas.SearchPOIOptions](/javascript/api/azure-maps-rest/atlas.service.searchpoioptions)<br/>[atlas.SearchPOICategoryOptions](/javascript/api/azure-maps-rest/atlas.service.searchpoicategoryoptions) |
| `google.maps.DirectionsService` | [atlas.service.RouteUrl](/javascript/api/azure-maps-rest/atlas.service.routeurl) |
| `google.maps.DirectionsRequest` | [atlas.CalculateRouteDirectionsOptions](/javascript/api/azure-maps-rest/atlas.service.calculateroutedirectionsoptions) |
-| `google.maps.places.PlacesService` | [atlas.service.SearchUrl](/javascript/api/azure-maps-rest/atlas.service.searchurl) |
+| `google.maps.places.PlacesService` | [atlas.service.SearchUrl](/javascript/api/azure-maps-rest/atlas.service.searchurl) |
## Libraries
-Libraries add additional functionality to the map. Many of these libraries are in
+Libraries add more functionality to the map. Many of these libraries are in
the core SDK of Azure Maps. Here are some equivalent classes to use in place of these Google Maps libraries:
Learn more about migrating to Azure Maps:
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[free account]: https://azure.microsoft.com/free/
[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[road tiles]: /rest/api/maps/render/getmaptile
+[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+
+[Cesium documentation]: https://www.cesium.com/
+[Leaflet code sample]: https://samples.azuremaps.com/?sample=render-azure-maps-in-leaflet
+[Leaflet documentation]: https://leafletjs.com/
+[OpenLayers documentation]: https://openlayers.org/
+
+[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps
+[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components
+[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps
+[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
+
+[*atlas.data* namespace]: /javascript/api/azure-maps-control/atlas.data
+[*atlas.Shape*]: /javascript/api/azure-maps-control/atlas.shape
+[atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
+
+[npm module]: how-to-use-map-control.md
+
+[Load a map]: #load-a-map
+[Localizing the map]: #localizing-the-map
+[Setting the map view]: #setting-the-map-view
+[Adding a marker]: #adding-a-marker
+[Adding a custom marker]: #adding-a-custom-marker
+[Adding a polyline]: #adding-a-polyline
+[Adding a polygon]: #adding-a-polygon
+[Display an info window]: #display-an-info-window
+[Import a GeoJSON file]: #import-a-geojson-file
+[Marker clustering]: #marker-clustering
+[Add a heat map]: #add-a-heat-map
+[Overlay a tile layer]: #overlay-a-tile-layer
+[Show traffic data]: #show-traffic-data
+[Add a ground overlay]: #add-a-ground-overlay
+[Add KML data to the map]: #add-kml-data-to-the-map
+
+[Use the Azure Maps map control]: how-to-use-map-control.md
+[Add controls to a map]: map-add-controls.md
+[Localization support in Azure Maps]: supported-languages.md
+
+[Choose a map style]: choose-map-style.md
+[Supported map styles]: supported-map-styles.md
+
+[Create a data source]: create-data-source-web-sdk.md
+[Add a Symbol layer]: map-add-pin.md
+[Add a Bubble layer]: map-add-bubble-layer.md
+[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md
+[Add HTML Markers]: map-add-custom-html.md
+[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[Symbol layer icon options]: /javascript/api/azure-maps-control/atlas.iconoptions
+[Symbol layer text option]: /javascript/api/azure-maps-control/atlas.textoptions
+[HTML marker class]: /javascript/api/azure-maps-control/atlas.htmlmarker
+[HTML marker options]: /javascript/api/azure-maps-control/atlas.htmlmarkeroptions
+
+[Add lines to the map]: map-add-line-layer.md
+[Line layer options]: /javascript/api/azure-maps-control/atlas.linelayeroptions
+
+[Add a polygon to the map]: map-add-shape.md
+[Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
+[Polygon layer options]: /javascript/api/azure-maps-control/atlas.polygonlayeroptions
+
+[Add a popup]: map-add-popup.md
+[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content
+[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes
+[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins
+[Popup class]: /javascript/api/azure-maps-control/atlas.popup
+[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
+[spatial IO module]: /javascript/api/azure-maps-spatial-io/
+
+[Add a heat map layer]: map-add-heat-map-layer.md
+[Heat map layer class]: /javascript/api/azure-maps-control/atlas.layer.heatmaplayer
+[Heat map layer options]: /javascript/api/azure-maps-control/atlas.heatmaplayeroptions
+
+[Add tile layers]: map-add-tile-layer.md
+[Tile layer class]: /javascript/api/azure-maps-control/atlas.layer.tilelayer
+[Tile layer options]: /javascript/api/azure-maps-control/atlas.tilelayeroptions
+
+[Show traffic on the map]: map-show-traffic.md
+[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options
+
+[`atlas.layer.ImageLayer.getCoordinatesFromEdges`]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
+[Overlay an image]: map-add-image-layer.md
+[Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
+
+[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-
+[SimpleDataLayer]: /javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer
+[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions
+[Drawing tools]: map-add-drawing-toolbar.md
+[Limit Map to Two Finger Panning]: https://samples.azuremaps.com/?sample=limit-map-to-two-finger-panning
+[Limit Scroll Wheel Zoom]: https://samples.azuremaps.com/?sample=limit-scroll-wheel-zoom
+[Create a Fullscreen Control]: https://samples.azuremaps.com/?sample=fullscreen-control
+[Using the Azure Maps services module]: how-to-use-services-module.md
+[Search for points of interest]: map-search-location.md
+[Get information from a coordinate (reverse geocode)]: map-get-information-from-coordinate.md
+[Show directions from A to B]: map-route.md
+[Search Autosuggest with JQuery UI]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Both Azure and Google Maps provide access to spatial APIs through REST web services. The API interfaces of these platforms offer similar functionality, but they each use different naming conventions and response objects.
-In this tutorial, you will learn how to:
+This tutorial demonstrates how to:
> [!div class="checklist"] > * Forward and reverse geocoding
In this tutorial, you will learn how to:
> * Calculate a distance matrix > * Get time zone details
-You will also learn:
+You'll also learn:
> [!div class="checklist"] > * Which Azure Maps REST service when migrating from a Google Maps Web Service
You will also learn:
The table shows the Azure Maps service APIs that offer similar functionality to the listed Google Maps service APIs.
-| Google Maps service API | Azure Maps service API |
-|-|--|
-| Directions | [Route](/rest/api/maps/route) |
-| Distance Matrix | [Route Matrix](/rest/api/maps/route/postroutematrixpreview) |
-| Geocoding | [Search](/rest/api/maps/search) |
-| Places Search | [Search](/rest/api/maps/search) |
-| Place Autocomplete | [Search](/rest/api/maps/search) |
-| Snap to Road | See [Calculate routes and directions](#calculate-routes-and-directions) section. |
-| Speed Limits | See [Reverse geocode a coordinate](#reverse-geocode-a-coordinate) section. |
-| Static Map | [Render](/rest/api/maps/render/getmapimage) |
-| Time Zone | [Time Zone](/rest/api/maps/timezone) |
-| Elevation | [Elevation](/rest/api/maps/elevation)<sup>1</sup> |
-
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+| Google Maps service API | Azure Maps service API |
+|-|-|
+| Directions | [Route] |
+| Distance Matrix | [Route Matrix] |
+| Geocoding | [Search] |
+| Places Search | [Search] |
+| Place Autocomplete | [Search] |
+| Snap to Road | See [Calculate routes and directions] section. |
+| Speed Limits | See [Reverse geocode a coordinate] section. |
+| Static Map | [Render] |
+| Time Zone | [Time Zone] |
+| Elevation | [Elevation]<sup>1</sup> |
+
+<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information on how to include this functionality in your Azure Maps applications, see [Create elevation data & services](elevation-data-services.md).
The following service APIs aren't currently available in Azure Maps:
-- Geolocation - Azure Maps does have a service called Geolocation, but it provides IP Address to location information, but does not currently support cell tower or WiFi triangulation.
-- Places details and photos - Phone numbers and website URL are available in the Azure Maps search API.
-- Map URLs
-- Nearest Roads - This is achievable using the Web SDK as shown [here](https://samples.azuremaps.com/?sample=basic-snap-to-road-logic), but not available as a service currently.
-- Static street view
+* Geolocation - Azure Maps does have a service called Geolocation, but it only provides IP address to location information; it doesn't currently support cell tower or WiFi triangulation.
+* Places details and photos - Phone numbers and website URL are available in the Azure Maps search API.
+* Map URLs
+* Nearest Roads - This is achievable using the Web SDK as demonstrated in the [Basic snap to road logic] sample, but is not currently available as a service.
+* Static street view
Azure Maps has several other REST web services that may be of interest:
-- [Spatial operations](/rest/api/maps/spatial): Offload complex spatial calculations and operations, such as geofencing, to a service.
-- [Traffic](/rest/api/maps/traffic): Access real-time traffic flow and incident data.
+* [Spatial operations]: Offload complex spatial calculations and operations, such as geofencing, to a service.
+* [Traffic]: Access real-time traffic flow and incident data.
## Prerequisites
Geocoding is the process of converting an address into a coordinate. For example
Azure Maps provides several methods for geocoding addresses:
-- [**Free-form address geocoding**](/rest/api/maps/search/getsearchaddress): Specify a single address string and process the request immediately. "1 Microsoft way, Redmond, WA" is an example of a single address string. This API is recommended if you need to geocode individual addresses quickly.
-- [**Structured address geocoding**](/rest/api/maps/search/getsearchaddressstructured): Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This API is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
-- [**Batch address geocoding**](/rest/api/maps/search/postsearchaddressbatchpreview): Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses will be geocoded in parallel on the server and when completed the full result set can be downloaded. This is recommended for geocoding large data sets.
-- [**Fuzzy search**](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string. This string can be an address, place, landmark, point of interest, or point of interest category. This API process the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
-- [**Fuzzy batch search**](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* **[Free-form address geocoding]**: Specify a single address string and process the request immediately. "1 Microsoft way, Redmond, WA" is an example of a single address string. This API is recommended if you need to geocode individual addresses quickly.
+* **[Structured address geocoding]**: Specify the parts of a single address, such as the street name, city, country/region, and postal code and process the request immediately. This API is recommended if you need to geocode individual addresses quickly and the data is already parsed into its individual address parts.
+* **[Batch address geocoding]**: Create a request containing up to 10,000 addresses and have them processed over a period of time. All the addresses are geocoded in parallel on the server and when completed the full result set can be downloaded. This is recommended for geocoding large data sets.
+* **[Fuzzy search]**: This API combines address geocoding with point of interest search. It takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category, and processes the request in near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
+* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or points of interest and have them processed over a period of time. All the data is processed in parallel on the server and when completed the full result set can be downloaded.
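As an illustration, here's a hedged sketch of calling the free-form address geocoding endpoint from JavaScript; the query text and key placeholder are assumptions, and error handling is omitted.

```javascript
//Build a free-form address geocoding request.
var url = 'https://atlas.microsoft.com/search/address/json?api-version=1.0' +
    '&query=' + encodeURIComponent('1 Microsoft Way, Redmond, WA') +
    '&subscription-key=<Your-Azure-Maps-Subscription-key>';

fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (result) {
        //The first result contains the best-match coordinate.
        console.log(result.results[0].position);
    });
```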
The following table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
The following table cross-references the Google Maps API parameters with the com
| `address` | `query` |
| `bounds` | `topLeft` and `btmRight` |
| `components` | `streetNumber`<br/>`streetName`<br/>`crossStreet`<br/>`postalCode`<br/>`municipality` - city / town<br/>`municipalitySubdivision` – neighborhood, sub / super city<br/>`countrySubdivision` - state or province<br/>`countrySecondarySubdivision` - county<br/>`countryTertiarySubdivision` - district<br/>`countryCode` - two letter country/region code |
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `region` | `countrySet` |
-An example of how to use the search service is documented [here](how-to-search-for-address.md). Be sure to review [best practices for search](how-to-use-best-practices-for-search.md).
+For more information on using the search service, see [Search for a location using Azure Maps Search services]. Be sure to review [best practices for search].
> [!TIP] > The free-form address geocoding and fuzzy search APIs can be used in autocomplete mode by adding `&typeahead=true` to the request URL. This will tell the server that the input text is likely partial, and the search will go into predictive mode.
Reverse geocoding is the process of converting geographic coordinates into an ap
Azure Maps provides several reverse geocoding methods:
-- [**Address reverse geocoder**](/rest/api/maps/search/getsearchaddressreverse): Specify a single geographic coordinate to get the approximate address corresponding to this coordinate. Processes the request near real time.
-- [**Cross street reverse geocoder**](/rest/api/maps/search/getsearchaddressreversecrossstreet): Specify a single geographic coordinate to get nearby cross street information and process the request immediately. For example, you may receive the following cross streets 1st Ave and Main St.
-- [**Batch address reverse geocoder**](/rest/api/maps/search/postsearchaddressreversebatchpreview): Create a request containing up to 10,000 coordinates and have them processed over a period of time. All data will be processed in parallel on the server. When the request completes, you can download the full set of results.
+* **[Address reverse geocoder]**: Specify a single geographic coordinate to get the approximate address corresponding to this coordinate. Processes the request near real time.
+* **[Cross street reverse geocoder]**: Specify a single geographic coordinate to get nearby cross street information and process the request immediately. For example, you may receive the following cross streets: 1st Ave and Main St.
+* **[Batch address reverse geocoder]**: Create a request containing up to 10,000 coordinates and have them processed over a period of time. All data is processed in parallel on the server. When the request completes, you can download the full set of results.
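As a hedged sketch of the single-coordinate case (coordinate values and key placeholder are assumptions):

```javascript
//Build an address reverse geocoding request. The query is latitude,longitude.
var url = 'https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0' +
    '&query=40.747,-73.985' +
    '&subscription-key=<Your-Azure-Maps-Subscription-key>';

fetch(url)
    .then(function (response) { return response.json(); })
    .then(function (result) {
        //The first address contains the best-match free-form address string.
        console.log(result.addresses[0].address.freeformAddress);
    });
```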
This table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.

| Google Maps API parameter | Comparable Azure Maps API parameter |
|---------------------------|-------------------------------------|
-| `key` | `subscription-key` – See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` – See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `latlng` | `query` |
| `location_type` | *N/A* |
| `result_type` | `entityType` |
-Review [best practices for search](how-to-use-best-practices-for-search.md).
+For more information, see [best practices for search].
The Azure Maps reverse geocoding API has some other features, which aren't available in Google Maps. These features might be useful to integrate with your application, as you migrate your app:
Point of interest data can be searched in Google Maps using the Places Search AP
Azure Maps provides several search APIs for points of interest:
-- [**POI search**](/rest/api/maps/search/getsearchpoi): Search for points of interests by name. For example, "Starbucks".
-- [**POI category search**](/rest/api/maps/search/getsearchpoicategory): Search for points of interests by category. For example, "restaurant".
-- [**Nearby search**](/rest/api/maps/search/getsearchnearby): Searches for points of interests that are within a certain distance of a location.
-- [**Fuzzy search**](/rest/api/maps/search/getsearchfuzzy): This API combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category. It processes the request near real time. This API is recommended for applications where users search for addresses or points of interest in the same textbox.
-- [**Search within geometry**](/rest/api/maps/search/postsearchinsidegeometry): Search for points of interests that are within a specified geometry. For example, search a point of interest within a polygon.
-- [**Search along route**](/rest/api/maps/search/postsearchalongroute): Search for points of interests that are along a specified route path.
-- [**Fuzzy batch search**](/rest/api/maps/search/postsearchfuzzybatchpreview): Create a request containing up to 10,000 addresses, places, landmarks, or point of interests. Processed the request over a period of time. All data will be processed in parallel on the server. When the request completes processing, you can download the full set of result.
+* **[POI search]**: Search for points of interest by name. For example, "Starbucks". A sample request appears after this list.
+* **[POI category search]**: Search for points of interest by category. For example, "restaurant".
+* **[Nearby search]**: Searches for points of interest that are within a certain distance of a location.
+* **[Fuzzy search]**: Combines address geocoding with point of interest search. This API takes in a free-form string that can be an address, place, landmark, point of interest, or point of interest category, and processes the request in near real time. It's recommended for applications where users search for addresses or points of interest in the same textbox.
+* **[Search within geometry]**: Search for points of interest that are within a specified geometry. For example, search for a point of interest within a polygon.
+* **[Search along route]**: Search for points of interest that are along a specified route path.
+* **[Fuzzy batch search]**: Create a request containing up to 10,000 addresses, places, landmarks, or points of interest, and have them processed over a period of time. All data is processed in parallel on the server. When the request completes processing, you can download the full set of results.
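As referenced in the POI search item, a minimal POI search request might look like the following sketch; the search term, coordinates, and `{subscription-key}` placeholder are illustrative:

```text
GET https://atlas.microsoft.com/search/poi/json?api-version=1.0&query=starbucks&lat=47.6062&lon=-122.3321&subscription-key={subscription-key}
```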
Currently Azure Maps doesn't have a comparable API to the Text Search API in Google Maps. > [!TIP] > The POI search, POI category search, and fuzzy search APIs can be used in autocomplete mode by adding `&typeahead=true` to the request URL. This tells the server that the input text is likely partial. The API then conducts the search in predictive mode.
-Review the [best practices for search](how-to-use-best-practices-for-search.md) documentation.
+For more information, see [best practices for search].
### Find place from text
-Use the Azure Maps [POI search](/rest/api/maps/search/getsearchpoi) and [Fuzzy search](/rest/api/maps/search/getsearchfuzzy) to search for points of interests by name or address.
+Use the Azure Maps [POI search] and [Fuzzy search] to search for points of interest by name or address.
The table cross-references the Google Maps API parameters with the comparable Azure Maps API parameters.
The table cross-references the Google Maps API parameters with the comparable Az
| `fields` | *N/A* | | `input` | `query` | | `inputtype` | *N/A* |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` ΓÇô See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `locationbias` | `lat`, `lon` and `radius`<br/>`topLeft` and `btmRight`<br/>`countrySet` | ### Nearby search
-Use the [Nearby search](/rest/api/maps/search/getsearchnearby) API to retrieve nearby points of interests, in Azure Maps.
+Use the [Nearby search] API to retrieve nearby points of interest in Azure Maps. A sample request follows the parameter table.
The table shows the Google Maps API parameters with the comparable Azure Maps API parameters. | Google Maps API parameter | Comparable Azure Maps API parameter | ||--|
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
| `keyword` | `categorySet` and `brandSet` |
-| `language` | `language` ΓÇô See [supported languages](supported-languages.md) documentation. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `location` | `lat` and `lon` | | `maxprice` | *N/A* | | `minprice` | *N/A* |
The table shows the Google Maps API parameters with the comparable Azure Maps AP
| `pagetoken` | `ofs` and `limit` | | `radius` | `radius` | | `rankby` | *N/A* |
-| `type` | `categorySet ΓÇô` See [supported search categories](supported-search-categories.md) documentation. |
+| `type` | `categorySet` – For more information, see [supported search categories]. |
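As referenced earlier in this section, a minimal nearby search request might look like the following sketch; the coordinates, radius, and `{subscription-key}` placeholder are illustrative:

```text
GET https://atlas.microsoft.com/search/nearby/json?api-version=1.0&lat=47.6062&lon=-122.3321&radius=5000&subscription-key={subscription-key}
```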
## Calculate routes and directions
Calculate routes and directions using Azure Maps. Azure Maps has many of the sam
The Azure Maps routing service provides the following APIs for calculating routes: -- [**Calculate route**](/rest/api/maps/route/getroutedirections): Calculate a route and have the request processed immediately. This API supports both GET and POST requests. POST requests are recommended when specifying a large number of waypoints or when using lots of the route options to ensure that the URL request doesn't become too long and cause issues. The POST Route Direction in Azure Maps has an option can that take in thousands of [supporting points](/rest/api/maps/route/postroutedirections#supportingpoints) and will use them to recreate a logical route path between them (snap to road). -- [**Batch route**](/rest/api/maps/route/postroutedirectionsbatchpreview): Create a request containing up to 1,000 route request and have them processed over a period of time. All the data will be processed in parallel on the server and when completed the full result set can be downloaded.
+* **[Calculate route]**: Calculate a route and have the request processed immediately. This API supports both `GET` and `POST` requests. `POST` requests are recommended when specifying a large number of waypoints or when using many of the route options, to ensure that the URL request doesn't become too long and cause issues. The `POST` Route Direction in Azure Maps has an option that can take in thousands of [supporting points] and use them to recreate a logical route path between them (snap to road). A sample request appears after this list.
+* **[Batch route]**: Create a request containing up to 1,000 route requests and have them processed over a period of time. All the data is processed in parallel on the server, and when completed, the full result set can be downloaded.
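As referenced in the Calculate route item, a minimal route request might look like the following sketch; the coordinate pairs and `{subscription-key}` placeholder are illustrative:

```text
GET https://atlas.microsoft.com/route/directions/json?api-version=1.0&query=47.6062,-122.3321:47.2529,-122.4443&travelMode=car&subscription-key={subscription-key}
```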
The table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps.
The table cross-references the Google Maps API parameters with the comparable AP
| `avoid` | `avoid` | | `departure_time` | `departAt` | | `destination` | `query` – coordinates in the format `"lat0,lon0:lat1,lon1…."` |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` ΓÇô See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `mode` | `travelMode` | | `optimize` | `computeBestOrder` | | `origin` | `query` |
Azure Maps routing API has other features that aren't available in Google Maps.
* Support for commercial vehicle route parameters, such as vehicle dimensions, weight, number of axles, and cargo type. * Specify maximum vehicle speed.
-In addition, the route service in Azure Maps supports [calculating routable ranges](/rest/api/maps/route/getrouterange). Calculating routable ranges is also known as isochrones. It entails generating a polygon covering an area that can be traveled to in any direction from an origin point. All under a specified amount of time or amount of fuel or charge.
+In addition, the route service in Azure Maps supports [calculating routable ranges], also known as isochrones: generating a polygon covering the area that can be traveled to in any direction from an origin point, within a specified amount of time, fuel, or charge.
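For illustration, a minimal routable range request might look like the following sketch; the origin coordinate, the 30-minute time budget, and the `{subscription-key}` placeholder are illustrative:

```text
GET https://atlas.microsoft.com/route/range/json?api-version=1.0&query=47.6062,-122.3321&timeBudgetInSec=1800&subscription-key={subscription-key}
```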
-Review the [best practices for routing](how-to-use-best-practices-for-routing.md) documentation.
+For more information, see [best practices for routing].
## Retrieve a map image
-Azure Maps provides an API for rendering the static map images with data overlaid. The [Map image render](/rest/api/maps/render/getmapimagerytile) API in Azure Maps is comparable to the static map API in Google Maps.
+Azure Maps provides an API for rendering static map images with data overlaid. The [Map image render] API in Azure Maps is comparable to the static map API in Google Maps.
> [!NOTE] > Azure Maps requires the center, marker, and path locations to be coordinates in "longitude,latitude" format, whereas Google Maps uses the "latitude,longitude" format. Addresses need to be geocoded first.
The table cross-references the Google Maps API parameters with the comparable AP
||--| | `center` | `center` | | `format` | `format` – specified as part of URL path. Currently only PNG supported. |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` ΓÇô See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `maptype` | `layer` and `style` – See [Supported map styles](supported-map-styles.md) documentation. | | `markers` | `pins` | | `path` | `path` |
The table cross-references the Google Maps API parameters with the comparable AP
> [!NOTE] > In the Azure Maps tile system, tiles are twice the size of map tiles used in Google Maps. As such, the zoom level value in Azure Maps appears one zoom level closer compared to Google Maps. To compensate for this difference, decrement the zoom level in the requests you're migrating.
-For more information, see the [How-to guide on the map image render API](how-to-render-custom-data.md).
+For more information, see [Render custom data on a raster map].
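For illustration, a minimal static map image request might look like the following sketch; the center coordinate, image dimensions, and `{subscription-key}` placeholder are illustrative:

```text
GET https://atlas.microsoft.com/map/static/png?api-version=1.0&layer=basic&style=main&zoom=12&center=-122.3321,47.6062&width=600&height=400&subscription-key={subscription-key}
```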
In addition to being able to generate a static map image, the Azure Maps render service provides the ability to directly access map tiles in raster (PNG) and vector format: -- [**Map tile**](/rest/api/maps/render/getmaptile): Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background).-- [**Map imagery tile**](/rest/api/maps/render/getmapimagerytile): Retrieve aerial and satellite imagery tiles.
+* **[Map tile]**: Retrieve raster (PNG) and vector tiles for the base maps (roads, boundaries, background). A sample request appears after this list.
+* **[Map imagery tile]**: Retrieve aerial and satellite imagery tiles.
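As referenced in the Map tile item, a minimal tile request might look like the following sketch; the zoom level, tile `x`/`y` coordinates, and `{subscription-key}` placeholder are illustrative:

```text
GET https://atlas.microsoft.com/map/tile/png?api-version=1.0&layer=basic&style=main&zoom=6&x=10&y=22&subscription-key={subscription-key}
```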
> [!TIP]
-> Many Google Maps applications where switched from interactive map experiences to static map images a few years ago. This was done as a cost saving method. In Azure Maps, it is usually more cost effective to use the interactive map control in the Web SDK. The interactive map control charges based the number of tile loads. Map tiles in Azure Maps are large. Often, it takes only a few tiles to recreate the same map view as a static map. Map tiles are cached automatically by the browser. As such, the interactive map control often generates a fraction of a transaction when reproducing a static map view. Panning and zooming will load more tiles; however, there are options in the map control to disable this behavior. The interactive map control also provides a lot more visualization options than the static map services.
+> Many Google Maps applications were switched from interactive map experiences to static map images a few years ago as a cost saving method. In Azure Maps, it's usually more cost effective to use the interactive map control in the Web SDK. The interactive map control charges based on the number of tile loads. Map tiles in Azure Maps are large; often, it takes only a few tiles to recreate the same map view as a static map. Map tiles are cached automatically by the browser. As such, the interactive map control often generates a fraction of a transaction when reproducing a static map view. Panning and zooming will load more tiles; however, there are options in the map control to disable this behavior. The interactive map control also provides a lot more visualization options than the static map services.
### Marker URL parameter format comparison
In Azure Maps, the pin location needs to be in the "longitude latitude" format.
The `iconType` specifies the type of pin to create. It can have the following values: * `default` – The default pin icon.
-* `none` ΓÇô No icon is displayed, only labels will be rendered.
+* `none` – No icon is displayed, only labels are rendered.
* `custom` – Specifies that a custom icon is to be used. A URL pointing to the icon image can be added to the end of the `pins` parameter after the pin location information. * `{udid}` – A Unique Data ID (UDID) for an icon stored in the Azure Maps Data Storage platform.
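For example, the following URL parameter fragment, with an illustrative coordinate, adds a single default pin at longitude -122.3321, latitude 47.6062:

```text
&pins=default||-122.3321 47.6062
```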
Add path styles with the `optionName:value` format, separate multiple styles by
* `geodesic` – Indicates if the path should be a line that follows the curvature of the earth. * `weight` – The thickness of the path line in pixels.
-Add a red line opacity and pixel thickness to the map between the coordinates, in the URL parameter. For the example below, the line has a 50% opacity and a thickness of four pixels. The coordinates are longitude: -110, latitude: 45 and longitude: -100, latitude: 50.
+Use the URL parameter to add a red line with opacity and pixel thickness to the map between the coordinates. In the following example, the line has a 50% opacity and a thickness of four pixels. The coordinates are longitude: -110, latitude: 45 and longitude: -100, latitude: 50.
```text &path=color:0xFF000088|weight:4|45,-110|50,-100
Add lines and polygons to a static map image by specifying the `path` parameter
&path=pathStyles||pathLocation1|pathLocation2|... ```
-When it comes to path locations, Azure Maps requires the coordinates to be in "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format. Azure Maps doesn't support encoded paths or addresses for points. Upload larger data sets as a GeoJSON file into the Azure Maps Data Storage API as documented [here](how-to-render-custom-data.md#upload-pins-and-path-data).
+When it comes to path locations, Azure Maps requires the coordinates to be in "longitude latitude" format. Google Maps uses "latitude,longitude" format. A space, not a comma, separates longitude and latitude in the Azure Maps format. Azure Maps doesn't support encoded paths or addresses for points. For more information on how to upload larger data sets as a GeoJSON file into the Azure Maps Data Storage API, see [Upload pins and path data].
Add path styles with the `optionNameValue` format. Separate multiple styles by pipe (\|) characters, like this `optionName1Value1|optionName2Value2`. The option names and values aren't separated. Use the following style option names to style paths in Azure Maps:
Add path styles with the `optionNameValue` format. Separate multiple styles by p
* `lw` – The width of the line in pixels. * `ra` – Specifies a circle's radius in meters.
-Add a red line opacity and pixel thickness between the coordinates, in the URL parameter. For the example below, the line has 50% opacity and a thickness of four pixels. The coordinates have the following values: longitude: -110, latitude: 45 and longitude: -100, latitude: 50.
+Use the URL parameter to add a red line with opacity and pixel thickness between the coordinates. In the following example, the line has 50% opacity and a thickness of four pixels. The coordinates have the following values: longitude: -110, latitude: 45 and longitude: -100, latitude: 50.
```text &path=lcFF0000|la.5|lw4||-110 45|-100 50
Add a red line opacity and pixel thickness between the coordinates, in the URL p
Azure Maps provides the distance matrix API. Use this API to calculate the travel times and the distances between a set of locations, with a distance matrix. It's comparable to the distance matrix API in Google Maps. -- [**Route matrix**](/rest/api/maps/route/postroutematrixpreview): Asynchronously calculates travel times and distances for a set of origins and destinations. Supports up to 700 cells per request. That's the number of origins multiplied by the number of destinations. With that constraint in mind, examples of possible matrix dimensions are: 700x1, 50x10, 10x10, 28x25, 10x70.
+* **[Route matrix]**: Asynchronously calculates travel times and distances for a set of origins and destinations. Supports up to 700 cells per request. That's the number of origins multiplied by the number of destinations. With that constraint in mind, examples of possible matrix dimensions are: 700x1, 50x10, 10x10, 28x25, 10x70.
> [!NOTE]
-> A request to the distance matrix API can only be made using a POST request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
+> A request to the distance matrix API can only be made using a `POST` request with the origin and destination information in the body of the request. Additionally, Azure Maps requires all origins and destinations to be coordinates. Addresses will need to be geocoded first.
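For illustration, a minimal sketch of such a request follows; the coordinates are illustrative, and the body uses GeoJSON `MultiPoint` geometry with longitude listed first:

```text
POST https://atlas.microsoft.com/route/matrix/json?api-version=1.0&subscription-key={subscription-key}

{
  "origins": {
    "type": "MultiPoint",
    "coordinates": [[-122.3321, 47.6062]]
  },
  "destinations": {
    "type": "MultiPoint",
    "coordinates": [[-122.4443, 47.2529], [-122.1215, 47.6740]]
  }
}
```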
This table cross-references the Google Maps API parameters with the comparable Azure Maps API parameters.
This table cross-references the Google Maps API parameters with the comparable A
| `arrival_time` | `arriveAt` | | `avoid` | `avoid` | | `departure_time` | `departAt` |
-| `destinations` | `destination` ΓÇô specify in the POST request body as GeoJSON. |
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` ΓÇô See [supported languages](supported-languages.md) documentation. |
+| `destinations` | `destination` – specify in the `POST` request body as GeoJSON. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `mode` | `travelMode` |
-| `origins` | `origins` ΓÇô specify in the POST request body as GeoJSON. |
+| `origins` | `origins` – specify in the `POST` request body as GeoJSON. |
| `region` | *N/A* – This feature is geocoding related. Use the `countrySet` parameter when using the Azure Maps geocoding API. | | `traffic_model` | *N/A* – Can only specify if traffic data should be used with the `traffic` parameter. | | `transit_mode` | *N/A* - Transit-based distance matrices aren't currently supported. |
This table cross-references the Google Maps API parameters with the comparable A
> [!TIP] > All the advanced routing options available in the Azure Maps routing API are supported in the Azure Maps distance matrix API. Advanced routing options include: truck routing, engine specifications, and so on.
-Review the [best practices for routing](how-to-use-best-practices-for-routing.md) documentation.
+For more information, see [best practices for routing].
## Get a time zone Azure Maps provides an API for retrieving the time zone of a coordinate. The Azure Maps time zone API is comparable to the time zone API in Google Maps: -- [**Time zone by coordinate**](/rest/api/maps/timezone/gettimezonebycoordinates): Specify a coordinate and receive the time zone details of the coordinate.
+* **[Time zone by coordinate]**: Specify a coordinate and receive the time zone details of the coordinate, as shown in the following sample request.
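For illustration, a minimal time zone request might look like the following sketch; the coordinate and `{subscription-key}` placeholder are illustrative:

```text
GET https://atlas.microsoft.com/timezone/byCoordinates/json?api-version=1.0&query=47.6062,-122.3321&subscription-key={subscription-key}
```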
This table cross-references the Google Maps API parameters with the comparable API parameters in Azure Maps. | Google Maps API parameter | Comparable Azure Maps API parameter | |||
-| `key` | `subscription-key` ΓÇô See also the [Authentication with Azure Maps](azure-maps-authentication.md) documentation. |
-| `language` | `language` ΓÇô See [supported languages](supported-languages.md) documentation. |
+| `key` | `subscription-key` – For more information, see [Authentication with Azure Maps]. |
+| `language` | `language` – For more information, see [Localization support in Azure Maps]. |
| `location` | `query` | | `timestamp` | `timeStamp` | In addition to this API, Azure Maps provides many time zone APIs. These APIs convert the time based on the names or the IDs of the time zone: -- [**Time zone by ID**](/rest/api/maps/timezone/gettimezonebyid): Returns current, historical, and future time zone information for the specified IANA time zone ID.-- [**Time zone Enum IANA**](/rest/api/maps/timezone/gettimezoneenumiana): Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.-- [**Time zone Enum Windows**](/rest/api/maps/timezone/gettimezoneenumwindows): Returns a full list of Windows Time Zone IDs.-- [**Time zone IANA version**](/rest/api/maps/timezone/gettimezoneianaversion): Returns the current IANA version number used by Azure Maps.-- [**Time zone Windows to IANA**](/rest/api/maps/timezone/gettimezonewindowstoiana): Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
+* **[Time zone by ID]**: Returns current, historical, and future time zone information for the specified IANA time zone ID.
+* **[Time zone Enum IANA]**: Returns a full list of IANA time zone IDs. Updates to the IANA service are reflected in the system within one day.
+* **[Time zone Enum Windows]**: Returns a full list of Windows Time Zone IDs.
+* **[Time zone IANA version]**: Returns the current IANA version number used by Azure Maps.
+* **[Time zone Windows to IANA]**: Returns a corresponding IANA ID, given a valid Windows Time Zone ID. Multiple IANA IDs may be returned for a single Windows ID.
## Client libraries Azure Maps provides client libraries for the following programming languages:
-* JavaScript, TypeScript, Node.js ΓÇô [documentation](how-to-use-services-module.md) \| [npm package](https://www.npmjs.com/package/azure-maps-rest)
+* JavaScript, TypeScript, Node.js – [documentation] \| [npm package]
These open-source client libraries are for other programming languages:
-* .NET Standard 2.0 ΓÇô [GitHub project](https://github.com/perfahlen/AzureMapsRestServices) \| [NuGet package](https://www.nuget.org/packages/AzureMapsRestToolkit/)
+* .NET Standard 2.0 – [GitHub project] \| [NuGet package]
## Clean up resources
Learn more about Azure Maps REST
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [free account]: https://azure.microsoft.com/free/ [manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Route]: /rest/api/maps/route
+[Route Matrix]: /rest/api/maps/route/postroutematrixpreview
+[Search]: /rest/api/maps/search
+[Calculate routes and directions]: #calculate-routes-and-directions
+[Reverse geocode a coordinate]: #reverse-geocode-a-coordinate
+[Render]: /rest/api/maps/render/getmapimage
+[Time Zone]: /rest/api/maps/timezone
+[Elevation]: /rest/api/maps/elevation
+[Basic snap to road logic]: https://samples.azuremaps.com/?sample=basic-snap-to-road-logic
+[Spatial operations]: /rest/api/maps/spatial
+[Traffic]: /rest/api/maps/traffic
+[Search for a location using Azure Maps Search services]: how-to-search-for-address.md
+[best practices for search]: how-to-use-best-practices-for-search.md
+
+[Localization support in Azure Maps]: supported-languages.md
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[supported search categories]: supported-search-categories.md
+
+[Free-form address geocoding]: /rest/api/maps/search/getsearchaddress
+[Structured address geocoding]: /rest/api/maps/search/getsearchaddressstructured
+[Batch address geocoding]: /rest/api/maps/search/postsearchaddressbatchpreview
+[Fuzzy search]: /rest/api/maps/search/getsearchfuzzy
+[Fuzzy batch search]: /rest/api/maps/search/postsearchfuzzybatchpreview
+
+[Address reverse geocoder]: /rest/api/maps/search/getsearchaddressreverse
+[Cross street reverse geocoder]: /rest/api/maps/search/getsearchaddressreversecrossstreet
+[Batch address reverse geocoder]: /rest/api/maps/search/postsearchaddressreversebatchpreview
+
+[POI search]: /rest/api/maps/search/getsearchpoi
+[POI category search]: /rest/api/maps/search/getsearchpoicategory
+[Nearby search]: /rest/api/maps/search/getsearchnearby
+[Search within geometry]: /rest/api/maps/search/postsearchinsidegeometry
+[Search along route]: /rest/api/maps/search/postsearchalongroute
+
+[supporting points]: /rest/api/maps/route/postroutedirections#supportingpoints
+[Calculate route]: /rest/api/maps/route/getroutedirections
+[Batch route]: /rest/api/maps/route/postroutedirectionsbatchpreview
+
+[calculating routable ranges]: /rest/api/maps/route/getrouterange
+[best practices for routing]: how-to-use-best-practices-for-routing.md
+[Map image render]: /rest/api/maps/render/getmapimagerytile
+[Render custom data on a raster map]: how-to-render-custom-data.md
+
+[Map tile]: /rest/api/maps/render/getmaptile
+[Map imagery tile]: /rest/api/maps/render/getmapimagerytile
+[Upload pins and path data]: how-to-render-custom-data.md#upload-pins-and-path-data
+[Time zone by coordinate]: /rest/api/maps/timezone/gettimezonebycoordinates
+[Time zone by ID]: /rest/api/maps/timezone/gettimezonebyid
+[Time zone Enum IANA]: /rest/api/maps/timezone/gettimezoneenumiana
+[Time zone Enum Windows]: /rest/api/maps/timezone/gettimezoneenumwindows
+[Time zone IANA version]: /rest/api/maps/timezone/gettimezoneianaversion
+[Time zone Windows to IANA]: /rest/api/maps/timezone/gettimezonewindowstoiana
+
+[documentation]: how-to-use-services-module.md
+[npm package]: https://www.npmjs.com/package/azure-maps-rest
+[GitHub project]: https://github.com/perfahlen/AzureMapsRestServices
+[NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
# Tutorial: Migrate from Google Maps to Azure Maps
-This article provides insights on how to migrate web, mobile and server-based applications from Google Maps to the Microsoft Azure Maps platform. This tutorial includes comparative code samples, migration suggestions, and best practices for migrating to Azure Maps. In this tutorial, you'll learn:
+This article provides insights on how to migrate web, mobile, and server-based applications from Google Maps to the Microsoft Azure Maps platform. This tutorial includes comparative code samples, migration suggestions, and best practices for migrating to Azure Maps. This tutorial demonstrates:
> [!div class="checklist"] > * High-level comparison for equivalent Google Maps features available in Azure Maps.
If you don't have an Azure subscription, create a [free account] before you begi
* A [subscription key] > [!NOTE]
-> For more information on authentication in Azure Maps, see [manage authentication in Azure Maps].
+> For more information on authentication in Azure Maps, see [Manage authentication in Azure Maps].
## Azure Maps platform overview
Azure Maps provides developers from all industries powerful geospatial capabilit
## High-level platform comparison
-The table provides a high-level list of Azure Maps features, which correspond to Google Maps features. This list doesn't show all Azure Maps features. Additional Azure Maps features include: accessibility, geofencing, isochrones, spatial operations, direct map tile access, batch services, and data coverage comparisons (that is, imagery coverage).
+The table provides a high-level list of Azure Maps features, which correspond to Google Maps features. This list doesn't show all Azure Maps features. Other Azure Maps features include: accessibility, geofencing, isochrones, spatial operations, direct map tile access, batch services, and data coverage comparisons (that is, imagery coverage).
| Google Maps feature | Azure Maps support | |--|:--:|
The table provides a high-level list of Azure Maps features, which correspond to
| Maps Embedded API | N/A | | Map URLs | N/A |
-<sup>1</sup> Azure Maps [Elevation services](/rest/api/maps/elevation) have been [deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023). For more information how to include this functionality in your Azure Maps, see [Create elevation data & services](elevation-data-services.md).
+<sup>1</sup> Azure Maps [Elevation services] have been [deprecated]. For more information on how to include this functionality in your Azure Maps applications, see [Create elevation data & services].
Google Maps provides basic key-based authentication. Azure Maps provides both basic key-based authentication and Azure Active Directory authentication. Azure Active Directory authentication provides more security features than basic key-based authentication.
When migrating to Azure Maps from Google Maps, consider the following points abo
* Azure Maps charges for the usage of interactive maps, which is based on the number of loaded map tiles. On the other hand, Google Maps charges for loading the map control. In the interactive Azure Maps SDKs, map tiles are automatically cached to reduce the development cost. One Azure Maps transaction is generated for every 15 map tiles that are loaded. The interactive Azure Maps SDKs use 512-pixel tiles, and on average, they generate one or fewer transactions per page view. * Often, it's more cost effective to replace static map images from Google Maps web services with the Azure Maps Web SDK. The Azure Maps Web SDK uses map tiles. Unless the user pans and zooms the map, the service often generates only a fraction of a transaction per map load. The Azure Maps web SDK has options for disabling panning and zooming, if desired. Additionally, the Azure Maps web SDK provides a lot more visualization options than the static map web service.
-* Azure Maps allows data from its platform to be stored in Azure. Also, data can be cached elsewhere for up to six months as per the [terms of use](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46).
+* Azure Maps allows data from its platform to be stored in Azure. Also, data can be cached elsewhere for up to six months as per the [terms of use].
Here are some related resources for Azure Maps:
-* [Azure Maps pricing page](https://azure.microsoft.com/pricing/details/azure-maps/)
-* [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=azure-maps)
-* [Azure Maps term of use](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46)
- (included in the Microsoft Online Services Terms)
-* [Choose the right pricing tier in Azure Maps](./choose-pricing-tier.md)
+* [Azure Maps pricing page]
+* [Azure pricing calculator]
+* [Choose the right pricing tier in Azure Maps]
+* [Azure Maps term of use] - included in the Microsoft Online Services Terms.
## Suggested migration plan
-The following is a high-level migration plan.
+A high-level migration plan includes the following steps.
1. Take inventory of the Google Maps SDKs and services that your application uses. Verify that Azure Maps provides alternative SDKs and services.
-2. If you don't already have one, create an Azure subscription at [https://azure.com](https://azure.com).
-3. Create an Azure Maps account ([documentation](./how-to-manage-account-keys.md)) and authentication key or Azure Active Directory ([documentation](./how-to-manage-authentication.md)).
+2. If you don't already have one, create an [Azure subscription].
+3. Create an [Azure Maps account] and [subscription key] or [Azure Active Directory authentication].
4. Migrate your application code. 5. Test your migrated application. 6. Deploy your migrated application to production.
The following is a high-level migration plan.
To create an Azure Maps account and get access to the Azure Maps platform, follow these steps:
-1. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. Sign in to the [Azure portal](https://portal.azure.com/).
-3. Create an [Azure Maps account](./how-to-manage-account-keys.md).
-4. [Get your Azure Maps subscription key](./how-to-manage-authentication.md#view-authentication-details) or setup Azure Active Directory authentication for enhanced security.
+1. If you don't have an Azure subscription, create a [free account] before you begin.
+2. Sign in to the [Azure portal].
+3. Create an [Azure Maps account].
+4. Get your Azure Maps [subscription key] or [Azure Active Directory authentication] for enhanced security.
## Azure Maps technical resources Here's a list of useful technical resources for Azure Maps. -- Overview: [https://azure.com/maps](https://azure.com/maps)-- Documentation: [https://aka.ms/AzureMapsDocs](./index.yml)-- Web SDK Code Samples: [https://aka.ms/AzureMapsSamples](https://aka.ms/AzureMapsSamples)-- Developer Forums: [https://aka.ms/AzureMapsForums](/answers/topics/azure-maps.html)-- Videos: [https://aka.ms/AzureMapsVideos](/shows/)-- Blog: [https://aka.ms/AzureMapsBlog](https://aka.ms/AzureMapsBlog)-- Tech Blog: [https://aka.ms/AzureMapsTechBlog](https://aka.ms/AzureMapsTechBlog)-- Azure Maps Feedback (UserVoice): [https://aka.ms/AzureMapsFeedback](/answers/topics/25319/azure-maps.html)-- [Azure Maps Jupyter Notebook](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook)
+* [Azure Maps product page]
+* [Azure Maps product documentation]
+* [Azure Maps Web SDK code samples]
+* [Azure Maps developer forums]
+* [Microsoft learning center shows]
+* [Azure Maps Blog]
+* [Azure Maps Q&A]
## Migration support
-Developers can seek migration support through the [forums](/answers/topics/azure-maps.html) or through one of the many Azure support options: [https://azure.microsoft.com/support/options](https://azure.microsoft.com/support/options)
+Developers can seek migration support through the [Azure Maps developer forums] or through one of the many [Azure support options].
## Clean up resources
Learn the details of how to migrate your Google Maps application with these arti
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [free account]: https://azure.microsoft.com/free/
-[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Azure subscription]: https://azure.com
+[Azure portal]: https://portal.azure.com/
+[Manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
+[terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
+[Azure Maps pricing page]: https://azure.microsoft.com/pricing/details/azure-maps/
+[Azure pricing calculator]: https://azure.microsoft.com/pricing/calculator/?service=azure-maps
+[Azure Maps term of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+
+[Azure Maps product page]: https://azure.com/maps
+[Azure Maps product documentation]: https://aka.ms/AzureMapsDocs
+[Azure Maps Web SDK code samples]: https://aka.ms/AzureMapsSamples
+[Azure Maps developer forums]: https://aka.ms/AzureMapsForums
+[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
+[Azure Maps Blog]: https://aka.ms/AzureMapsBlog
+[Azure Maps Q&A]: https://aka.ms/AzureMapsFeedback
+
+[Azure support options]: https://azure.microsoft.com/support/options
+
+<!---->
+[Elevation services]: /rest/api/maps/elevation
+[deprecated]: https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023
+[Create elevation data & services]: elevation-data-services.md
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (preview)
+### [3.0.0-preview.6] (March 31, 2023)
+
+#### Installation (3.0.0-preview.6)
+
+The preview is available on [npm][3.0.0-preview.6] and CDN.
+
+- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.6][3.0.0-preview.6]
+
+- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file:
+
+ ```html
+ <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.6/atlas.min.css" rel="stylesheet" />
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.6/atlas.min.js"></script>
+ ```
+
+#### New features (3.0.0-preview.6)
+
+- Optimized the internal style transform performance.
+
+#### Bug fixes (3.0.0-preview.6)
+
+- Resolved an issue where the first style set request was unauthenticated for `AAD` authentication.
+
+- Eliminated redundant requests during map initialization and on style changed events.
+ ### [3.0.0-preview.5] (March 15, 2023) #### Installation (3.0.0-preview.5)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2 (latest)
+### [2.2.6]
+
+#### Bug fixes (2.2.6)
+
+- Resolved an issue where the first style set request was unauthenticated for `AAD` authentication.
+
+- Eliminated redundant requests during map initialization and on style changed events.
+ ### [2.2.5] #### New features (2.2.5)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.0-preview.6]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.6
[3.0.0-preview.5]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.5 [3.0.0-preview.4]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.4 [3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.2.6]: https://www.npmjs.com/package/azure-maps-control/v/2.2.6
[2.2.5]: https://www.npmjs.com/package/azure-maps-control/v/2.2.5 [2.2.4]: https://www.npmjs.com/package/azure-maps-control/v/2.2.4 [2.2.3]: https://www.npmjs.com/package/azure-maps-control/v/2.2.3
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
The following table summarizes the Azure Maps services that generate transaction
| [Data v1](/rest/api/maps/data)<br>[Data v2](/rest/api/maps/data-v2)<br>[Data registry](/rest/api/maps/data-registry) | Yes, except for MapDataStorageService.GetDataStatus and MapDataStorageService.GetUserData, which are non-billable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>| | [Elevation (DEM)](/rest/api/maps/elevation)([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023))| Yes| One request = 2 transactions<br> <ul><li>If requesting elevation for a single point then one request = 1 transaction| <ul><li>Location Insights Elevation (Gen2 pricing)</li><li>Standard S1 Elevation Service Transactions (Gen1 S1 pricing)</li></ul>| | [Geolocation](/rest/api/maps/geolocation)| Yes| One request = 1 transaction| <ul><li>Location Insights Geolocation (Gen2 pricing)</li><li>Standard S1 Geolocation Transactions (Gen1 S1 pricing)</li><li>Standard Geolocation Transactions (Gen1 S0 pricing)</li></ul>|
-| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
+| [Render v1](/rest/api/maps/render)<br>[Render v2](/rest/api/maps/render-v2) | Yes, except for Terra maps (MapTile.GetTerraTile and layer=terra) which are non-billable.|<ul><li>15 tiles = 1 transaction, except microsoft.dem ([deprecated](https://azure.microsoft.com/updates/azure-maps-elevation-apis-and-render-v2-dem-tiles-will-be-retired-on-5-may-2023)) is one tile = 50 transactions</li><li>One request for Get Copyright = 1 transaction</li><li>One request for Get Map Attribution = 1 transaction</li><li>One request for Get Static Map = 1 transaction</li><li>One request for Get Map Tileset = 1 transaction</li></ul> <br> For Creator related usage, see the [Creator table](#azure-maps-creator). |<ul><li>Maps Base Map Tiles (Gen2 pricing)</li><li>Maps Imagery Tiles (Gen2 pricing)</li><li>Maps Static Map Images (Gen2 pricing)</li><li>Maps Traffic Tiles (Gen2 pricing)</li><li>Maps Weather Tiles (Gen2 pricing)</li><li>Standard Hybrid Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard Aerial Imagery Transactions (Gen1 S0 pricing)</li><li>Standard S1 Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Hybrid Aerial Imagery Transactions (Gen1 S1 pricing)</li><li>Standard S1 Rendering Transactions (Gen1 S1 pricing)</li><li>Standard S1 Tile Transactions (Gen1 S1 pricing)</li><li>Standard S1 Weather Tile Transactions (Gen1 S1 pricing)</li><li>Standard Tile Transactions (Gen1 S0 pricing)</li><li>Standard Weather Tile Transactions (Gen1 S0 pricing)</li><li>Maps Copyright (Gen2 pricing, Gen1 S0 pricing and Gen1 S1 pricing)</li></ul>|
| [Route](/rest/api/maps/route) | Yes | One request = 1 transaction<br><ul><li>If using the Route Matrix, each cell in the Route Matrix request generates a billable Route transaction.</li><li>If using Batch Directions, each origin/destination coordinate pair in the Batch request call generates a billable Route transaction. Note, the billable Route transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Routing (Gen2 pricing)</li><li>Standard S1 Routing Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | | [Search v1](/rest/api/maps/search)<br>[Search v2](/rest/api/maps/search-v2) | Yes | One request = 1 transaction.<br><ul><li>If using Batch Search, each location in the Batch request generates a billable Search transaction. Note, the billable Search transaction usage results generated by the batch request will have **-Batch** appended to the API name of your Azure portal metrics report.</li></ul> | <ul><li>Location Insights Search</li><li>Standard S1 Search Transactions (Gen1 S1 pricing)</li><li>Standard Services API Transactions (Gen1 S0 pricing)</li></ul> | | [Spatial](/rest/api/maps/spatial) | Yes, except for `Spatial.GetBoundingBox`, `Spatial.PostBoundingBox` and `Spatial.PostPointInPolygonBatch`, which are non-billable.| One request = 1 transaction.<br><ul><li>If using Geofence, five requests = 1 transaction</li></ul> | <ul><li>Location Insights Spatial Calculations (Gen2 pricing)</li><li>Standard S1 Spatial Transactions (Gen1 S1 pricing)</li></ul> |
azure-monitor Sdk Support Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-support-guidance.md
For more information, review the [Azure SDK Lifecycle and Support Policy](https:
> [!NOTE] > Diagnostic tools often provide better insight into the root cause of a problem when the latest stable SDK version is used.
+## SDK update guidance
+ Support engineers are expected to provide SDK update guidance according to the following table, referencing the current SDK version in use and any alternatives. |Current SDK version in use |Alternative version available |Update policy for support |
Support engineers are expected to provide SDK update guidance according to the f
> [!WARNING] > Only commercially reasonable support is provided for Preview versions of the SDK. If a support incident requires escalation to development for further guidance, customers will be asked to use a fully supported SDK version to continue support. Commercially reasonable support does not include an option to engage Microsoft product development resources; technical workarounds may be limited or not possible.
-To see the current version of Application Insights SDKs and previous versions release dates, reference the [release notes](release-notes.md).
+## Release notes
+
+Reference the release notes to see the current version of the Application Insights SDKs and the release dates of previous versions.
+
+- [.NET SDKs (Including ASP.NET, ASP.NET Core, and Logging Adapters)](https://github.com/Microsoft/ApplicationInsights-dotnet/releases)
+- [Python](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/CHANGELOG.md)
+- [Node.js](https://github.com/Microsoft/ApplicationInsights-node.js/releases)
+- [JavaScript](https://github.com/microsoft/ApplicationInsights-JS/releases)
+
+Our [Service Updates](https://azure.microsoft.com/updates/?service=application-insights) also summarize major Application Insights improvements.
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
The new fields are:
LogSource: string, TimeGenerated: datetime ```+
+>[!NOTE]
+> [Export](../logs/logs-data-export.md) to Event Hub and Storage Account isn't supported if the incoming LogMessage isn't valid JSON. For best performance, we recommend emitting container logs in JSON format.
+ ## Enable the ContainerLogV2 schema Customers can enable the ContainerLogV2 schema at the cluster level. To enable the ContainerLogV2 schema, configure the cluster's ConfigMap. Learn more about ConfigMap in [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) and in [Azure Monitor documentation](./container-insights-agent-config.md#configmap-file-settings-overview). Follow the instructions to configure an existing ConfigMap or to use a new one.
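As a sketch, assuming the standard `container-azm-ms-agentconfig` ConfigMap layout used by Container insights, enabling the v2 schema amounts to setting the schema version under the log collection settings:

```yaml
# Illustrative fragment of the container-azm-ms-agentconfig ConfigMap data section
log-data-collection-settings: |-
  [log_collection_settings.schema]
    # Accepted values: v1 (default) or v2
    containerlog_schema_version = "v2"
```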
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
na Previously updated : 03/07/2023 Last updated : 03/31/2023
The following diagram demonstrates how customer-managed keys work with Azure Net
> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week from submitting waitlist request. * Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
-* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volume configured using Basic network features. Follow instructions in to [Set the Network Features option](configure-network-features.md#set-the-network-features-option) in the volume creation page.
-* Switching from user-assigned identity to the system-assigned identity isn't currently supported.
+* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured using Basic network features. Follow instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) to create a volume.
* MSI Automatic certificate renewal isn't currently supported. * The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.** * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message will communicate the date of eligibility.
Before creating your first customer-managed key volume, you must have set up:
* The key vault must have soft delete and purge protection enabled. * The key must be of type RSA. * The key vault must have an [Azure Private Endpoint](../private-link/private-endpoint-overview.md).
+ * You need a private endpoint in each VNet you intend to use for Azure NetApp Files volumes.
* The private endpoint must reside in a different subnet than the one delegated to Azure NetApp Files. The subnet must be in the same VNet as the one delegated to Azure NetApp.
+ * The network security group on the Azure NetApp Files delegated subnet must allow incoming traffic from the subnet where the VM mounting Azure NetApp Files volumes is located.
+ * The network security group on the Azure NetApp Files delegated subnet must also allow outgoing traffic to the subnet where the private endpoint is located.
For more information about Azure Key Vault and Azure Private Endpoint, refer to: * [Quickstart: Create a key vault ](../key-vault/general/quick-create-portal.md)
For more information about Azure Key Vault and Azure Private Endpoint, refer to:
* `Microsoft.KeyVault/vaults/keys/decrypt/action` The user-assigned identity you select is added to your NetApp account. Due to the customizable nature of role-based access control (RBAC), the Azure portal doesn't configure access to the key vault. See [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../key-vault/general/rbac-guide.md) for details on configuring Azure Key Vault.
-1. After selecting **Save** button, you'll receive a notification communicating the status of the operation. If the operation was not successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
+1. After selecting the **Save** button, you'll receive a notification communicating the status of the operation. If the operation was not successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
## Use role-based access control
azure-resource-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/cli-samples.md
- Title: Azure CLI samples
-description: Provides Azure CLI sample scripts to use when working with Azure Managed Applications.
-- Previously updated : 10/25/2017---
-# Azure CLI Samples for Azure Managed Applications
-
-The following table includes links to a sample CLI script for Azure Managed Applications.
-
-| Create managed application | Description |
-| -- | -- |
-| [Define and create a managed application](scripts/managed-application-define-create-cli-sample.md) | Creates a managed application definition in the service catalog and then deploys the managed application from the service catalog. |
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
Last updated 03/21/2023+ # Quickstart: Deploy a service catalog managed application
azure-resource-manager Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/powershell-samples.md
- Title: Azure PowerShell samples
-description: Provides Azure PowerShell sample scripts to use when working with Azure Managed Applications.
--- Previously updated : 10/27/2017--
-# Azure PowerShell samples
-
-The following table includes links to scripts for Azure Managed Applications that use the Azure PowerShell.
-
-| Create managed application | Description |
-| -- | -- |
-| [Create managed application definition](scripts/managed-application-powershell-sample-create-definition.md) | Creates a managed application definition in the service catalog. |
-| [Deploy managed application](scripts/managed-application-poweshell-sample-create-application.md) | Deploys a managed application from the service catalog. |
-|**Update managed resource group**| **Description** |
-| [Get resources in managed resource group and resize VMs](scripts/managed-application-powershell-sample-get-managed-group-resize-vm.md) | Gets resources from the managed resource group, and resizes the VMs. |
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
description: Describes how to create and publish an Azure Managed Application in
-+ Last updated 03/21/2023
azure-resource-manager Publish Service Catalog Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-bring-your-own-storage.md
description: Describes how to bring your own storage to create and publish an Az
-+ Last updated 03/21/2023
azure-resource-manager Managed Application Define Create Cli Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-define-create-cli-sample.md
- Title: Create managed application definition - Azure CLI
-description: Provides an Azure CLI script sample that publishes a managed application definition to a service catalog and then deploys a managed application definition from the service catalog.
-- Previously updated : 03/07/2022----
-# Create a managed application definition to service catalog and deploy managed application from service catalog with Azure CLI
-
-This script publishes a managed application definition to a service catalog and then deploys a managed application definition from the service catalog.
---
-## Sample script
--
-### Run the script
--
-## Clean up resources
--
-```azurecli
-az group delete --name $appResourceGroup -y
-az group delete --name $appDefinitionResourceGroup -y
-```
-
-## Sample reference
-
-This script uses the following command to create the managed application definition. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [az managedapp definition create](/cli/azure/managedapp/definition#az-managedapp-definition-create) | Create a managed application definition. Provide the package that contains the required files. |
-
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-resource-manager Managed Application Powershell Sample Create Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-powershell-sample-create-definition.md
- Title: Create managed application definition - Azure PowerShell
-description: Provides an Azure PowerShell script sample that creates a managed application definition in the Azure subscription.
--- Previously updated : 10/27/2017---
-# Create a managed application definition with PowerShell
-
-This script publishes a managed application definition to a service catalog.
--
-## Sample script
-
-[!code-powershell[main](../../../../powershell_scripts/managed-applications/create-definition/create-definition.ps1 "Create definition")]
--
-## Script explanation
-
-This script uses the following command to create the managed application definition. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzManagedApplicationDefinition](/powershell/module/az.resources/new-azmanagedapplicationdefinition) | Create a managed application definition. Provide the package that contains the required files. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on PowerShell, see [Azure PowerShell documentation](/powershell/azure/get-started-azureps).
azure-resource-manager Managed Application Powershell Sample Get Managed Group Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-powershell-sample-get-managed-group-resize-vm.md
- Title: Get managed resource group & resize VMs - Azure PowerShell
-description: Provides Azure PowerShell sample script that gets a managed resource group for an Azure Managed Application. The script resizes VMs.
--- Previously updated : 10/27/2017---
-# Get resources in a managed resource group and resize VMs with PowerShell
-
-This script retrieves resources from a managed resource group, and resizes the VMs in that resource group.
--
-## Sample script
-
-[!code-powershell[main](../../../../powershell_scripts/managed-applications/get-application/get-application.ps1 "Get application")]
--
-## Script explanation
-
-This script uses the following commands to deploy the managed application. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [Get-AzManagedApplication](/powershell/module/az.resources/get-azmanagedapplication) | List managed applications. Provide resource group name to focus the results. |
-| [Get-AzResource](/powershell/module/az.resources/get-azresource) | List resources. Provide a resource group and resource type to focus the result. |
-| [Update-AzVM](/powershell/module/az.compute/update-azvm) | Update a virtual machine's size. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on PowerShell, see [Azure PowerShell documentation](/powershell/azure/get-started-azureps).
azure-resource-manager Managed Application Poweshell Sample Create Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-poweshell-sample-create-application.md
- Title: Azure PowerShell script sample - Deploy a managed application
-description: Provides Azure PowerShell sample script sample that deploys a managed application definition to the subscription.
--- Previously updated : 10/27/2017---
-# Deploy a managed application for a service catalog with PowerShell
-
-This script deploys a managed application definition from the service catalog.
---
-## Sample script
-
-[!code-powershell[main](../../../../powershell_scripts/managed-applications/create-application/create-application.ps1 "Create application")]
--
-## Script explanation
-
-This script uses the following command to deploy the managed application. Each command in the table links to command-specific documentation.
-
-| Command | Notes |
-|||
-| [New-AzManagedApplication](/powershell/module/az.resources/new-azmanagedapplication) | Create a managed application. Provide the definition ID and parameters for the template. |
--
-## Next steps
-
-* For an introduction to managed applications, see [Azure Managed Application overview](../overview.md).
-* For more information on PowerShell, see [Azure PowerShell documentation](/powershell/azure/get-started-azureps).
azure-resource-manager Manage Resource Groups Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-cli.md
Title: Manage resource groups - Azure CLI
description: Use Azure CLI to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. Previously updated : 09/10/2021- Last updated : 03/31/2023
Learn how to use Azure CLI with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using Azure CLI](manage-resources-cli.md).
+## Prerequisites
+
+* Azure CLI. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+* After installing, sign in for the first time, as shown in the sketch below. For more information, see [How to sign into the Azure CLI](/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli).
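A minimal sign-in sketch; the subscription ID is a placeholder, and your account may have only one subscription:

```azurecli-interactive
# Sign in interactively, then select the subscription to work in.
az login
az account set --subscription "<subscription_id>"
```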
+
## What is a resource group

A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to add resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
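For example, a minimal sketch that creates a resource group; the name and location are placeholders:

```azurecli-interactive
# Create a resource group that will hold related resources.
az group create --name exampleGroup --location westus
```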
For more information about how Azure Resource Manager orders the deletion of res
You can deploy Azure resources by using Azure CLI, or by deploying an Azure Resource Manager (ARM) template or Bicep file.
+### Deploy resources by using storage operations
+
The following example creates a storage account. The name you provide for the storage account must be unique across Azure.

```azurecli-interactive
az storage account create --resource-group exampleGroup --name examplestore --location westus --sku Standard_LRS --kind StorageV2
```
+### Deploy resources by using an ARM template or Bicep file
+
To deploy an ARM template or Bicep file, use [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create).

```azurecli-interactive
az deployment group create --resource-group exampleGroup --template-file storage.bicep
```
+The following example shows the Bicep file named `storage.bicep` that you're deploying:
+
+```bicep
+@minLength(3)
+@maxLength(11)
+param storagePrefix string
+
+var uniqueStorageName = concat(storagePrefix, uniqueString(resourceGroup().id))
+
+resource uniqueStorage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: uniqueStorageName
+ location: 'eastus'
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ supportsHttpsTrafficOnly: true
+ }
+}
+```
+ For more information about deploying an ARM template, see [Deploy resources with Resource Manager templates and Azure CLI](../templates/deploy-cli.md). For more information about deploying a Bicep file, see [Deploy resources with Bicep and Azure CLI](../bicep/deploy-cli.md).
azure-resource-manager Manage Resource Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-powershell.md
Title: Manage resource groups - Azure PowerShell
description: Use Azure PowerShell to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups. Previously updated : 09/10/2021- Last updated : 03/31/2023
Learn how to use Azure PowerShell with [Azure Resource Manager](overview.md) to manage your Azure resource groups. For managing Azure resources, see [Manage Azure resources by using Azure PowerShell](manage-resources-powershell.md).
+## Prerequisites
+
+* Azure PowerShell. For more information, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+
+* After installing, sign in for the first time, as shown in the sketch below. For more information, see [Sign in](/powershell/azure/install-az-ps#sign-in).
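A minimal sign-in sketch; the subscription ID is a placeholder:

```azurepowershell-interactive
# Sign in interactively, then select the subscription to work in.
Connect-AzAccount
Set-AzContext -Subscription "<subscription_id>"
```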
+
## What is a resource group

A resource group is a container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to add resources to resource groups based on what makes the most sense for your organization. Generally, add resources that share the same lifecycle to the same resource group so you can easily deploy, update, and delete them as a group.
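For example, a minimal sketch that creates a resource group; the name and location are placeholders:

```azurepowershell-interactive
# Create a resource group that will hold related resources.
New-AzResourceGroup -Name exampleGroup -Location westus
```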
For more information about how Azure Resource Manager orders the deletion of res
You can deploy Azure resources by using Azure PowerShell, or by deploying an Azure Resource Manager (ARM) template or Bicep file.
+### Deploy resources by using storage operations
+
The following example creates a storage account. The name you provide for the storage account must be unique across Azure.

```azurepowershell-interactive
New-AzStorageAccount -ResourceGroupName exampleGroup -Name examplestore -Location westus -SkuName "Standard_LRS"
```
+### Deploy resources by using an ARM template or Bicep file
+
To deploy an ARM template or Bicep file, use [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment).

```azurepowershell-interactive
New-AzResourceGroupDeployment -ResourceGroupName exampleGroup -TemplateFile storage.bicep
```
+The following example shows the Bicep file named `storage.bicep` that you're deploying:
+
+```bicep
+@minLength(3)
+@maxLength(11)
+param storagePrefix string
+
+var uniqueStorageName = concat(storagePrefix, uniqueString(resourceGroup().id))
+
+resource uniqueStorage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+ name: uniqueStorageName
+ location: 'eastus'
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ supportsHttpsTrafficOnly: true
+ }
+}
+```
+ For more information about deploying an ARM template, see [Deploy resources with ARM templates and Azure PowerShell](../templates/deploy-powershell.md). For more information about deploying a Bicep file, see [Deploy resources with Bicep and Azure PowerShell](../bicep/deploy-powershell.md).
To get the locks for a resource group, use [Get-AzResourceLock](/powershell/modu
```azurepowershell-interactive
Get-AzResourceLock -ResourceGroupName exampleGroup
```
+To delete a lock, use [Remove-AzResourceLock](/powershell/module/az.resources/remove-azresourcelock).
+
+```azurepowershell-interactive
+$lockId = (Get-AzResourceLock -ResourceGroupName exampleGroup).LockId
+Remove-AzResourceLock -LockId $lockId
+```
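For completeness, a hedged sketch of creating a lock in the first place, using the `New-AzResourceLock` cmdlet; the lock name is a placeholder:

```azurepowershell-interactive
# Add a delete lock so resources in the group can't be removed accidentally.
New-AzResourceLock -LockName exampleLock -LockLevel CanNotDelete -ResourceGroupName exampleGroup
```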
+
For more information, see [Lock resources with Azure Resource Manager](lock-resources.md).

## Tag resource groups
azure-signalr Concept Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md
description: An overview of connection string in Azure SignalR Service, how to g
Previously updated : 03/25/2022 Last updated : 03/29/2023

# Connection string in Azure SignalR Service
-Connection string is an important concept that contains information about how to connect to SignalR service. In this article, you'll learn the basics of connection string and how to configure it in your application.
+A connection string contains information about how to connect to Azure SignalR Service (ASRS). In this article, you learn the basics of connection strings and how to configure them in your application.
-## What is connection string
+## What is a connection string
-When an application needs to connect to Azure SignalR Service, it will need the following information:
+When an application needs to connect to Azure SignalR Service, it needs the following information:
-- The HTTP endpoint of the SignalR service instance-- How to authenticate with the service endpoint
+- The HTTP endpoint of the SignalR service instance.
+- The way to authenticate with the service endpoint.
-Connection string contains such information.
+A connection string contains such information.
-## What connection string looks like
+## What a connection string looks like
-A connection string consists of a series of key/value pairs separated by semicolons(;) and we use an equal sign(=) to connect each key and its value. Keys aren't case sensitive.
+A connection string consists of a series of key/value pairs separated by semicolons (;). An equal sign (=) connects each key and its value. Keys aren't case sensitive.
For example, a typical connection string may look like this:
-```
-Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
-```
+> Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
-You can see in the connection string, there are two main information:
+The connection string contains:
-- `Endpoint=https://<resource_name>.service.signalr.net` is the endpoint URL of the resource-- `AccessKey=<access_key>` is the key to authenticate with the service. When access key is specified in connection string, SignalR service SDK will use it to generate a token that can be validated by the service.
+- `Endpoint=https://<resource_name>.service.signalr.net`: The endpoint URL of the resource.
+- `AccessKey=<access_key>`: The key to authenticate with the service. When an access key is specified in the connection string, the SignalR Service SDK uses it to generate a token that is validated by the service.
+- `Version`: The version of the connection string. The default value is `1.0`.
The following table lists all the valid names for key/value pairs in the connection string.
-| key | Description | Required | Default value | Example value |
-| -- | -- | -- | -- | |
-| Endpoint | The URI of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` |
-| Port | The port that your ASRS instance is listening on. | N | 80/443, depends on endpoint uri schema | 8080 |
-| Version | The version of given connection string. | N | 1.0 | 1.0 |
-| ClientEndpoint | The URI of your reverse proxy, like App Gateway or API Management | N | null | `https://foo.bar` |
-| AuthType | The auth type, we'll use AccessKey to authorize requests by default. **Case insensitive** | N | null | azure, azure.msi, azure.app |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| Endpoint | The URL of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` |
+| Port | The port that your ASRS instance is listening on. | N | 80/443, depends on the endpoint URI schema | 8080 |
+| Version | The version of the given connection string. | N | 1.0 | 1.0 |
+| ClientEndpoint | The URI of your reverse proxy, such as Application Gateway or API Management. | N | null | `https://foo.bar` |
+| AuthType | The auth type. By default, the service uses AccessKey to authorize requests. **Case insensitive** | N | null | azure, azure.msi, azure.app |
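Combining several of these keys, a hypothetical connection string for an instance behind a reverse proxy might look like the following; every value is a placeholder:

```text
Endpoint=https://foo.service.signalr.net;Port=8080;AccessKey=<access_key>;ClientEndpoint=https://foo.bar;Version=1.0;
```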
### Use AccessKey
-Local auth method will be used when `AuthType` is set to null.
+The local auth method is used when `AuthType` is set to null.
-| key | Description | Required | Default value | Example value |
-| | - | -- | - | - |
-| AccessKey | The key string in base64 format for building access token usage. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
+| Key | Description | Required | Default value | Example value |
+| --- | --- | --- | --- | --- |
+| AccessKey | The key string in base64 format for building an access token. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ |
### Use Azure Active Directory
-Azure AD auth method will be used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`.
+The Azure AD auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`.
-| key | Description | Required | Default value | Example value |
+| Key | Description | Required | Default value | Example value |
| --- | --- | --- | --- | --- |
-| ClientId | A guid represents an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` |
-| TenantId | A guid represents an organization in Azure Active Directory. | N | null | `00000000-0000-0000-0000-000000000000` |
-| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` |
-| ClientCertPath | The absolute path of a cert file to an Azure application instance. | N | null | `/usr/local/cert/app.cert` |
+| ClientId | A GUID of an Azure application or an Azure identity. | N| null| `00000000-0000-0000-0000-000000000000` |
+| TenantId | A GUID of an organization in Azure Active Directory. | N| null| `00000000-0000-0000-0000-000000000000` |
+| ClientSecret | The password of an Azure application instance. | N| null| `***********************.****************` |
+| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N| null| `/usr/local/cert/app.cert` |
-Different `TokenCredential` will be used to generate Azure AD tokens with the respect of params you have given.
+A different `TokenCredential` is used to generate Azure AD tokens depending on the parameters you provide.
- `type=azure`
- [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) will be used.
+ [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) is used.
- ```
+ ```text
  Endpoint=xxx;AuthType=azure
  ```

- `type=azure.msi`
- 1. User-assigned managed identity will be used if `clientId` has been given in connection string.
+ 1. A user-assigned managed identity is used if `clientId` has been given in connection string.
```
- Endpoint=xxx;AuthType=azure.msi;ClientId=00000000-0000-0000-0000-000000000000
+ Endpoint=xxx;AuthType=azure.msi;ClientId=<client_id>
```
- - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) will be used.
+ - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) is used.
- 2. Otherwise system-assigned managed identity will be used.
+ 1. Otherwise, a system-assigned managed identity is used.
- ```
+ ```text
   Endpoint=xxx;AuthType=azure.msi;
   ```
- - [ManagedIdentityCredential()](/dotnet/api/azure.identity.managedidentitycredential) will be used.
-
+ - [ManagedIdentityCredential()](/dotnet/api/azure.identity.managedidentitycredential) is used.
- `type=azure.app`

  `clientId` and `tenantId` are required to use [Azure AD application with service principal](../active-directory/develop/howto-create-service-principal-portal.md).
- 1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) will be used if `clientSecret` is given.
- ```
- Endpoint=xxx;AuthType=azure.msi;ClientId=00000000-0000-0000-0000-000000000000;TenantId=00000000-0000-0000-0000-000000000000;clientScret=******
- ```
+ 1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) is used if `clientSecret` is given.
- 2. [ClientCertificateCredential(clientId, tenantId, clientCertPath)](/dotnet/api/azure.identity.clientcertificatecredential) will be used if `clientCertPath` is given.
+ ```text
+ Endpoint=xxx;AuthType=azure.app;ClientId=<client_id>;TenantId=<tenant_id>;ClientSecret=<client_secret>
```
- Endpoint=xxx;AuthType=azure.msi;ClientId=00000000-0000-0000-0000-000000000000;TenantId=00000000-0000-0000-0000-000000000000;clientCertPath=/path/to/cert
+
+ 1. [ClientCertificateCredential(clientId, tenantId, clientCertPath)](/dotnet/api/azure.identity.clientcertificatecredential) is used if `clientCertPath` is given.
+
+ ```text
+ Endpoint=xxx;AuthType=azure.app;ClientId=<client_id>;TenantId=<tenant_id>;ClientCertPath=</path/to/cert>
```
-## How to get my connection strings
+## How to get connection strings
### From Azure portal

Open your SignalR service resource in Azure portal and go to `Keys` tab.
-You'll see two connection strings (primary and secondary) in the following format:
+You see two connection strings (primary and secondary) in the following format:
> Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;
You can also use Azure CLI to get the connection string:
```azurecli
az signalr key list -g <resource_group> -n <resource_name>
```
-### For using Azure AD application
+## Connect with an Azure AD application
-You can use [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
+You can use an [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed.
-To use Azure AD authentication, you need to remove `AccessKey` from connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including client ID, client secret and tenant ID. The connection string will look as follows:
+To use Azure AD authentication, you need to remove `AccessKey` from the connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including client ID, client secret, and tenant ID. The connection string looks as follows:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.app;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0;
```
-For more information about how to authenticate using Azure AD application, see this [article](signalr-howto-authorize-application.md).
+For more information about how to authenticate using Azure AD application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md).
-### For using Managed identity
+## Authenticate with Managed identity
-You can also use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
+You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service.
-There are two types of managed identities, to use system assigned identity, you just need to add `AuthType=azure.msi` to the connection string:
+To use a system assigned identity, add `AuthType=azure.msi` to the connection string:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;Version=1.0;
```
-SignalR service SDK will automatically use the identity of your app server.
+The SignalR service SDK automatically uses the identity of your app server.
-To use user assigned identity, you also need to specify the client ID of the managed identity:
+To use a user assigned identity, include the client ID of the managed identity in the connection string:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.msi;ClientId=<client_id>;Version=1.0;
```
-For more information about how to configure managed identity, see this [article](signalr-howto-authorize-managed-identity.md).
+For more information about how to configure managed identity, see [Authorize from Managed Identity](signalr-howto-authorize-managed-identity.md).
> [!NOTE]
-> It's highly recommended to use Azure AD to authenticate with SignalR service as it's a more secure way comparing to using access key. If you don't use access key authentication at all, consider to completely disable it (go to Azure portal -> Keys -> Access Key -> Disable). If you still use access key, it's highly recommended to rotate them regularly (more information can be found [here](signalr-howto-key-rotation.md)).
-
-### Use connection string generator
+> It's highly recommended to use managed identity to authenticate with SignalR service, as it's more secure than using access keys. If you don't use access key authentication, consider disabling it completely (go to Azure portal -> Keys -> Access Key -> Disable). If you still use access keys, it's highly recommended to rotate them regularly. For more information, see [Rotate access keys for Azure SignalR Service](signalr-howto-key-rotation.md).
-It may be cumbersome and error-prone to build connection strings manually.
+### Use the connection string generator
-To avoid making mistakes, we built a tool to help you generate connection string with Azure AD identities like `clientId`, `tenantId`, etc.
-
-To use connection string generator, open your SignalR resource in Azure portal, go to `Connection strings` tab:
+It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Azure AD identities like `clientId`, `tenantId`, etc. To use the tool, open your SignalR instance in the Azure portal and select **Connection strings** from the left side menu.
:::image type="content" source="media/concept-connection-string/generator.png" alt-text="Screenshot showing connection string generator of SignalR service in Azure portal.":::
-In this page you can choose different authentication types (access key, managed identity or Azure AD application) and input information like client endpoint, client ID, client secret, etc. Then connection string will be automatically generated. You can copy and use it in your application.
+On this page, you can choose different authentication types (access key, managed identity, or Azure AD application) and enter information like client endpoint, client ID, client secret, etc. The connection string is then automatically generated. You can copy and use it in your application.
> [!NOTE]
-> Everything you input on this page won't be saved after you leave the page (since they're only client side information), so please copy and save it in a secure place for your application to use.
+> Information you enter won't be saved after you leave the page. You will need to copy and save your connection string to use in your application.
-> [!NOTE]
-> For more information about how access tokens are generated and validated, see this [article](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md#authenticate-via-azure-signalr-service-accesskey).
+For more information about how access tokens are generated and validated, see [Authenticate via Azure Active Directory Token](signalr-reference-data-plane-rest-api.md#authenticate-via-azure-active-directory-token-azure-ad-token) in the [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md).
## Client and server endpoints
-Connection string contains the HTTP endpoint for app server to connect to SignalR service. This is also the endpoint server will return to clients in negotiate response, so client can also connect to the service.
+A connection string contains the HTTP endpoint for the app server to connect to SignalR service. The server returns the HTTP endpoint to the clients in a negotiate response, so the client can connect to the service.
-But in some applications there may be an extra component in front of SignalR service and all client connections need to go through that component first (to gain extra benefits like network security, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides such functionality).
+In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security.
-In such case, the client will need to connect to an endpoint different than SignalR service. Instead of manually replace the endpoint at client side, you can add `ClientEndpoint` to connecting string:
+In such cases, the client needs to connect to an endpoint other than the SignalR service. Instead of manually replacing the endpoint on the client side, you can add `ClientEndpoint` to the connection string:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ClientEndpoint=https://<url_to_app_gateway>;Version=1.0;
```
-Then app server will return the right endpoint url in negotiate response for client to connect.
-
-> [!NOTE]
-> For more information about how clients get service url through negotiate, see this [article](signalr-concept-internals.md#client-connections).
+The app server returns a response to the client's negotiate request containing the correct endpoint URL for the client to connect to. For more information about client connections, see [Azure SignalR Service internals](signalr-concept-internals.md#client-connections).
-Similarly, when server wants to make [server connections](signalr-concept-internals.md#server-connections) or call [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to service, SignalR service may also be behind another service like Application Gateway. In that case, you can use `ServerEndpoint` to specify the actual endpoint for server connections and REST APIs:
+Similarly, when the server wants to make [server connections](signalr-concept-internals.md#azure-signalr-service-internals) or call [REST APIs](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md) to the service, the SignalR service may also be behind another service like [Azure Application Gateway](../application-gateway/overview.md). In that case, you can use `ServerEndpoint` to specify the actual endpoint for server connections and REST APIs:
-```
+```text
Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;ServerEndpoint=https://<url_to_app_gateway>;Version=1.0;
```

## Configure connection string in your application
-There are two ways to configure connection string in your application.
+There are two ways to configure a connection string in your application.
You can set the connection string when calling `AddAzureSignalR()` API:
You can set the connection string when calling `AddAzureSignalR()` API:
```cs
services.AddSignalR().AddAzureSignalR("<connection_string>");
```
-Or you can call `AddAzureSignalR()` without any arguments, then service SDK will read the connection string from a config named `Azure:SignalR:ConnectionString` in your [config providers](/dotnet/core/extensions/configuration-providers).
+Or you can call `AddAzureSignalR()` without any arguments. The service SDK then reads the connection string from a config named `Azure:SignalR:ConnectionString` in your [configuration provider](/dotnet/core/extensions/configuration-providers).
-In a local development environment, the config is stored in file (appsettings.json or secrets.json) or environment variables, so you can use one of the following ways to configure connection string:
+In a local development environment, the configuration is stored in a file (*appsettings.json* or *secrets.json*) or environment variables. You can use one of the following ways to configure the connection string:
- Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)
-- Set connection string to environment variable named `Azure__SignalR__ConnectionString` (colon needs to replaced with double underscore in [environment variable config provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider)).
+- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string, as shown in the sketch below. The colons need to be replaced with double underscores in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).
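For example, a sketch of the environment-variable approach on Linux or macOS; the value is a placeholder:

```bash
# Colons in the config name become double underscores in the variable name.
export Azure__SignalR__ConnectionString="Endpoint=https://<resource_name>.service.signalr.net;AccessKey=<access_key>;Version=1.0;"
```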
-In production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up config provider for those services.
+In a production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up configuration provider for those services.
> [!NOTE]
-> Even you're directly setting connection string using code, it's not recommended to hardcode the connection string in source code, so you should still first read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`.
+> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code. You should read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`.
### Configure multiple connection strings
-Azure SignalR Service also allows server to connect to multiple service endpoints at the same time, so it can handle more connections, which are beyond one service instance's limit. Also if one service instance is down, other service instances can be used as backup. For more information about how to use multiple instances, see this [article](signalr-howto-scale-multi-instances.md).
+Azure SignalR Service also allows the server to connect to multiple service endpoints at the same time, so it can handle more connections than a single service instance's limit. Also, when one service instance is down, the other service instances can be used as backup. For more information about how to use multiple instances, see [Scale SignalR Service with multiple instances](signalr-howto-scale-multi-instances.md).
There are also two ways to configure multiple instances: -- Through code
+- Through code:
```cs services.AddSignalR().AddAzureSignalR(options =>
There are also two ways to configure multiple instances:
You can assign a name and type to each service endpoint so you can distinguish them later. -- Through config
+- Through configuration:
- You can use any supported config provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
+ You can use any supported configuration provider (secret manager, environment variables, key vault, etc.) to store connection strings. Take secret manager as an example:
  ```bash
  dotnet user-secrets set Azure:SignalR:ConnectionString:name_a <connection_string_1>
  dotnet user-secrets set Azure:SignalR:ConnectionString:name_c:secondary <connection_string_3>
  ```
- You can also assign name and type to each endpoint, by using a different config name in the following format:
+ You can assign a name and type to each endpoint by using a different config name in the following format:
- ```
+ ```text
  Azure:SignalR:ConnectionString:<name>:<type>
  ```
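The truncated code sample above can be fleshed out roughly as follows; this is a sketch assuming the `ServiceEndpoint` type and `Endpoints` option from the Microsoft.Azure.SignalR SDK, with placeholder connection strings:

```cs
services.AddSignalR().AddAzureSignalR(options =>
{
    options.Endpoints = new ServiceEndpoint[]
    {
        // Assign a name (and optionally a type) to distinguish endpoints later.
        new ServiceEndpoint("<connection_string_1>", name: "name_a"),
        new ServiceEndpoint("<connection_string_3>", type: EndpointType.Secondary, name: "name_c"),
    };
});
```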
azure-signalr Howto Network Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-network-access-control.md
Previously updated : 05/06/2020 Last updated : 03/29/2023

# Configure network access control
-Azure SignalR Service enables you to secure and control the level of access to your service endpoint, based on the request type and subset of networks used. When network rules are configured, only applications requesting data over the specified set of networks can access your Azure SignalR Service.
+Azure SignalR Service enables you to secure and control the level of access to your service endpoint based on the request type and subset of networks. When network rules are configured, only applications requesting data over the specified set of networks can access your SignalR Service.
-Azure SignalR Service has a public endpoint that is accessible through the internet. You can also create [Private Endpoints for your Azure SignalR Service](howto-private-endpoints.md). Private Endpoint assigns a private IP address from your VNet to the Azure SignalR Service, and secures all traffic between your VNet and the Azure SignalR Service over a private link. The Azure SignalR Service network access control provides access control for both public endpoint and private endpoints.
+SignalR Service has a public endpoint that is accessible through the internet. You can also create [private endpoints for your Azure SignalR Service](howto-private-endpoints.md). A private endpoint assigns a private IP address from your VNet to the SignalR Service, and secures all traffic between your VNet and the SignalR Service over a private link. The SignalR Service network access control provides access control for both public and private endpoints.
-Optionally, you can choose to allow or deny certain types of requests for public endpoint and each private endpoint. For example, you can block all [Server Connections](signalr-concept-internals.md#server-connections) from public endpoint and make sure they only originate from a specific VNet.
+Optionally, you can choose to allow or deny certain types of requests for the public endpoint and each private endpoint. For example, you can block all [Server Connections](signalr-concept-internals.md#application-server-connections) from the public endpoint and make sure they only originate from a specific VNet.
-An application that accesses an Azure SignalR Service when network access control rules are in effect still requires proper authorization for the request.
+An application that accesses a SignalR Service when network access control rules are in effect still requires proper authorization for the request.
## Scenario A - No public traffic
-To completely deny all public traffic, you should first configure the public network rule to allow no request type. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications.
+To completely deny all public traffic, first configure the public network rule to allow no request type. Then, you can configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications.
## Scenario B - Only client connections from public network
-In this scenario, you can configure the public network rule to only allow [Client Connections](signalr-concept-internals.md#client-connections) from public network. You can then configure private network rules to allow other types of requests originating from a specific VNet. This configuration hides your app servers from public network and establishes secure connections between your app servers and Azure SignalR Service.
+In this scenario, you can configure the public network rule to only allow [Client Connections](signalr-concept-internals.md#client-connections) from the public network. You can then configure private network rules to allow other types of requests originating from a specific VNet. This configuration hides your app servers from the public network and establishes secure connections between your app servers and SignalR Service.
## Managing network access control
-You can manage network access control for Azure SignalR Service through the Azure portal.
+You can manage network access control for SignalR Service through the Azure portal.
-### Azure portal
-
-1. Go to the Azure SignalR Service you want to secure.
-
-1. Click on the settings menu called **Network access control**.
+1. Go to the SignalR Service instance you want to secure.
+1. Select **Network access control** from the left side menu.
   ![Network ACL on portal](media/howto-network-access-control/portal.png)

1. To edit default action, toggle the **Allow/Deny** button.

   > [!TIP]
- > Default action is the action we take when there is no ACL rule matches. For example, if the default action is **Deny**, then request types that are not explicitly approved below will be denied.
+ > The default action is the action the service takes when no access control rule matches a request. For example, if the default action is **Deny**, then the request types that are not explicitly approved will be denied.
1. To edit public network rule, select allowed types of requests under **Public network**.
You can manage network access control for Azure SignalR Service through the Azur
![Edit private endpoint ACL on portal ](media/howto-network-access-control/portal-private-endpoint.png)
-1. Click **Save** to apply your changes.
+1. Select **Save** to apply your changes.
## Next steps
azure-signalr Signalr Concept Internals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-internals.md
ms.devlang: csharp Previously updated : 11/13/2019 Last updated : 03/29/2023

# Azure SignalR Service internals

Azure SignalR Service is built on top of ASP.NET Core SignalR framework. It also supports ASP.NET SignalR by reimplementing ASP.NET SignalR's data protocol on top of the ASP.NET Core framework.
-You can easily migrate a local ASP.NET Core SignalR application or ASP.NET SignalR application to work with SignalR Service, with a few lines of code change.
+You can easily migrate a local ASP.NET Core SignalR or an ASP.NET SignalR application to work with SignalR Service by changing a few lines of code.
-The diagram below describes the typical architecture when you use the SignalR Service with your application server.
+The following diagram describes the typical architecture when you use the SignalR Service with your application server.
The differences from self-hosted ASP.NET Core SignalR application are discussed as well.

![Architecture](./media/signalr-concept-internals/arch.png)
-## Server connections
+## Application server connections
-Self-hosted ASP.NET Core SignalR application server listens to and connects clients directly.
+A self-hosted ASP.NET Core SignalR application server listens to and connects clients directly.
-With SignalR Service, the application server is no longer accepting persistent client connections, instead:
+With SignalR Service, the application server no longer accepts persistent client connections, instead:
1. A `negotiate` endpoint is exposed by Azure SignalR Service SDK for each hub.
-1. This endpoint will respond to client's negotiation requests and redirect clients to SignalR Service.
-1. Eventually, clients will be connected to SignalR Service.
+1. The endpoint responds to client negotiation requests and redirects clients to SignalR Service.
+1. The clients connect to SignalR Service.
For more information, see [Client connections](#client-connections).
-Once the application server is started,
-- For ASP.NET Core SignalR, Azure SignalR Service SDK opens 5 WebSocket connections per hub to SignalR Service. -- For ASP.NET SignalR, Azure SignalR Service SDK opens 5 WebSocket connections per hub to SignalR Service, and one per application WebSocket connection.
+Once the application server is started:
-5 WebSocket connections is the default value that can be changed in [configuration](https://github.com/Azure/azure-signalr/blob/dev/docs/run-asp-net-core.md#connectioncount). Please note that this configures the initial server connection count the SDK starts. While the app server is connected to the SignalR service, the Azure SignalR service might send load-balancing messages to the server and the SDK will start new server connections to the service for better performance.
+- For ASP.NET Core SignalR: Azure SignalR Service SDK opens five WebSocket connections per hub to SignalR Service.
+- For ASP.NET SignalR: Azure SignalR Service SDK opens five WebSocket connections per hub to SignalR Service, and one per application WebSocket connection.
-Messages to and from clients will be multiplexed into these connections.
-These connections will remain connected to the SignalR Service all the time. If a server connection is disconnected for network issue,
-- all clients that are served by this server connection disconnect (for more information about it, see [Data transmit between client and server](#data-transmit-between-client-and-server));-- the server connection starts reconnecting automatically.
+The initial number of connections defaults to 5 and is configurable using the `InitialHubServerConnectionCount` option in the SignalR Service SDK. For more information, see [configuration](https://github.com/Azure/azure-signalr/blob/dev/docs/run-asp-net-core.md#maxhubserverconnectioncount).
+
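For illustration, a sketch of raising that initial count; this assumes the `InitialHubServerConnectionCount` property on the SDK's options object:

```cs
services.AddSignalR().AddAzureSignalR(options =>
{
    // Assumption: InitialHubServerConnectionCount controls the per-hub
    // server connections the SDK opens at startup (default 5).
    options.InitialHubServerConnectionCount = 10;
});
```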
+While the application server is connected to the SignalR service, the Azure SignalR service may send load-balancing messages to the server. Then, the SDK starts new server connections to the service for better performance. Messages to and from clients are multiplexed into these connections.
+
+Server connections are persistently connected to the SignalR Service. If a server connection is disconnected due to a network issue:
+
+- All clients served by this server connection disconnect. For more information, see [Data transmission between client and server](#data-transmission-between-client-and-server).
+- The server connection reconnects automatically.
## Client connections
-When you use the SignalR Service, clients connect to SignalR Service instead of application server.
-There are two steps to establish persistent connections between the client and the SignalR Service.
+When you use the SignalR Service, clients connect to the service instead of the application server.
+There are three steps to establish persistent connections between the client and the SignalR Service.
-1. Client sends a negotiate request to the application server. With Azure SignalR Service SDK, application server returns a redirect response with SignalR Service's URL and access token.
+1. A client sends a negotiate request to the application server.
+1. The application server uses Azure SignalR Service SDK to return a redirect response containing the SignalR Service URL and access token.
- For ASP.NET Core SignalR, a typical redirect response looks like:
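    A sketch of what the elided response body typically contains; the field names are assumptions based on the SignalR client protocol, not verbatim from this article:

    ```json
    {
        "url": "https://<resource_name>.service.signalr.net/client/?hub=<hub_name>",
        "accessToken": "<access_token>"
    }
    ```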
-1. After receiving the redirect response, client uses the new URL and access token to start the normal process to connect to SignalR Service.
+1. After the client receives the redirect response, it uses the URL and access token to connect to SignalR Service.
+
+To learn more about ASP.NET Core SignalR's transport protocols, see [Transport Protocols](https://github.com/aspnet/SignalR/blob/release/2.2/specs/TransportProtocols.md).
-Learn more about ASP.NET Core SignalR's [transport protocols](https://github.com/aspnet/SignalR/blob/release/2.2/specs/TransportProtocols.md).
+## Data transmission between client and server
-## Data transmit between client and server
+When a client is connected to the SignalR Service, the service runtime finds a server connection to serve this client.
-When a client is connected to the SignalR Service, service runtime will find a server connection to serve this client
-- This step happens only once, and is a one-to-one mapping between the client and server connections.
+- This step happens only once, and is a one-to-one mapping between the client and server connection.
- The mapping is maintained in SignalR Service until the client or server disconnects.

At this point, the application server receives an event with information from the new client. A logical connection to the client is created in the application server. The data channel is established from client to application server, via SignalR Service.
-SignalR Service transmits data from the client to the pairing application server. And data from the application server will be sent to the mapped clients.
+SignalR Service transmits data from the client to the pairing application server. Data from the application server is sent to the mapped clients.
+
+SignalR Service doesn't save or store customer data; all customer data received is transmitted to the target server or clients in real time.
+
+The Azure SignalR Service acts as a logical transport layer between application server and clients. All persistent connections are offloaded to SignalR Service. As a result, the application server only needs to handle the business logic in the hub class, without worrying about client connections.
+
+## Next steps
-SignalR Service does not save or store customer data, all customer data received is transmitted to target server or clients in real-time.
+To learn more about Azure SignalR SDKs, see:
-As you can see, the Azure SignalR Service is essentially a logical transport layer between application server and clients. All persistent connections are offloaded to SignalR Service.
-Application server only needs to handle the business logic in hub class, without worrying about client connections.
+- [ASP.NET Core SignalR](/aspnet/core/signalr/introduction)
+- [ASP.NET SignalR](/aspnet/signalr/overview/getting-started/introduction-to-signalr)
+- [ASP.NET code samples](https://github.com/aspnet/AzureSignalR-samples)
azure-signalr Signalr Concept Messages And Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-messages-and-connections.md
description: An overview of key concepts about messages and connections in Azure
Previously updated : 08/05/2020 Last updated : 03/23/2023 # Messages and connections in Azure SignalR Service
-The billing model for Azure SignalR Service is based on the number of connections and the number of messages. This article explains how messages and connections are defined and counted for billing.
+The billing model for Azure SignalR Service is based on the number of connections and the number of outbound messages from the service. This article explains how messages and connections are defined and counted for billing.
## Message formats
Azure SignalR Service supports the same formats as ASP.NET Core SignalR: [JSON](
The following limits apply for Azure SignalR Service messages: * Client messages:
- * For long polling or server side events, the client cannot send messages larger than 1MB.
- * There is no size limit for Websockets for service.
- * App server can set a limit for client message size. Default is 32KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security?#buffer-management).
- * For serverless, the message size is limited by upstream implementation, but under 1MB is recommended.
+ * For long polling or server side events, the client can't send messages larger than 1 MB.
+ * There's no size limit for WebSockets in the service.
+ * App server can set a limit for client message size. Default is 32 KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security?#buffer-management).
+ * For serverless, the message size is limited by upstream implementation, but under 1 MB is recommended.
* Server messages:
- * There is no limit to server message size, but under 16MB is recommended.
- * App server can set a limit for client message size. Default is 32KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security?#buffer-management).
+ * There's no limit to server message size, but under 16 MB is recommended.
+ * App server can set a limit for client message size. Default is 32 KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security?#buffer-management).
* Serverless:
- * Rest API: 1MB for message body, 16KB for headers.
- * There is no limit for Websockets, [management SDK persistent mode](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md), but under 16MB is recommended.
+ * REST API: 1 MB for message body, 16 KB for headers.
+ * There's no limit for WebSockets or [management SDK persistent mode](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md), but under 16 MB is recommended.
-For Websockets clients, large messages are split into smaller messages that are no more than 2 KB each and transmitted separately. SDKs handle message splitting and assembling. No developer efforts are needed.
+For WebSocket clients, large messages are split into smaller messages that are no more than 2 KB each and transmitted separately. SDKs handle message splitting and assembling. No developer efforts are needed.
Large messages do negatively affect messaging performance. Use smaller messages whenever possible, and test to determine the optimal message size for each use-case scenario.

## How messages are counted for billing
-For billing, only outbound messages from Azure SignalR Service are counted. Ping messages between clients and servers are ignored.
+Messages sent into the service are inbound messages, and messages sent out of the service are outbound messages. Only outbound messages from Azure SignalR Service are counted for billing. Ping messages between clients and servers are ignored.
Messages larger than 2 KB are counted as multiple messages of 2 KB each. The message count chart in the Azure portal is updated every 100 messages per hub. For example, imagine you have one application server, and three clients:
-App server broadcasts a 1-KB message to all connected clients, the message from app server to the service is considered free inbound message. Only the three messages sending from service to each of the client are billed as outbound messages.
+* When the application server broadcasts a 1-KB message to all connected clients, the message from the application server to the service is considered a free inbound message.
-Client A sends a 1-KB message to another client B, without going through app server. The message from client A to service is free inbound message. The message from service to client B is billed as outbound message.
+* When *client A* sends a 1 KB inbound message to *client B*, without going through app server, the message is a free inbound message. The message routed from service to *client B* is billed as an outbound message.
-If you have three clients and one application server. One client sends a 4-KB message to let the server broadcast to all clients. The billed message count is eight: one message from the service to the application server and three messages from the service to the clients. Each message is counted as two 2-KB messages.
+* If you have three clients and one application server, when one client sends a 4-KB message for the server to broadcast to all clients, the billed message count is eight:
-## How connections are counted
+ * One message from the service to the application server.
+ * Three messages from the service to the clients. Each message is counted as two 2-KB messages.
-There are server connections and client connections with Azure SignalR Service. By default, each application server starts with five initial connections per hub, and each client has one client connection.
+## How connections are counted
-For example, assume that you have two application servers and you define five hubs in code. The server connection count will be 50: 2 app servers * 5 hubs * 5 connections per hub.
+The Azure SignalR Service creates application server and client connections. By default, each application server starts with five initial connections per hub, and each client has one client connection.
-The connection count shown in the Azure portal includes server connections, client connections, diagnostic connections, and live trace connections. The connection types are defined in the following list:
+For example, assume that you have two application servers and you define five hubs in code. The server connection count is 50: (2 app servers * 5 hubs * 5 connections per hub).
-- **Server connection**: Connects Azure SignalR Service and the app server.-- **Client connection**: Connects Azure SignalR Service and the client app.-- **Diagnostic connection**: A special kind of client connection that can produce a more detailed log, which might affect performance. This kind of client is designed for troubleshooting.-- **Live trace connection**: Connects to the live trace endpoint and receives live traces of Azure SignalR Service.
-
-Note that a live trace connection isn't counted as a client connection or as a server connection.
+The connection count shown in the Azure portal includes server, client, diagnostic, and live trace connections. The connection types are defined in the following list:
-ASP.NET SignalR calculates server connections in a different way. It includes one default hub in addition to hubs that you define. By default, each application server needs five more initial server connections. The initial connection count for the default hub stays consistent with other hubs.
+* **Server connection**: Connects Azure SignalR Service and the app server.
+* **Client connection**: Connects Azure SignalR Service and the client app.
+* **Diagnostic connection**: A special type of client connection that can produce a more detailed log, which might affect performance. This kind of client is designed for troubleshooting.
+* **Live trace connection**: Connects to the live trace endpoint and receives live traces of Azure SignalR Service.
-The service and the application server keep syncing connection status and making adjustment to server connections to get better performance and service stability. So you might see server connection number changes from time to time.
+A live trace connection isn't counted as a client connection or as a server connection.
-## How inbound/outbound traffic is counted
+ASP.NET SignalR calculates server connections in a different way. It includes one default hub in addition to hubs that you define. By default, each application server needs five more initial server connections. The initial connection count for the default hub stays consistent with other hubs.
-Message sent into the service is inbound message. Message sent out of the service is outbound message. Traffic is calculated in bytes.
+The service and the application server keep syncing connection status and making adjustments to server connections to get better performance and service stability. So you may see changes in the number of server connections in your running service.
## Related resources

-- [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr )
-- [ASP.NET Core SignalR configuration](/aspnet/core/signalr/configuration)
-- [JSON](https://www.json.org/)
-- [MessagePack](/aspnet/core/signalr/messagepackhubprotocol)
+* [Aggregation types in Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftsignalrservicesignalr )
+* [ASP.NET Core SignalR configuration](/aspnet/core/signalr/configuration)
+* [JSON](https://www.json.org/)
+* [MessagePack](/aspnet/core/signalr/messagepackhubprotocol)
azure-signalr Signalr Concept Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-performance.md
description: An overview of the performance and benchmark of Azure SignalR Servi
Previously updated : 11/13/2019 Last updated : 03/23/2023

# Performance guide for Azure SignalR Service

One of the key benefits of using Azure SignalR Service is the ease of scaling SignalR applications. In a large-scale scenario, performance is an important factor.
-In this guide, we'll introduce the factors that affect SignalR application performance. We'll describe typical performance in different use-case scenarios. In the end, we'll introduce the environment and tools that you can use to generate a performance report.
+This article describes:
+
+* The factors that affect SignalR application performance.
+* The typical performance in different use-case scenarios.
+* The environment and tools that you can use to generate a performance report.
## Quick evaluation using metrics
- Before going through the factors that impact the performance, let's first introduce an easy way to monitor the pressure of your service. There's a metrics called **Server Load** on the Portal.
-
- <kbd>![Screenshot of the Server Load metric of Azure SignalR on Portal. The metrics shows Server Load is at about 8 percent usage. ](./media/signalr-concept-performance/server-load.png "Server Load")</kbd>
+You can easily monitor your service in the Azure portal. From the **Metrics** page of your SignalR instance, you can select the **Server Load** metric to see the "pressure" of your service.
+
+<kbd>![Screenshot of the Server Load metric of Azure SignalR on Portal. The metrics shows Server Load is at about 8 percent usage. ](./media/signalr-concept-performance/server-load.png "Server Load")</kbd>
- It shows the computing pressure of your SignalR service. You could test on your own scenario and check this metrics to decide whether to scale up. The latency inside SignalR service would remain low if the Server Load is below 70%.
+The chart shows the computing pressure of your SignalR service. You can test your scenario and check this metric to decide whether to scale up. The latency inside SignalR service remains low if the Server Load is below 70%.
> [!NOTE]
> If you are using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size <100) or single connections, check [sending to small group](#small-group) or [sending to connection](#send-to-connection) for reference. In those scenarios there is a large routing cost that is not included in the Server Load.
-
- Below are detailed concepts for evaluating performance.
## Term definitions
In this guide, we'll introduce the factors that affect SignalR application perfo
*Bandwidth*: The total size of all messages in 1 second.
-*Default mode*: The default working mode when an Azure SignalR Service instance was created. Azure SignalR Service expects the app server to establish a connection with it before it accepts any client connections.
+*Default mode*: The default working mode when an Azure SignalR Service instance is created. Azure SignalR Service expects the app server to establish a connection with it before it accepts any client connections.
*Serverless mode*: A mode in which Azure SignalR Service accepts only client connections. No server connection is allowed.
In this guide, we'll introduce the factors that affect SignalR application perfo
Azure SignalR Service defines seven Standard tiers for different performance capacities. This guide answers the following questions:

-- What is the typical Azure SignalR Service performance for each tier?
+* What is the typical Azure SignalR Service performance for each tier?
-- Does Azure SignalR Service meet my requirements for message throughput (for example, sending 100,000 messages per second)?
+* Does Azure SignalR Service meet my requirements for message throughput (for example, sending 100,000 messages per second)?
-- For my specific scenario, which tier is suitable for me? Or how can I select the proper tier?
+* For my specific scenario, which tier is suitable for me? Or how can I select the proper tier?
-- What kind of app server (VM size) is suitable for me? How many of them should I deploy?
+* What kind of app server (VM size) is suitable for me? How many of them should I deploy?
To answer these questions, this guide first gives a high-level explanation of the factors that affect performance. It then illustrates the maximum inbound and outbound messages for every tier for typical use cases: **echo**, **broadcast**, **send to group**, and **send to connection** (peer-to-peer chatting). This guide can't cover all scenarios (and different use cases, message sizes, message sending patterns, and so on). But it provides some methods to help you:

-- Evaluate your approximate requirement for the inbound or outbound messages.
-- Find the proper tiers by checking the performance table.
+* Evaluate your approximate requirement for the inbound or outbound messages.
+* Find the proper tiers by checking the performance table.
## Performance insight
This section describes the performance evaluation methodologies, and then lists
*Throughput* and *latency* are two typical aspects of performance checking. For Azure SignalR Service, each SKU tier has its own throughput throttling policy. The policy defines *the maximum allowed throughput (inbound and outbound bandwidth)* as the maximum achieved throughput when 99 percent of messages have latency that's less than 1 second.
-Latency is the time span from the connection sending the message to receiving the response message from Azure SignalR Service. Let's take **echo** as an example. Every client connection adds a time stamp in the message. The app server's hub sends the original message back to the client. So the propagation delay is easily calculated by every client connection. The time stamp is attached for every message in **broadcast**, **send to group**, and **send to connection**.
+Latency is the time span from the connection sending the message to receiving the response message from Azure SignalR Service. Take **echo** as an example. Every client connection adds a time stamp in the message. The app server's hub sends the original message back to the client. So the propagation delay is easily calculated by every client connection. The time stamp is attached for every message in **broadcast**, **send to group**, and **send to connection**.
To simulate thousands of concurrent client connections, multiple VMs are created in a virtual private network in Azure. All of these VMs connect to the same Azure SignalR Service instance.
In the default mode of Azure SignalR Service, app server VMs are deployed in the
### Performance factors
-Theoretically, Azure SignalR Service capacity is limited by computation resources: CPU, memory, and network. For example, more connections to Azure SignalR Service cause the service to use more memory. For larger message traffic (for example, every message is larger than 2,048 bytes), Azure SignalR Service needs to spend more CPU cycles to process traffic. Meanwhile, Azure network bandwidth also imposes a limit for maximum traffic.
-
-The transport type is another factor that affects performance. The three types are [WebSocket](https://en.wikipedia.org/wiki/WebSocket), [Server-Sent-Event](https://en.wikipedia.org/wiki/Server-sent_events), and [Long-Polling](https://en.wikipedia.org/wiki/Push_technology).
-
-WebSocket is a bidirectional and full-duplex communication protocol over a single TCP connection. Server-Sent-Event is a unidirectional protocol to push messages from server to client. Long-Polling requires the clients to periodically poll information from the server through an HTTP request. For the same API under the same conditions, WebSocket has the best performance, Server-Sent-Event is slower, and Long-Polling is the slowest. Azure SignalR Service recommends WebSocket by default.
+The following factors affect SignalR performance.
-The message routing cost also limits performance. Azure SignalR Service plays a role as a message router, which routes the message from a set of clients or servers to other clients or servers. A different scenario or API requires a different routing policy.
+* SKU tier (CPU/memory)
+* Number of connections
+* Message size
+* Message send rate
+* Transport type (WebSocket, Server-Sent-Event, or Long-Polling)
+* Use-case scenario (routing cost)
+* App server and service connections (in server mode)
-For **echo**, the client sends a message to itself, and the routing destination is also itself. This pattern has the lowest routing cost. But for **broadcast**, **send to group**, and **send to connection**, Azure SignalR Service needs to look up the target connections through the internal distributed data structure. This extra processing uses more CPU, memory, and network bandwidth. As a result, performance is slower.
+#### Compute resources
-In the default mode, the app server might also become a bottleneck for certain scenarios. The Azure SignalR SDK has to invoke the hub, while it maintains a live connection with every client through heartbeat signals.
+Theoretically, Azure SignalR Service capacity is limited by compute resources: CPU, memory, and network. For example, more connections to Azure SignalR Service cause the service to use more memory. For larger message traffic (for example, every message is larger than 2,048 bytes), Azure SignalR Service needs to spend more CPU cycles to process traffic. Meanwhile, Azure network bandwidth also imposes a limit for maximum traffic.
-In serverless mode, the client sends a message by HTTP post, which is not as efficient as WebSocket.
+#### Transport type
-Another factor is protocol: JSON and [MessagePack](https://msgpack.org/https://docsupdatetracker.net/index.html). MessagePack is smaller in size and delivered faster than JSON. MessagePack might not improve performance, though. The performance of Azure SignalR Service is not sensitive to protocols because it doesn't decode the message payload during message forwarding from clients to servers or vice versa.
+The transport type is another factor that affects performance. The three types are:
-In summary, the following factors affect the inbound and outbound capacity:
+* [WebSocket](https://en.wikipedia.org/wiki/WebSocket): WebSocket is a bidirectional and full-duplex communication protocol over a single TCP connection.
+* [Server-Sent-Event](https://en.wikipedia.org/wiki/Server-sent_events): Server-Sent-Event is a unidirectional protocol to push messages from server to client.
+* [Long-Polling](https://en.wikipedia.org/wiki/Push_technology): Long-Polling requires the clients to periodically poll information from the server through an HTTP request.
-- SKU tier (CPU/memory)
+For the same API under the same conditions, WebSocket has the best performance, Server-Sent-Event is slower, and Long-Polling is the slowest. Azure SignalR Service recommends WebSocket by default.
-- Number of connections
+#### Message routing cost
-- Message size
+The message routing cost also limits performance. Azure SignalR Service plays a role as a message router, which routes the message from a set of clients or servers to other clients or servers. A different scenario or API requires a different routing policy.
-- Message send rate
+For **echo**, the client sends a message to itself, and the routing destination is also itself. This pattern has the lowest routing cost. But for **broadcast**, **send to group**, and **send to connection**, Azure SignalR Service needs to look up the target connections through the internal distributed data structure. This extra processing uses more CPU, memory, and network bandwidth. As a result, performance is slower.
-- Transport type (WebSocket, Server-Sent-Event, or Long-Polling)
+In the default mode, the app server might also become a bottleneck for certain scenarios. The Azure SignalR SDK has to invoke the hub, while it maintains a live connection with every client through heartbeat signals.
-- Use-case scenario (routing cost)
+In serverless mode, the client sends a message by HTTP post, which isn't as efficient as WebSocket.
-- App server and service connections (in server mode)
+#### Protocol
+Another factor is protocol: JSON and [MessagePack](https://msgpack.org/). MessagePack is smaller in size and delivered faster than JSON. MessagePack might not improve performance, though. The performance of Azure SignalR Service isn't sensitive to protocols because it doesn't decode the message payload during message forwarding from clients to servers or vice versa.
### Finding a proper SKU

How can you evaluate the inbound/outbound capacity or find which tier is suitable for a specific use case?
-Assume that the app server is powerful enough and is not the performance bottleneck. Then, check the maximum inbound and outbound bandwidth for every tier.
+Assume that the app server is powerful enough and isn't the performance bottleneck. Then, check the maximum inbound and outbound bandwidth for every tier.
#### Quick evaluation
-Let's simplify the evaluation first by assuming some default settings:
+For a quick evaluation, assume the following default settings:
-- The transport type is WebSocket.
-- The message size is 2,048 bytes.
-- A message is sent every 1 second.
-- Azure SignalR Service is in the default mode.
+* The transport type is WebSocket.
+* The message size is 2,048 bytes.
+* A message is sent every 1 second.
+* Azure SignalR Service is in the default mode.
-Every tier has its own maximum inbound bandwidth and outbound bandwidth. A smooth user experience is not guaranteed after the inbound or outbound connection exceeds the limit.
+Every tier has its own maximum inbound bandwidth and outbound bandwidth. A smooth user experience isn't guaranteed after the inbound or outbound connection exceeds the limit.
**Echo** gives the maximum inbound bandwidth because it has the lowest routing cost. **Broadcast** defines the maximum outbound message bandwidth.
Do *not* exceed the highlighted values in the following two tables.
outboundBandwidth = outboundConnections * messageSize / sendInterval
```

-- *inboundConnections*: The number of connections sending the message.
+* *inboundConnections*: The number of connections sending the message.
-- *outboundConnections*: The number of connections receiving the message.
+* *outboundConnections*: The number of connections receiving the message.
-- *messageSize*: The size of a single message (average value). A small message that's less than 1,024 bytes has a performance impact that's similar to a 1,024-byte message.
+* *messageSize*: The size of a single message (average value). A small message that's less than 1,024 bytes has a performance impact that's similar to a 1,024-byte message.
-- *sendInterval*: The time of sending one message. Typically it's 1 second per message, which means sending one message every second. A smaller interval means sending more message in a time period. For example, 0.5 seconds per message means sending two messages every second.
+* *sendInterval*: The time of sending one message. Typically it's 1 second per message, which means sending one message every second. A smaller interval means sending more messages in a time period. For example, 0.5 seconds per message means sending two messages every second.
-- *Connections*: The committed maximum threshold for Azure SignalR Service for every tier. If the connection number is increased further, it will suffer from connection throttling.
+* *Connections*: The committed maximum threshold for Azure SignalR Service for every tier. If the connection number is increased further, it suffers from connection throttling.
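As a sanity check, the two formulas can be evaluated directly. The following sketch uses sample inputs, not published tier limits.

```cs
using System;

// bandwidth = connections * messageSize / sendInterval (bytes per second)
static double Bandwidth(int connections, int messageSizeBytes, double sendIntervalSeconds)
    => connections * (double)messageSizeBytes / sendIntervalSeconds;

// 1,000 connections, 2,048-byte messages, one message per second:
Console.WriteLine(Bandwidth(1_000, 2_048, 1.0)); // 2,048,000 bytes/second
```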
#### Evaluation for complex use cases

##### Bigger message size or different sending rate
-The real use case is more complicated. It might send a message larger than 2,048 bytes, or the sending message rate is not one message per second. Let's take Unit100's broadcast as an example to find how to evaluate its performance.
+The real use case is more complicated. It might send a message larger than 2,048 bytes, or the sending message rate isn't one message per second. Let's take Unit100's broadcast as an example to find how to evaluate its performance.
The following table shows a real use case of **broadcast**. But the message size, connection count, and message sending rate are different from what we assumed in the previous section. The question is how we can deduce any of those items (message size, connection count, or message sending rate) if we know only two of them.
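One way to answer that question is to rearrange the bandwidth formula and solve for the unknown item, as in the following sketch. The bandwidth budget below is a placeholder; substitute your tier's actual limit from the tables.

```cs
using System;

// Rearranged from: outboundBandwidth = connections * messageSize / sendInterval
const double outboundBandwidthBudget = 100_000_000; // placeholder, bytes/second
const int messageSizeBytes = 4_096;
const int connections = 10_000;

double sendInterval = connections * (double)messageSizeBytes / outboundBandwidthBudget;
Console.WriteLine($"Each connection must wait at least {sendInterval:F2} s between messages.");
```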
Then pick up the proper tier from the maximum inbound/outbound bandwidth tables.
> [!NOTE]
> For sending a message to hundreds or thousands of small groups, or for thousands of clients sending a message to each other, the routing cost will become dominant. Take this impact into account.
-For the use case of sending a message to clients, make sure that the app server is *not* the bottleneck. The following "Case study" section gives guidelines about how many app servers you need and how many server connections you should configure.
+For the use case of sending a message to clients, make sure that the app server *isn't* the bottleneck. The following "Case study" section gives guidelines about how many app servers you need and how many server connections you should configure.
## Case study
Even for this simple hub, the traffic pressure on the app server is prominent as
#### Broadcast
-For **broadcast**, when the web app receives the message, it broadcasts to all clients. The more clients there are to broadcast, the more message traffic there is to all clients. See the following diagram.
+For **broadcast**, when the web app receives the message, it broadcasts to all clients. The more clients there are to broadcast to, the more message traffic goes to all clients. See the following diagram.
![Traffic for the broadcast use case](./media/signalr-concept-performance/broadcast.png)
The **send to group** use case has a similar traffic pattern to **broadcast**. T
Group member and group count are two factors that affect performance. To simplify the analysis, we define two kinds of groups:

-- **Small group**: Every group has 10 connections. The group number is equal to (max
+* **Small group**: Every group has 10 connections. The group number is equal to (max
connection count) / 10. For example, for Unit1, if there are 1,000 connection counts, then we have 1000 / 10 = 100 groups.

-- **Big group**: The group number is always 10. The group member count is equal to (max
+* **Big group**: The group number is always 10. The group member count is equal to (max
connection count) / 10. For example, for Unit1, if there are 1,000 connection counts, then every group has 1000 / 10 = 100 members.

**Send to group** brings a routing cost to Azure SignalR Service because it has to find the target connections through a distributed data structure. As the sending connections increase, the cost increases.
The following table gives the suggested web app count for ASP.NET SignalR **send
Clients and Azure SignalR Service are involved in serverless mode. Every client stands for a single connection. The client sends messages through the REST API to another client or broadcast messages to all.
-Sending high-density messages through the REST API is not as efficient as using WebSocket. It requires you to build a new HTTP connection every time, and that's an extra cost in serverless mode.
+Sending high-density messages through the REST API isn't as efficient as using WebSocket. It requires you to build a new HTTP connection every time, and that's an extra cost in serverless mode.
#### Broadcast through REST API
-All clients establish WebSocket connections with Azure SignalR Service. Then some clients start broadcasting through the REST API. The message sending (inbound) is all through HTTP Post, which is not efficient compared with WebSocket.
+All clients establish WebSocket connections with Azure SignalR Service. Then some clients start broadcasting through the REST API. The message sending (inbound) is all through HTTP Post, which isn't efficient compared with WebSocket.
| Broadcast through REST API | Unit1 | Unit2 | Unit5 | Unit10 | Unit20 | Unit50 | Unit100 |
|---|---|---|---|---|---|---|---|
azure-signalr Signalr Howto Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md
Title: Authorize request to SignalR resources with Azure AD from managed identities
+ Title: Authorize managed identity requests to a SignalR resource
description: This article provides information about authorizing requests to SignalR resources with Azure AD from managed identities
Previously updated : 07/18/2022 Last updated : 03/28/2023
ms.devlang: csharp
-# Authorize request to SignalR resources with Azure AD from managed identities
+# Authorize managed identity requests to a SignalR resource
-Azure SignalR Service supports Azure Active Directory (Azure AD) authorizing requests from Azure resources using [Managed identities for Azure resources
+Azure SignalR Service supports Azure Active Directory (Azure AD) authorization of requests from Azure resources using [managed identities for Azure resources
](../active-directory/managed-identities-azure-resources/overview.md). This article shows how to configure your SignalR resource and code to authorize a managed identity request to a SignalR resource.
This example shows you how to configure `System-assigned managed identity` on a
1. Select the **Save** button to confirm the change.
-To learn how to create user-assigned managed identities, see this article:
-- [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity)
+To learn how to create user-assigned managed identities, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity).
To learn more about configuring managed identities, see one of these articles:
The following steps describe how to assign a `SignalR App Server` role to a syst
1. Select your Azure subscription.
-1. Select **System-assigned managed identity**, search for a virtual machine to which would you'd like to assign the role, and then select it.
+1. Select **System-assigned managed identity**, search for a virtual machine to which you'd like to assign the role, and then select it.
1. On the **Review + assign** tab, select **Review + assign** to assign the role.
To learn more about how to assign and manage Azure role assignments, see these a
#### Using system-assigned identity
-You can use either [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential) or [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential) to configure your SignalR endpoints.
+You can use either [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential) or [ManagedIdentityCredential](/dotnet/api/azure.identity.managedidentitycredential) to configure your SignalR endpoints. However, the best practice is to use `ManagedIdentityCredential` directly.
-However, the best practice is to use `ManagedIdentityCredential` directly.
-
-The system-assigned managed identity will be used by default, but **make sure that you don't configure any environment variables** that the [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) preserved if you were using `DefaultAzureCredential`. Otherwise it will fall back to use `EnvironmentCredential` to make the request and it will result to a `Unauthorized` response in most cases.
+The system-assigned managed identity is used by default, but **make sure that you don't configure any environment variables** that the [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) reads if you use `DefaultAzureCredential`. Otherwise, it falls back to `EnvironmentCredential` to make the request, which results in an `Unauthorized` response in most cases.
```C#
services.AddSignalR().AddAzureSignalR(option =>
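{
    // Hedged completion: the body of this block is elided in the diff above.
    // One plausible shape (assumption, requires `using Azure.Identity;`): bind
    // the endpoint to the system-assigned identity via ManagedIdentityCredential.
    option.Endpoints = new ServiceEndpoint[]
    {
        new ServiceEndpoint(new Uri("https://<resource-name>.service.signalr.net"), new ManagedIdentityCredential()),
    };
});
```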
You might need a group of key-value pairs to configure an identity. The keys of
#### Using system-assigned identity
-If you only configure the service URI, then the `DefaultAzureCredential` is used. This class is useful when you want to share the same configuration on Azure and local dev environment. To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).
+If you only configure the service URI, then the `DefaultAzureCredential` is used. This class is useful when you want to share the same configuration on Azure and local development environments. To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential).
-On Azure portal, use the following example to configure a `DefaultAzureCredential`. If don't configure any [environment variables listed here](/dotnet/api/overview/azure/identity-readme#environment-variables), then the system-assigned identity will be used to authenticate.
+In the Azure portal, use the following example to configure a `DefaultAzureCredential`. If you don't configure any [environment variables listed here](/dotnet/api/overview/azure/identity-readme#environment-variables), then the system-assigned identity is used to authenticate.
```
<CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
```
-Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` file. At the local scope there's no managed identity, and the authentication via Visual Studio, Azure CLI, and Azure PowerShell accounts will be attempted in order.
+Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` file. At the local scope there's no managed identity, and the authentication via Visual Studio, Azure CLI, and Azure PowerShell accounts are attempted in order.
```json
{
  "Values": {
Here's a config sample of `DefaultAzureCredential` in the `local.settings.json`
}
```
-If you want to use system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), you should set the `credential` key with connection name prefix to `managedidentity`. Here's an application settings sample:
+If you want to use system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), you should set the `credential` key with the connection name prefix to `managedidentity`. Here's an application settings sample:
```
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
If you want to use system-assigned identity independently and without the influe
#### Using user-assigned identity
-If you want to use user-assigned identity, you need to assign one more `clientId` key with connection name prefix compared to system-assigned identity. Here's the application settings sample:
+If you want to use user-assigned identity, you need to assign `clientId` in addition to the `serviceUri` and `credential` keys with the connection name prefix. Here's the application settings sample:
+
```
<CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net
<CONNECTION_NAME_PREFIX>__credential = managedidentity
<CONNECTION_NAME_PREFIX>__clientId = <CLIENT_ID>
```
+

## Next steps

See the following related articles:
azure-signalr Signalr Howto Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-key-rotation.md
Title: How to rotate access key for Azure SignalR Service
+ Title: Rotate access keys for Azure SignalR Service
description: An overview on why the customer needs to routinely rotate the access keys and how to do it with the Azure portal GUI and the Azure CLI.
Previously updated : 07/18/2022 Last updated : 03/29/2023
-# How to rotate access key for Azure SignalR Service
+# Rotate access keys for Azure SignalR Service
-Each Azure SignalR Service instance has a pair of access keys called Primary and Secondary keys. They're used to authenticate SignalR clients when requests are made to the service. The keys are associated with the instance endpoint URL. Keep your keys secure, and rotate them regularly. You're provided with two access keys so that you can maintain connections by using one key while regenerating the other.
+For security reasons and compliance requirements, it's important to routinely rotate your access keys. This article describes how to rotate access keys for Azure SignalR Service.
-## Why rotate access keys?
+Each Azure SignalR Service instance has a primary and a secondary key. They're used to authenticate SignalR clients when requests are made to the service. The keys are associated with the instance endpoint URL. Keep your keys secure, and rotate them regularly. You're provided with two access keys so that you can maintain connections by using one key while regenerating the other.
-For security reasons and compliance requirements, routinely rotate your access keys.
## Regenerate access keys
-1. Go to the [Azure portal](https://portal.azure.com/), and sign in with your credentials.
-
-1. Find the **Keys** section in the Azure SignalR Service instance with the keys that you want to regenerate.
-
-1. Select **Keys** on the navigation menu.
-
+1. Go to your SignalR instance in the [Azure portal](https://portal.azure.com/).
+1. Select **Keys** on the left side menu.
1. Select **Regenerate Primary Key** or **Regenerate Secondary Key**.
- A new key and corresponding connection string are created and displayed.
+A new key and corresponding connection string are created and displayed.
- ![Regenerate Keys](media/signalr-howto-key-rotation/regenerate-keys.png)
You also can regenerate keys by using the [Azure CLI](/cli/azure/signalr/key#az-signalr-key-renew).

## Update configurations with new connection strings

1. Copy the newly generated connection string.
1. Update all configurations to use the new connection string.
1. Restart the application as needed.

## Forced access key regeneration
-Azure SignalR Service might enforce a mandatory access key regeneration under certain situations. The service notifies customers via email and portal notification. If you receive this communication or encounter service failure due to an access key, rotate the keys by following the instructions in this guide.
+The Azure SignalR Service can enforce a mandatory access key regeneration under certain situations. The service notifies customers of mandatory key regeneration via email and portal notification. If you receive this communication or encounter service failure due to an access key, rotate the keys by following the instructions in this guide.
## Next steps
-Rotate your access keys regularly as a good security practice.
-
-In this guide, you learned how to regenerate access keys. Continue to the next tutorials about authentication with OAuth or with Azure Functions.
- > [!div class="nextstepaction"]
-> [Integrate with ASP.NET core identity](./signalr-concept-authenticate-oauth.md)
+> [Azure SignalR Service authentication](./signalr-concept-authenticate-oauth.md)
> [!div class="nextstepaction"]
> [Build a serverless real-time app with authentication](./signalr-tutorial-authenticate-azure-functions.md)
azure-signalr Signalr Howto Scale Multi Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-scale-multi-instances.md
ms.devlang: csharp
Previously updated : 07/18/2022 Last updated : 03/23/2023
-# How to scale SignalR Service with multiple instances?
+# Scale SignalR Service with multiple instances
+
SignalR Service SDK supports multiple endpoints for SignalR Service instances. You can use this feature to scale the concurrent connections, or use it for cross-region messaging.

## For ASP.NET Core
-### How to add multiple endpoints from config?
+### Add multiple endpoints from config
-Config with key `Azure:SignalR:ConnectionString` or `Azure:SignalR:ConnectionString:` for SignalR Service connection string.
+Configure the SignalR Service connection string with the key `Azure:SignalR:ConnectionString` or `Azure:SignalR:ConnectionString:`.
-If the key starts with `Azure:SignalR:ConnectionString:`, it should be in format `Azure:SignalR:ConnectionString:{Name}:{EndpointType}`, where `Name` and `EndpointType` are properties of the `ServiceEndpoint` object, and are accessible from code.
+If the key starts with `Azure:SignalR:ConnectionString:`, it should be in the format `Azure:SignalR:ConnectionString:{Name}:{EndpointType}`, where `Name` and `EndpointType` are properties of the `ServiceEndpoint` object, and are accessible from code.
You can add multiple instance connection strings using the following `dotnet` commands:
dotnet user-secrets set Azure:SignalR:ConnectionString:east-region-b:primary <Co
dotnet user-secrets set Azure:SignalR:ConnectionString:backup:secondary <ConnectionString3>
```
-### How to add multiple endpoints from code?
+### Add multiple endpoints from code
-A `ServicEndpoint` class is introduced to describe the properties of an Azure SignalR Service endpoint.
+A `ServiceEndpoint` class describes the properties of an Azure SignalR Service endpoint.
You can configure multiple instance endpoints when using Azure SignalR Service SDK through:

```cs
services.AddSignalR()
});
```
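The middle of the block above is elided by the diff. Pieced together, the registration looks roughly like the following sketch; the endpoint names and types mirror the config keys shown earlier, and the inline connection strings are placeholders for demonstration only (fetch real ones from a safe place such as Azure Key Vault).

```cs
services.AddSignalR()
        .AddAzureSignalR(options =>
        {
            options.Endpoints = new ServiceEndpoint[]
            {
                // Name and EndpointType mirror the config keys shown earlier,
                // e.g. Azure:SignalR:ConnectionString:east-region-a:primary.
                new ServiceEndpoint("<ConnectionString1>", type: EndpointType.Primary, name: "east-region-a"),
                new ServiceEndpoint("<ConnectionString2>", type: EndpointType.Primary, name: "east-region-b"),
                new ServiceEndpoint("<ConnectionString3>", type: EndpointType.Secondary, name: "backup"),
            };
        });
```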
-### How to customize endpoint router?
+### Customize endpoint router
By default, the SDK uses the [DefaultEndpointRouter](https://github.com/Azure/azure-signalr/blob/dev/src/Microsoft.Azure.SignalR/EndpointRouters/DefaultEndpointRouter.cs) to pick up endpoints.

#### Default behavior
-1. Client request routing
+
+1. Client request routing:
When a client sends a `/negotiate` request to the app server, the SDK **randomly selects** one endpoint from the set of available service endpoints by default.
-2. Server message routing
+2. Server message routing:
- When sending a message to a specific *connection* and the target connection is routed to current server, the message goes directly to that connected endpoint. Otherwise, the messages are broadcasted to every Azure SignalR endpoint.
+ When sending a message to a specific *connection* and the target connection is routed to the current server, the message goes directly to that connected endpoint. Otherwise, the messages are broadcasted to every Azure SignalR endpoint.
#### Customize routing algorithm
+
You can create your own router when you have special knowledge to identify which endpoints the messages should go to.
-A custom router is defined below as an example when groups starting with `east-` always go to the endpoint named `east`:
+The following example defines a custom router that routes messages with a group starting with `east-` to the endpoint named `east`:
```cs
private class CustomRouter : EndpointRouterDecorator
private class CustomRouter : EndpointRouterDecorator
}
```
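Because the diff only shows fragments of the block above, here's a fuller sketch of the same idea, under two assumptions: `EndpointRouterDecorator` exposes a `GetEndpointsForGroup` override taking the group name and candidate endpoints, and an endpoint named `east` exists in your configuration. As in the document's other snippets, the class is assumed to be nested inside your startup class (usings for `System.Linq` and `Microsoft.Azure.SignalR` are implied).

```cs
private class CustomRouter : EndpointRouterDecorator
{
    public override IEnumerable<ServiceEndpoint> GetEndpointsForGroup(
        string groupName, IEnumerable<ServiceEndpoint> endpoints)
    {
        // Route groups named "east-*" only to the endpoint named "east"
        // (assumption: the endpoint name matches the config keys shown earlier).
        return groupName.StartsWith("east-")
            ? endpoints.Where(e => e.Name == "east")
            : base.GetEndpointsForGroup(groupName, endpoints);
    }
}
```

For the router to take effect, register it before `AddAzureSignalR`, for example with `services.AddSingleton(typeof(IEndpointRouter), typeof(CustomRouter));`.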
-Another example below, that overrides the default negotiate behavior, to select the endpoints depends on where the app server is located.
+The following example overrides the default negotiate behavior and selects the endpoint depending on the location of the app server.
```cs
private class CustomRouter : EndpointRouterDecorator
-{
- public override ServiceEndpoint GetNegotiateEndpoint(HttpContext context, IEnumerable<ServiceEndpoint> endpoints)
+{
+    public override ServiceEndpoint GetNegotiateEndpoint(HttpContext context, IEnumerable<ServiceEndpoint> endpoints)
    {
        // Override the negotiate behavior to get the endpoint from query string
        var endpointName = context.Request.Query["endpoint"];
services.AddSignalR()
## For ASP.NET
-### How to add multiple endpoints from config?
+### Add multiple endpoints from config
-Config with key `Azure:SignalR:ConnectionString` or `Azure:SignalR:ConnectionString:` for SignalR Service connection string.
+Configure the SignalR Service connection string with the key `Azure:SignalR:ConnectionString` or `Azure:SignalR:ConnectionString:`.
If the key starts with `Azure:SignalR:ConnectionString:`, it should be in format `Azure:SignalR:ConnectionString:{Name}:{EndpointType}`, where `Name` and `EndpointType` are properties of the `ServiceEndpoint` object, and are accessible from code.
You can add multiple instance connection strings to `web.config`:
</configuration>
```
-### How to add multiple endpoints from code?
+### Add multiple endpoints from code
-A `ServicEndpoint` class is introduced to describe the properties of an Azure SignalR Service endpoint.
+A `ServiceEndpoint` class describes the properties of an Azure SignalR Service endpoint.
You can configure multiple instance endpoints when using Azure SignalR Service SDK through:

```cs
app.MapAzureSignalR(
    options => {
        options.Endpoints = new ServiceEndpoint[]
        {
- // Note: this is just a demonstration of how to set options.Endpoints
- // Having ConnectionStrings explicitly set inside the code is not encouraged
+ // Note: this is just a demonstration of how to set options.Endpoints
+ // Having ConnectionStrings explicitly set inside the code is not encouraged.
// You can fetch it from a safe place such as Azure KeyVault new ServiceEndpoint("<ConnectionString1>"), new ServiceEndpoint("<ConnectionString2>"),
app.MapAzureSignalR(
});
```
-### How to customize router?
+### Customize a router
The only difference between ASP.NET SignalR and ASP.NET Core SignalR is the http context type for `GetNegotiateEndpoint`. For ASP.NET SignalR, it is of [IOwinContext](https://github.com/Azure/azure-signalr/blob/dev/src/Microsoft.Azure.SignalR.AspNet/EndpointRouters/DefaultEndpointRouter.cs#L19) type.
-Below is the custom negotiate example for ASP.NET SignalR:
+The following code is a custom negotiate example for ASP.NET SignalR:
```cs
private class CustomRouter : EndpointRouterDecorator
app.MapAzureSignalR(GetType().FullName, hub, options => {
## Service Endpoint Metrics
-To enable advanced router, SignalR server SDK provides multiple metrics to help server do smart decision. The properties are under `ServiceEndpoint.EndpointMetrics`.
+To enable an advanced router, the SignalR server SDK provides multiple metrics to help the server make smart decisions. The properties are under `ServiceEndpoint.EndpointMetrics`.
| Metric Name | Description |
-| -- | -- |
-| `ClientConnectionCount` | Total concurrent connected client connection count on all hubs for the service endpoint |
-| `ServerConnectionCount` | Total concurrent connected server connection count on all hubs for the service endpoint |
+|--|--|
+| `ClientConnectionCount` | Total count of concurrent client connections on all hubs for the service endpoint |
+| `ServerConnectionCount` | Total count of concurrent server connections on all hubs for the service endpoint |
| `ConnectionCapacity` | Total connection quota for the service endpoint, including client and server connections |
-Below is an example to customize router according to `ClientConnectionCount`.
+The following code is an example of customizing a router according to `ClientConnectionCount`.
```cs
private class CustomRouter : EndpointRouterDecorator
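{
    // Hedged completion: the method body is elided in the diff above. One
    // plausible shape (assumption): negotiate against the online endpoint with
    // the fewest client connections, falling back to the default selection.
    public override ServiceEndpoint GetNegotiateEndpoint(HttpContext context, IEnumerable<ServiceEndpoint> endpoints)
    {
        return endpoints
            .Where(e => e.Online)
            .OrderBy(e => e.EndpointMetrics.ClientConnectionCount)
            .FirstOrDefault()
            ?? base.GetNegotiateEndpoint(context, endpoints);
    }
}
```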
From SDK version 1.5.0, we're enabling dynamic scale ServiceEndpoints for ASP.NE
> [!NOTE] >
-> Considering the time of connection set-up between server/service and client/service may be different, to ensure no message loss during the scale process, we have a staging period waiting for server connections be ready before open the new ServiceEndpoint to clients. Usually it takes seconds to complete and you'll be able to see log like `Succeed in adding endpoint: '{endpoint}'` which indicates the process complete. But for some unexpected reasons like cross-region network issue or configuration inconsistent on different app servers, the staging period will not be able to finish correctly. Since limited things can be done in these cases, we choose to promote the scale as it is. It's suggested to restart App Server when you find the scaling process not working correctly.
->
-> The default timeout period for the scale is 5 minutes, and it can be customized by changing the value in `ServiceOptions.ServiceScaleTimeout`. If you have a lot of app servers, it's suggested to extend the value a little bit more.
+> Considering that connection set-up times between server/service and client/service may differ, to ensure no message loss during the scale process there is a staging period that waits for server connections to be ready before opening the new ServiceEndpoint to clients. It usually takes seconds to complete, and you'll see a log message like `Succeed in adding endpoint: '{endpoint}'`, which indicates that the process is complete.
+>
+> In some unexpected situations, like cross-region network issues or configuration inconsistencies on different app servers, the staging period may not finish correctly. In these cases, it's suggested to restart the app server when you find the scaling process not working correctly.
+>
+> The default timeout period for the scale is 5 minutes, and it can be customized by changing the value in `ServiceOptions.ServiceScaleTimeout`. If you have a lot of app servers, it's suggested to extend the value a little bit more.
## Configuration in cross-region scenarios The `ServiceEndpoint` object has an `EndpointType` property with value `primary` or `secondary`.
-`primary` endpoints are preferred endpoints to receive client traffic, and are considered to have more reliable network connections; `secondary` endpoints are considered to have less reliable network connections and are used only for taking server to client traffic, for example, broadcasting messages, not for taking client to server traffic.
+Primary endpoints are preferred endpoints to receive client traffic because they have more reliable network connections. Secondary endpoints have less reliable network connections and are used only for server-to-client traffic. For example, secondary endpoints are used for broadcasting messages instead of client-to-server traffic.
-In cross-region cases, network can be unstable. For one app server located in *East US*, the SignalR Service endpoint located in the same *East US* region can be configured as `primary` and endpoints in other regions marked as `secondary`. In this configuration, service endpoints in other regions can **receive** messages from this *East US* app server, but there will be no **cross-region** clients routed to this app server. The architecture is shown in the diagram below:
+In cross-region cases, the network can be unstable. For an app server located in *East US*, the SignalR Service endpoint located in the same *East US* region is `primary`, and endpoints in other regions are marked as `secondary`. In this configuration, service endpoints in other regions can **receive** messages from this *East US* app server, but no **cross-region** clients are routed to this app server. The following diagram shows the architecture:
![Cross-Geo Infra](./media/signalr-howto-scale-multi-instances/cross_geo_infra.png)
-When a client tries `/negotiate` with the app server, with the default router, SDK **randomly selects** one endpoint from the set of available `primary` endpoints. When the primary endpoint isn't available, SDK then **randomly selects** from all available `secondary` endpoints. The endpoint is marked as **available** when the connection between server and the service endpoint is alive.
+When a client tries `/negotiate` with the app server with a default router, the SDK **randomly selects** one endpoint from the set of available `primary` endpoints. When the primary endpoint isn't available, the SDK then **randomly selects** from all available `secondary` endpoints. The endpoint is marked as **available** when the connection between server and the service endpoint is alive.
-In cross-region scenario, when a client tries `/negotiate` with the app server hosted in *East US*, by default it always returns the `primary` endpoint located in the same region. When all *East US* endpoints aren't available, the client is redirected to endpoints in other regions. Fail over section below describes the scenario in detail.
+In a cross-region scenario, when a client tries `/negotiate` with the app server hosted in *East US*, by default it always returns the `primary` endpoint located in the same region. When all *East US* endpoints aren't available, the router redirects the client to endpoints in other regions. The following [failover](#failover) section describes the scenario in detail.
![Normal Negotiate](./media/signalr-howto-scale-multi-instances/normal_negotiate.png)
-## Fail-over
+## Failover
-When all `primary` endpoints aren't available, client's `/negotiate` picks from the available `secondary` endpoints. This fail-over mechanism requires that each endpoint should serve as `primary` endpoint to at least one app server.
+When no `primary` endpoint is available, the client's `/negotiate` picks from the available `secondary` endpoints. This failover mechanism requires that each endpoint serves as a `primary` endpoint to at least one app server.
-![Fail-over](./media/signalr-howto-scale-multi-instances/failover_negotiate.png)
+![Diagram showing the Failover mechanism process.](./media/signalr-howto-scale-multi-instances/failover_negotiate.png)
## Next steps
-In this guide, you learned about how to configure multiple instances in the same application for scaling, sharding, and cross-region scenarios.
-
-Multiple endpoints supports can also be used in high availability and disaster recovery scenarios.
+You can use multiple endpoints in high availability and disaster recovery scenarios.
> [!div class="nextstepaction"] > [Setup SignalR Service for disaster recovery and high availability](./signalr-concept-disaster-recovery.md)
azure-signalr Signalr Reference Data Plane Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-reference-data-plane-rest-api.md
Previously updated : 06/09/2022 Last updated : 03/29/2023

# Azure SignalR service data plane REST API reference
+In addition to the classic client-server pattern, Azure SignalR Service provides a set of REST APIs so that you can easily integrate real-time functionality into your serverless architecture.
+ > [!NOTE]
->
> Azure SignalR Service only supports using REST API to manage clients connected using ASP.NET Core SignalR. Clients connected using ASP.NET SignalR use a different data protocol that is not currently supported.
-On top of the classical client-server pattern, Azure SignalR Service provides a set of REST APIs so that you can easily integrate real-time functionality into your server-less architecture.
- <a name="serverless"></a>
-## Typical Server-less Architecture with Azure Functions
+## Typical serverless architecture with Azure Functions
-The following diagram shows a typical server-less architecture using Azure SignalR Service with Azure Functions.
+The following diagram shows a typical serverless architecture using Azure SignalR Service with Azure Functions.
:::image type="content" source="./media/signalr-reference-data-plane-rest-api/serverless-arch.png" alt-text="Diagram of a typical serverless architecture for Azure SignalR service":::

-- `negotiate` function returns a negotiation response and redirects all clients to SignalR Service.
-- `broadcast` function calls SignalR Service's REST API. Then SignalR Service will broadcast the message to all connected clients.
+- The `negotiate` function returns a negotiation response and redirects all clients to SignalR Service.
+- The `broadcast` function calls SignalR Service's REST API. The SignalR Service broadcasts the message to all connected clients.
-In a server-less architecture, clients still have persistent connections to the SignalR Service.
+In a serverless architecture, clients still have persistent connections to the SignalR Service.
Since there's no application server to handle traffic, clients are in `LISTEN` mode, which means they can only receive messages but can't send messages.
-SignalR Service will disconnect any client that sends messages because it's an invalid operation.
+SignalR Service disconnects any client that sends messages because it's an invalid operation.
You can find a complete sample of using SignalR Service with Azure Functions [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/RealtimeSignIn).

## API
-The following table shows all versions of REST API we have for now. You can also find the swagger file for each version of REST API.
+The following table shows all supported versions of REST API. You can also find the swagger file for each version of REST API.
API Version | Status | Port | Doc | Spec
---|---|---|---|---
`1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json)
`1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json)
-The latest available APIs are listed as following.
-
+The available APIs are listed as follows.
| API | Path |
-| - | - |
-| [Get service health status.](./swagger/signalr-data-plane-rest-v20220601.md#head-get-service-health-status) | `HEAD /api/health` |
-| [Close all of the connections in the hub.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-all-of-the-connections-in-the-hub) | `POST /api/hubs/{hub}/:closeConnections` |
-| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/hubs/{hub}/:send` |
-| [Check if the connection with the given connectionId exists](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-the-connection-with-the-given-connectionid-exists) | `HEAD /api/hubs/{hub}/connections/{connectionId}` |
-| [Close the client connection](./swagger/signalr-data-plane-rest-v20220601.md#delete-close-the-client-connection) | `DELETE /api/hubs/{hub}/connections/{connectionId}` |
-| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v20220601.md#post-send-message-to-the-specific-connection) | `POST /api/hubs/{hub}/connections/{connectionId}/:send` |
-| [Check if there are any client connections inside the given group](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-there-are-any-client-connections-inside-the-given-group) | `HEAD /api/hubs/{hub}/groups/{group}` |
-| [Close connections in the specific group.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-connections-in-the-specific-group) | `POST /api/hubs/{hub}/groups/{group}/:closeConnections` |
-| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/hubs/{hub}/groups/{group}/:send` |
-| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v20220601.md#put-add-a-connection-to-the-target-group) | `PUT /api/hubs/{hub}/groups/{group}/connections/{connectionId}` |
-| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-connection-from-the-target-group) | `DELETE /api/hubs/{hub}/groups/{group}/connections/{connectionId}` |
-| [Remove a connection from all groups](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-connection-from-all-groups) | `DELETE /api/hubs/{hub}/connections/{connectionId}/groups` |
-| [Check if there are any client connections connected for the given user](./swagger/signalr-data-plane-rest-v20220601.md#head-check-if-there-are-any-client-connections-connected-for-the-given-user) | `HEAD /api/hubs/{hub}/users/{user}` |
-| [Close connections for the specific user.](./swagger/signalr-data-plane-rest-v20220601.md#post-close-connections-for-the-specific-user) | `POST /api/hubs/{hub}/users/{user}/:closeConnections` |
-| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v20220601.md#post-broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/hubs/{hub}/users/{user}/:send` |
-| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v20220601.md#head-check-whether-a-user-exists-in-the-target-group) | `HEAD /api/hubs/{hub}/users/{user}/groups/{group}` |
-| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v20220601.md#put-add-a-user-to-the-target-group) | `PUT /api/hubs/{hub}/users/{user}/groups/{group}` |
-| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-user-from-the-target-group) | `DELETE /api/hubs/{hub}/users/{user}/groups/{group}` |
-| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v20220601.md#delete-remove-a-user-from-all-groups) | `DELETE /api/hubs/{hub}/users/{user}/groups` |
+| - | - |
+| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` |
+| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` |
+| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` |
+| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` |
+| [Check if there are any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` |
+| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` |
+| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
+| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` |
+| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` |
+| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` |
## Using REST API
The following claims must be included in the JWT token.

Claim Type | Is Required | Description
-- | -- | --
-`aud` | true | Needs to be the same as your HTTP request url, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`.
+`aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`.
`exp` | true | Epoch time when this token expires.

### Authenticate via Azure Active Directory Token (Azure AD Token)

Similar to authenticating using `AccessKey`, when authenticating using Azure AD Token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
-The difference is, in this scenario the JWT Token is generated by Azure Active Directory.
+The difference is, in this scenario, the JWT Token is generated by Azure Active Directory. For more information, see [Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md).
-[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md)
-
-You could also use **Role Based Access Control (RBAC)** to authorize the request from your client/server to SignalR Service.
-
-[Learn how to configure Role-based access control roles for your resource](/azure/azure-signalr/authorize-access-azure-active-directory)
+You could also use **Role Based Access Control (RBAC)** to authorize the request from your client/server to SignalR Service. For more information, see [Authorize access with Azure Active Directory for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md).
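To make the request shape concrete, here's a minimal sketch of a broadcast call with `curl`; the host, hub name, and `$TOKEN` (a JWT signed with your `AccessKey` and carrying the claims above) are placeholder assumptions:

```bash
# Minimal sketch: broadcast to every client connected to the hub "myhub".
# $TOKEN must be a JWT whose `aud` claim equals the request URL below.
curl -X POST "https://example.service.signalr.net/api/v1/hubs/myhub" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"target": "newMessage", "arguments": ["Hello from the REST API"]}'
```

A `202 Accepted` response indicates the service accepted the message for delivery.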
### Implement Negotiate Endpoint
-As shown in the [architecture section](#serverless), you should implement a `negotiate` function that returns a redirect negotiation response so that client can connect to the service.
+As shown in the [architecture section](#serverless), you should implement a `negotiate` function that returns a redirect negotiation response so that clients can connect to the service.
A typical negotiation response looks as follows:

```json
{
    "url": "https://<service_name>.service.signalr.net/client/?hub=<hub_name>",
    "accessToken": "<a typical JWT token>"
}
```
-The `accessToken` is generated using the same algorithm described in [authentication section](#authenticate-via-azure-signalr-service-accesskey). The only difference is the `aud` claim should be same as `url`.
-
-You should host your negotiate API in `https://<hub_url>/negotiate` so you can still use SignalR client to connect to the hub url.
+The `accessToken` is generated using the same algorithm described in the [authentication section](#authenticate-via-azure-signalr-service-accesskey). The only difference is the `aud` claim should be the same as `url`.
-Read more about redirecting client to Azure SignalR Service at [here](./signalr-concept-internals.md#client-connections).
+You should host your negotiate API at `https://<hub_url>/negotiate` so you can still use the SignalR client to connect to the hub URL. Read more about redirecting clients to Azure SignalR Service in [Azure SignalR Service internals](./signalr-concept-internals.md#client-connections).
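As an illustration (the application host name below is a placeholder), the SignalR client appends `/negotiate` to the hub URL it's given, so a client's negotiation request looks roughly like this and should return the JSON shape shown above:

```bash
# Sketch: a client negotiating against your hosted negotiate API.
curl -X POST "https://myapp.example.com/myhub/negotiate?negotiateVersion=1"
```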
### User-related REST API
-In order to call user-related REST API, each of your clients should identify itself to SignalR Service.
-Otherwise SignalR Service can't find target connections from a given user ID.
+In order to call the user-related REST API, each of your clients should identify itself to SignalR Service. Otherwise, SignalR Service can't find target connections for a given user ID.
Client identification can be achieved by including a `nameid` claim in each client's JWT token when they're connecting to SignalR Service.
-Then SignalR Service will use the value of `nameid` claim as the user ID of each client connection.
+Then SignalR Service uses the value of the `nameid` claim as the user ID of each client connection.
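For example, a minimal sketch of sending to a user (placeholder host, hub, and `$TOKEN`; the client must have connected with a `nameid` claim of `user1` for the service to find its connections):

```bash
# Sketch: send a message to all connections that identified as user "user1".
curl -X POST "https://example.service.signalr.net/api/v1/hubs/myhub/users/user1" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"target": "newMessage", "arguments": ["Hello user1"]}'
```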
### Sample
Currently, REST API requests have the following limitations:

* Header size is a maximum of 16 KB.
* Body size is a maximum of 1 MB.
-If you want to send message larger than 1 MB, use the Management SDK with `persistent` mode.
+If you want to send messages larger than 1 MB, use the Management SDK with `persistent` mode.
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Last updated 11/18/2022
vRealize Operations is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components: vCenter Server, ESXi, NSX-T Data Center, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter Server, NSX-T Data Center, vSAN, and HCX deployment.
-Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the three typical deployment topologies:
+Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the two typical deployment topologies:
> [!div class="checklist"] > * [On-premises vRealize Operations managing Azure VMware Solution deployment](#on-premises-vrealize-operations-managing-azure-vmware-solution-deployment) > * [vRealize Operations Cloud managing Azure VMware Solution deployment](#vrealize-operations-cloud-managing-azure-vmware-solution-deployment)
-> * [vRealize Operations running on Azure VMware Solution deployment](#vrealize-operations-running-on-azure-vmware-solution-deployment)
## Before you begin

* Review the [vRealize Operations Manager product documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) to learn more about deploying vRealize Operations.
VMware vRealize Operations Cloud supports the Azure VMware Solution, including t
> [!IMPORTANT]
> Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations/Cloud/com.vmware.vcom.config.doc/GUID-6CDFEDDC-A72C-4AB4-B8E8-84542CC6CE27.html) for a step-by-step guide to connecting vRealize Operations Cloud to Azure VMware Solution.
-## vRealize Operations running on Azure VMware Solution deployment
-
-Another option is to deploy an instance of vRealize Operations Manager on a vSphere cluster in the private cloud.
-
->[!IMPORTANT]
->This option isn't currently supported by VMware.
--
-Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter Server, ESXi, NSX-T Data Center, vSAN, and HCX.
## Known limitations

- The **cloudadmin@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-identity.md). Virtual machines (VMs) on Azure VMware Solution don't support in-guest memory collection using VMware tools. Active and consumed memory utilization continues to work in this case.
backup Move To Azure Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/move-to-azure-monitor-alerts.md
Title: Switch to Azure Monitor based alerts for Azure Backup description: This article describes the new and improved alerting capabilities via Azure Monitor and the process to configure Azure Monitor. Previously updated : 09/14/2022 Last updated : 03/31/2023
The following table lists the differences between classic backup alerts and buil
| **Notification suppression for database backup scenarios** | When there are multiple failures for the same database due to the same error code, a single alert is generated (with the occurrence count updated for each failure type) and a new alert is only generated when the original alert is inactivated. | The behavior is currently different. Here, a separate alert is generated for every backup failure. If there's a window of time when backups will fail for a certain known item (for example, during a maintenance window), you can create a suppression rule to suppress email noise for that backup item during the given period. |
| **Pricing** | There are no additional charges for this solution. | Alerts for critical operations/failures are generated by default (you can view them in the Azure portal or via non-portal interfaces) at no additional charge. However, routing these alerts to a notification channel (such as email) incurs a minor charge for notifications beyond the *free tier* (of 1,000 emails per month). Learn more about [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/). |
+> [!NOTE]
>- If you have existing custom Azure Resource Graph (ARG) queries written against classic alerts data, you'll need to update these queries to fetch information from Azure Monitor-based alerts. You can use the *AlertsManagementResources* table in ARG to query Azure Monitor alerts data.
>- If you send classic alerts to a Log Analytics workspace, storage account, or Event Hub via diagnostics settings, you'll also need to update that automation. To send the fired Azure Monitor-based alerts to a destination of your choice, you can create an alert processing rule and action group that routes these alerts to a logic app, webhook, or runbook, which in turn sends them to the required destination.
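For example, a minimal sketch of such a query run through the Azure CLI (this assumes the `resource-graph` CLI extension is installed; verify the projected property names against your alert data):

```azurecli
az graph query -q "AlertsManagementResources | where type =~ 'microsoft.alertsmanagement/alerts' | project name, properties.essentials.monitorService, properties.essentials.alertState"
```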
+ Azure Backup now provides a guided experience via Backup center that allows you to switch to built-in Azure Monitor alerts and notifications with just a few selects. To perform this action, you need the *Backup Contributor* and *Monitoring Contributor* Azure role-based access control (Azure RBAC) roles on the subscription. Follow these steps:
## Suppress notifications during a planned maintenance window
-For certain scenarios, you might want to suppress notifications for a particular window of time when backups are going to fail. This is especially important for database backups, where log backups could happen as frequently as every 15 minutes, and you don't want to receive a separate notification every 15 minutes for each failure occurrence. In such a scenario, you can create a second alert processing rule that exists alongside the main alert processing rule used for sending notifications. The second alert processing rule won't be linked to an action group, but is used to specify the time for notification types tha notification should be suppressed.
+For certain scenarios, you might want to suppress notifications for a particular window of time when backups are going to fail. This is especially important for database backups, where log backups could happen as frequently as every 15 minutes, and you don't want to receive a separate notification every 15 minutes for each failure occurrence. In such a scenario, you can create a second alert processing rule that exists alongside the main alert processing rule used for sending notifications. The second alert processing rule won't be linked to an action group, but is used to specify the time for notification types that should be suppressed.
By default, the suppression alert processing rule takes priority over the other alert processing rule. If a single fired alert is affected by different alert processing rules of both types, the action groups of that alert will be suppressed.
To create a suppression alert processing rule, follow these steps:
1. Select the **Scope** (for example, subscription or resource group) that the alert processing rule should span.
- You can also select more granular filters if you want to suppress notifications only for a particular backup item. For example, if you want to suppress notifications for *testdb1* database within Virtual Machine *VM1*, you can specify filters "where Alert Context (payload) contains /subscriptions/00000000-0000-0000-0000-0000000000000/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachines/VM1/providers/Microsoft.RecoveryServices/backupProtectedItem/SQLDataBase;MSSQLSERVER;testdb1".
+ You can also select more granular filters if you want to suppress notifications only for a particular backup item. For example, if you want to suppress notifications for *testdb1* database in the Virtual Machine *VM1*, you can specify filters "where Alert Context (payload) contains `/subscriptions/00000000-0000-0000-0000-0000000000000/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachines/VM1/providers/Microsoft.RecoveryServices/backupProtectedItem/SQLDataBase;MSSQLSERVER;testdb1`".
To get the required format of your required backup item, see the *SourceId field* from the [Alert details page](backup-azure-monitoring-built-in-monitor.md?tabs=recovery-services-vaults#viewing-fired-alerts-in-the-azure-portal).
To configure the same, run the following commands:
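A minimal sketch of such a rule (this assumes the `az monitor alert-processing-rule` command group from the CLI's *alertsmanagement* extension; the names, scope, and maintenance window are placeholders):

```azurecli
# Sketch: suppress all action groups for alerts fired in this subscription
# during a planned maintenance window.
az monitor alert-processing-rule create \
  --name suppress-backup-maintenance \
  --resource-group testRG \
  --rule-type RemoveAllActionGroups \
  --scopes "/subscriptions/00000000-0000-0000-0000-000000000000" \
  --schedule-start-datetime "2023-04-01 00:00:00" \
  --schedule-end-datetime "2023-04-02 00:00:00"
```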
## Next steps
-Learn more about [Azure Backup monitoring and reporting](monitoring-and-alerts-overview.md).
------
-
+Learn more about [Azure Backup monitoring and reporting](monitoring-and-alerts-overview.md).
cognitive-services Copy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/how-to/copy-model.md
+
+ Title: Copy a custom model
+
+description: This article explains how to copy a custom model to another workspace using the Azure Cognitive Services Custom Translator.
++++ Last updated : 03/31/2023+++
+# Copy a custom model
+
+Copying a model to other workspaces enables model lifecycle management (for example, development → test → production) and increases usage scalability while reducing the training cost.
+
+## Copy model to another workspace
+
+ > [!Note]
+ >
+ > To copy a model from one workspace to another, you must have an **Owner** role in both workspaces.
+ >
+ > The copied model cannot be recopied. You can only rename, delete, or publish a copied model.
+
+1. After successful model training, select the **Model details** blade.
+
+1. Select the **Model Name** to copy.
+
+1. Select **Copy to workspace**.
+
+1. Fill out the target details.
+
+1. Select **Copy model**.
+
+1. A notification panel shows the copy progress. The process should complete fairly quickly.
+
+1. Complete the **workspace**, **project**, and **model name** sections of the copy model dialog window:
+
+ :::image type="content" source="../media/how-to/copy-model-1.png" alt-text="Screenshot illustrating the copy model dialog window.":::
+
+1. A **notifications** window displays the copy process status:
+
+ :::image type="content" source="../media/how-to/copy-model-2.png" alt-text="Screenshot illustrating notification that the copy model is in process.":::
+
+1. A **model details** window appears when the copy process is complete.
+
+ :::image type="content" source="../media/how-to/copy-model-3.png" alt-text="Screenshot illustrating the copy complete dialog window.":::
+
+ > [!Note]
+ >
+ > A dropdown list displays the workspaces available to use. If the workspace you want isn't listed, select **Create a new workspace**.
+ > If the selected workspace contains a project for the same language pair, you can select it from the **Project** dropdown list; otherwise, select **Create a new project** to create one.
+
+1. After **Copy model** completion, a copied model is available in the target workspace and ready to publish. A **Copied model** watermark is appended to the model name.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to publish/deploy a custom model](publish-model.md).
cognitive-services Platform Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/platform-upgrade.md
+
+ Title: "Platform upgrade - Custom Translator"
+
+description: Custom Translator v1.0 upgrade
++++ Last updated : 03/30/2023+++
+# Custom Translator platform upgrade
+
+> [!CAUTION]
+>
+> On June 02, 2023, Microsoft will retire the Custom Translator v1.0 model platform. Existing v1.0 models must migrate to the v2.0 platform for continued processing and support.
+
+Following measured and consistent high-quality results using models trained on the Custom Translator v2.0 platform, the v1.0 platform will be retired. Custom Translator v2.0 delivers significant improvements in many domains compared to both standard and Custom v1.0 platform translations. Migrate your v1.0 models to the v2.0 platform by June 02, 2023.
+
+## Custom Translator v1.0 upgrade timeline
+
+* **May 01, 2023** → Custom Translator v1.0 model publishing ends. There's no downtime during the v1.0 model migration. All model publishing and in-flight translation requests will continue without disruption until June 02, 2023.
+
+* **May 01, 2023 through June 02, 2023** → Customers voluntarily migrate to v2.0 models.
+
+* **June 08, 2023** → Remaining v1.0 published models migrate automatically and are published by the Custom Translator team.
+
+## Upgrade to v2.0
+
+* **Check to see if you have published v1.0 models**. After signing in to the Custom Translator portal, you'll see a message indicating that you have v1.0 models to upgrade. You can also check to see if a current workspace has v1.0 models by selecting **Workspace settings** and scrolling to the bottom of the page.
+
+* **Use the upgrade wizard**. Follow the steps listed in **Upgrade to the latest version** wizard. Depending on your training data size, it may take from a few hours to a full day to upgrade your models to the v2.0 platform.
+
+## Unpublished and opt-out published models
+
+* For unpublished models, save the model data (training, testing, dictionary) and delete the project.
+
+* For published models that you don't want to upgrade, save your model data (training, testing, dictionary), unpublish the model, and delete the project.
+
+## Next steps
+
+For more support, visit [Azure Cognitive Services support and help options](../../cognitive-services-support-options.md).
cognitive-services Cognitive Services Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-data-loss-prevention.md
Previously updated : 07/02/2021 #Required; mm/dd/yyyy format. Last updated : 03/31/2023 #Required; mm/dd/yyyy format.
There are two parts to enable data loss prevention. First the property restrictO
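As a rough sketch of both parts together (the resource path, API version, and FQDN below are placeholder assumptions to adapt to your resource):

```azurecli
# Sketch: restrict outbound network access and allow only approved FQDNs.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<account-name>?api-version=2021-04-30" \
  --body '{"properties": {"restrictOutboundNetworkAccess": true, "allowedFqdnList": ["microsoft.com"]}}'
```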
The following services support data loss prevention configuration:
+- Azure OpenAI
- Computer Vision
- Content Moderator
- Custom Vision
cognitive-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/model-lifecycle.md
Use the table below to find which model versions are supported by each feature:
| Sentiment Analysis and opinion mining | `2021-10-01`, `2022-06-01`, `2022-10-01`, `2022-11-01*` |
| Language Detection | `2021-11-20`, `2022-10-01*` |
| Entity Linking | `2021-06-01*` |
-| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview` |
+| Named Entity Recognition (NER) | `2021-06-01*`, `2022-10-01-preview`, `2023-02-01-preview**` |
| Personally Identifiable Information (PII) detection | `2020-07-01`, `2021-01-15*`, `2023-01-01-preview**` |
| PII detection for conversations (Preview) | `2022-05-15-preview**` |
| Question answering | `2021-10-01*` |
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
* China East 2 (Authoring and Prediction)
* China North 2 (Prediction)
* New model evaluation updates for Conversational language understanding and Orchestration workflow.
-* New model version ('2023-01-01-preview') for Text Analytics for health featuring new [entity categories](./text-analytics-for-health/concepts/health-entity-categories.md) for social determinants of health
+* New model version ('2023-01-01-preview') for Text Analytics for health featuring new [entity categories](./text-analytics-for-health/concepts/health-entity-categories.md) for social determinants of health.
+* New model version ('2023-02-01-preview') for named entity recognition, featuring improved accuracy.
## December 2022
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Title: 'Quickstart: Deploy your first container app'
-description: Deploy your first application to Azure Container Apps.
+ Title: 'Quickstart: Deploy your first container app with containerapp up'
+description: Deploy your first application to Azure Container Apps using the Azure CLI containerapp up command.
-+ Previously updated : 03/21/2022-- Last updated : 03/29/2023++ ms.devlang: azurecli
-# Quickstart: Deploy your first container app
+# Quickstart: Deploy your first container app with containerapp up
The Azure Container Apps service enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while you leave behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
-In this quickstart, you create a secure Container Apps environment and deploy your first container app.
+In this quickstart, you create and deploy your first container app using the `az containerapp up` command.
## Prerequisites
In this quickstart, you create a secure Container Apps environment and deploy yo
- If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
- Install the [Azure CLI](/cli/azure/install-azure-cli).
+## Setup
-# [Bash](#tab/bash)
+To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process.
-To create the environment, run the following command:
+# [Bash](#tab/bash)
```azurecli
-az containerapp env create \
- --name $CONTAINERAPPS_ENVIRONMENT \
- --resource-group $RESOURCE_GROUP \
- --location $LOCATION
+az login
``` # [Azure PowerShell](#tab/azure-powershell)
-A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
- ```azurepowershell
-$WorkspaceArgs = @{
- Name = 'myworkspace'
- ResourceGroupName = $ResourceGroupName
- Location = $Location
- PublicNetworkAccessForIngestion = 'Enabled'
- PublicNetworkAccessForQuery = 'Enabled'
-}
-New-AzOperationalInsightsWorkspace @WorkspaceArgs
-$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
-$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+az login
```
-To create the environment, run the following command:
++
+Ensure you're running the latest version of the CLI via the upgrade command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az upgrade
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
```azurepowershell
-$EnvArgs = @{
- EnvName = $ContainerAppsEnvironment
- ResourceGroupName = $ResourceGroupName
- Location = $Location
- AppLogConfigurationDestination = 'log-analytics'
- LogAnalyticConfigurationCustomerId = $WorkspaceId
- LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
-}
-
-New-AzContainerAppManagedEnv @EnvArgs
+az upgrade
```
-## Create a container app
-
-Now that you have an environment created, you can deploy your first container app. With the `containerapp create` command, deploy a container image to Azure Container Apps.
+Next, install or update the Azure Container Apps extension for the CLI.
# [Bash](#tab/bash) ```azurecli
-az containerapp create \
- --name my-container-app \
- --resource-group $RESOURCE_GROUP \
- --environment $CONTAINERAPPS_ENVIRONMENT \
- --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
- --target-port 80 \
- --ingress 'external' \
- --query properties.configuration.ingress.fqdn
+az extension add --name containerapp --upgrade
```
-> [!NOTE]
-> Make sure the value for the `--image` parameter is in lower case.
-
-By setting `--ingress` to `external`, you make the container app available to public requests.
- # [Azure PowerShell](#tab/azure-powershell) + ```azurepowershell
-$ImageParams = @{
- Name = 'my-container-app'
- Image = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
-}
-$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
-$EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
-
-$AppArgs = @{
- Name = 'my-container-app'
- Location = $Location
- ResourceGroupName = $ResourceGroupName
- ManagedEnvironmentId = $EnvId
- IdentityType = 'SystemAssigned'
- TemplateContainer = $TemplateObj
- IngressTargetPort = 80
- IngressExternal = $true
-
-}
-New-AzContainerApp @AppArgs
+az extension add --name containerapp --upgrade
```
-> [!NOTE]
-> Make sure the value for the `Image` parameter is in lower case.
-
-By setting `IngressExternal` to `$true`, you make the container app available to public requests.
-
-## Verify deployment
+Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces if you haven't already registered them in your Azure subscription.
# [Bash](#tab/bash)
-The `create` command returns the fully qualified domain name for the container app. Copy this location to a web browser.
+```azurecli
+az provider register --namespace Microsoft.App
+```
-# [Azure PowerShell](#tab/azure-powershell)
+```azurecli
+az provider register --namespace Microsoft.OperationalInsights
+```
-Get the fully qualified domain name for the container app.
+# [Azure PowerShell](#tab/azure-powershell)
```azurepowershell
-(Get-AzContainerApp -Name $AppArgs.Name -ResourceGroupName $ResourceGroupName).IngressFqdn
+az provider register --namespace Microsoft.App
```
-Copy this location to a web browser.
+```azurepowershell
+az provider register --namespace Microsoft.OperationalInsights
+```
- The following message is displayed when the container app is deployed:
+Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
-## Clean up resources
+## Create and deploy the container app
-If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
+Create and deploy your first container app with the `containerapp up` command. This command will:
+
+- Create the resource group
+- Create the Container Apps environment
+- Create the Log Analytics workspace
+- Create and deploy the container app using a public container image
+
+Note that if any of these resources already exist, the command will use them instead of creating new ones.
->[!CAUTION]
-> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
# [Bash](#tab/bash) ```azurecli
-az group delete --name $RESOURCE_GROUP
+az containerapp up \
+ --name my-container-app \
+ --resource-group my-container-apps \
+ --location centralus \
+ --environment 'my-container-apps' \
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --target-port 80 \
+ --ingress external \
+ --query properties.configuration.ingress.fqdn
``` # [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-Remove-AzResourceGroup -Name $ResourceGroupName -Force
+```powershell
+az containerapp up `
+ --name my-container-app `
+ --resource-group my-container-apps `
+ --location centralus `
+ --environment my-container-apps `
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest `
+ --target-port 80 `
+ --ingress external `
+ --query properties.configuration.ingress.fqdn
```
+> [!NOTE]
+> Make sure the value for the `--image` parameter is in lower case.
+
+By setting `--ingress` to `external`, you make the container app available to public requests.
+
+## Verify deployment
+
+The `up` command returns the fully qualified domain name for the container app. Copy this location to a web browser.
+
+The following message is displayed when the container app is deployed:
++
+## Clean up resources
+
+If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this quickstart exist in the specified resource group, they will also be deleted.
++
+```azurecli
+az group delete --name my-container-apps
+```
+ > [!TIP] > Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
Title: "Quickstart: Deploy your code to Azure Container Apps"
-description: Code to cloud deploying your application to Azure Container Apps
+ Title: "Quickstart: Build and deploy your app from a repository to Azure Container Apps"
+description: Build your container app from a local or GitHub source repository and deploy in Azure Container Apps using az containerapp up.
-+ Previously updated : 05/11/2022-
-zone_pivot_groups: container-apps-image-build-type
Last updated : 03/29/2023+
+zone_pivot_groups: container-apps-image-build-from-repo
-# Quickstart: Deploy your code to Azure Container Apps
+
+# Quickstart: Build and deploy your container app from a repository in Azure Container Apps
This article demonstrates how to build and deploy a microservice to Azure Container Apps from a source repository using the programming language of your choice.
-This quickstart is the first in a series of articles that walk you through how to use core capabilities within Azure Container Apps. The first step is to create a back end web API service that returns a static collection of music albums.
+In this quickstart, you create a backend web API service that returns a static collection of music albums. After completing this quickstart, you can continue to [Tutorial: Communication between microservices in Azure Container Apps](communicate-between-microservices.md) to learn how to deploy a front end application that calls the API.
+
+> [!NOTE]
+> You can also build and deploy this sample application using the `az containerapp up` command. For more information, see [Tutorial: Build and deploy your app to Azure Container Apps](tutorial-code-to-cloud.md).
-The following screenshot shows the output from the album API deployed in this quickstart.
+The following screenshot shows the output from the album API service you deploy.
:::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint.":::

## Prerequisites
-To complete this project, you'll need the following items:
+To complete this project, you need the following items:
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
-| GitHub Account | Get an account for [free](https://github.com/join). |
+| GitHub Account | Get one for [free](https://github.com/join). |
| git | [Install git](https://git-scm.com/downloads) |
| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli). |

::: zone-end

| Requirement | Instructions |
|--|--|
-| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
-| GitHub Account | Get an account for [free](https://github.com/join). |
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Get one for [free](https://github.com/join). |
| git | [Install git](https://git-scm.com/downloads) |
| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli). |
-| Docker Desktop | Docker provides installers that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). <br><br>From your command prompt, type `docker` to ensure Docker is running. |
::: zone-end -
-Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
--
-## Prepare the GitHub repository
-
-Navigate to the repository for your preferred language and fork the repository.
-
-# [C#](#tab/csharp)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account.
+## Setup
-Now you can clone your fork of the sample repository.
+To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process.
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Bash](#tab/bash)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-csharp.git code-to-cloud
+```azurecli
+az login
```
-# [Go](#tab/go)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account.
-
-Now you can clone your fork of the sample repository.
-
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Azure PowerShell](#tab/azure-powershell)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-go.git code-to-cloud
+```azurepowershell
+az login
```
-# [JavaScript](#tab/javascript)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account.
+
-Now you can clone your fork of the sample repository.
+Ensure you're running the latest version of the CLI via the upgrade command.
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Bash](#tab/bash)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-javascript.git code-to-cloud
+```azurecli
+az upgrade
```
-# [Python](#tab/python)
-
-Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account.
-
-Now you can clone your fork of the sample repository.
-
-Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+# [Azure PowerShell](#tab/azure-powershell)
-```git
-git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-python.git code-to-cloud
+```azurepowershell
+az upgrade
```
-Next, change the directory into the root of the cloned repo.
-
-```console
-cd code-to-cloud/src
-```
-
-## Create an Azure Resource Group
-
-Create a resource group to organize the services related to your container app deployment.
+Next, install or update the Azure Container Apps extension for the CLI.
# [Bash](#tab/bash) ```azurecli
-az group create \
- --name $RESOURCE_GROUP \
- --location "$LOCATION"
+az extension add --name containerapp --upgrade
``` # [Azure PowerShell](#tab/azure-powershell) + ```azurepowershell
-New-AzResourceGroup -Location $Location -Name $ResourceGroup
+az extension add --name containerapp --upgrade
```
-## Create an Azure Container Registry
-
-Next, create an Azure Container Registry (ACR) instance in your resource group to store the album API container image once it's built.
+Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces if you haven't already registered them in your Azure subscription.
# [Bash](#tab/bash) ```azurecli
-az acr create \
- --resource-group $RESOURCE_GROUP \
- --name $ACR_NAME \
- --sku Basic \
- --admin-enabled true
+az provider register --namespace Microsoft.App
+```
+
+```azurecli
+az provider register --namespace Microsoft.OperationalInsights
``` # [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
-$acr = New-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $ACRName -Sku Basic -EnableAdminUser
+az provider register --namespace Microsoft.App
+```
+
+```azurepowershell
+az provider register --namespace Microsoft.OperationalInsights
```
+Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
+
-## Build your application
+# [Bash](#tab/bash)
-With [ACR tasks](../container-registry/container-registry-tasks-overview.md), you can build and push the docker image for the album API without installing Docker locally.
+Define the following variables in your bash shell.
-### Build the container with ACR
+```azurecli
+RESOURCE_GROUP="album-containerapps"
+LOCATION="canadacentral"
+ENVIRONMENT="env-album-containerapps"
+API_NAME="album-api"
+FRONTEND_NAME="album-ui"
+GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
+```
-Run the following command to initiate the image build and push process using ACR. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+Before you run this command, make sure to replace `<YOUR_GITHUB_USERNAME>` with your GitHub username.
-# [Bash](#tab/bash)
+Next, define a container registry name unique to you.
```azurecli
-az acr build --registry $ACR_NAME --image $API_NAME .
+ACR_NAME="acaalbums"$GITHUB_USERNAME
``` # [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-az acr build --registry $ACRName --image $APIName .
-```
+Define the following variables in your PowerShell console.
--
-Output from the `az acr build` command shows the upload progress of the source code to Azure and the details of the `docker build` and `docker push` operations.
+```powershell
+$RESOURCE_GROUP="album-containerapps"
+$LOCATION="canadacentral"
+$ENVIRONMENT="env-album-containerapps"
+$API_NAME="album-api"
+$FRONTEND_NAME="album-ui"
+$GITHUB_USERNAME="<YOUR_GITHUB_USERNAME>"
+```
+Before you run this command, make sure to replace `<YOUR_GITHUB_USERNAME>` with your GitHub username.
+Next, define a container registry name unique to you.
-## Build your application
+```powershell
+$ACR_NAME="acaalbums"+$GITHUB_USERNAME
+```
-The following steps, demonstrate how to build your container image locally using Docker and push the image to the new container registry.
+
-### Build the container with Docker
-The following command builds a container image for the album API and tags it with the fully qualified name of the ACR login server. The `.` at the end of the command represents the docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
# [Bash](#tab/bash)
+Define the following variables in your bash shell.
+ ```azurecli
-docker build --tag $ACR_NAME.azurecr.io/$API_NAME .
+RESOURCE_GROUP="album-containerapps"
+LOCATION="canadacentral"
+ENVIRONMENT="env-album-containerapps"
+API_NAME="album-api"
``` # [Azure PowerShell](#tab/azure-powershell)
+Define the following variables in your PowerShell console.
+ ```powershell
-docker build --tag "$ACRName.azurecr.io/$APIName" .
+$RESOURCE_GROUP="album-containerapps"
+$LOCATION="canadacentral"
+$ENVIRONMENT="env-album-containerapps"
+$API_NAME="album-api"
```
-### Push the image to your container registry
-First, sign in to your Azure Container Registry.
+## Prepare the GitHub repository
-# [Bash](#tab/bash)
+In a browser window, go to the GitHub repository for your preferred language and fork the repository.
-```azurecli
-az acr login --name $ACR_NAME
-```
+# [C#](#tab/csharp)
-# [Azure PowerShell](#tab/azure-powershell)
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account.
-```powershell
-az acr login --name $ACRName
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-csharp.git code-to-cloud
``` -
-Now, push the image to your registry.
+# [Go](#tab/go)
-# [Bash](#tab/bash)
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account.
-```azurecli
-docker push $ACR_NAME.azurecr.io/$API_NAME
-```
-# [Azure PowerShell](#tab/azure-powershell)
+Now you can clone your fork of the sample repository.
-```powershell
-docker push "$ACRName.azurecr.io/$APIName"
-```
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
-
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-go.git code-to-cloud
+```
::: zone-end
-## Create a Container Apps environment
+# [JavaScript](#tab/javascript)
-The Azure Container Apps environment acts as a secure boundary around a group of container apps.
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account.
-Create the Container Apps environment using the following command.
-# [Bash](#tab/bash)
+Now you can clone your fork of the sample repository.
-```azurecli
-az containerapp env create \
- --name $ENVIRONMENT \
- --resource-group $RESOURCE_GROUP \
- --location "$LOCATION"
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-javascript.git code-to-cloud
```
-# [Azure PowerShell](#tab/azure-powershell)
-A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
+# [Python](#tab/python)
-```azurepowershell
-$WorkspaceArgs = @{
- Name = 'my-album-workspace'
- ResourceGroupName = $ResourceGroup
- Location = $Location
- PublicNetworkAccessForIngestion = 'Enabled'
- PublicNetworkAccessForQuery = 'Enabled'
-}
-New-AzOperationalInsightsWorkspace @WorkspaceArgs
-$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup -Name $WorkspaceArgs.Name).CustomerId
-$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroup -Name $WorkspaceArgs.Name).PrimarySharedKey
-```
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account.
-To create the environment, run the following command:
-```azurepowershell
-$EnvArgs = @{
- EnvName = $Environment
- ResourceGroupName = $ResourceGroup
- Location = $Location
- AppLogConfigurationDestination = 'log-analytics'
- LogAnalyticConfigurationCustomerId = $WorkspaceId
- LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
-}
-
-New-AzContainerAppManagedEnv @EnvArgs
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-python.git code-to-cloud
``` +
-## Deploy your image to a container app
-Now that you have an environment created, you can create and deploy your container app with the `az containerapp create` command.
+## Build and deploy the container app
-Create and deploy your container app with the following command.
+Build and deploy your first container app from your local git repository with the `containerapp up` command. This command will:
+
+- Create the resource group
+- Create an Azure Container Registry
+- Build the container image and push it to the registry
+- Create the Container Apps environment with a Log Analytics workspace
+- Create and deploy the container app using the built container image
+
+The `up` command uses the Dockerfile in the root of the repository to build the container image. The target port is defined by the EXPOSE instruction in the Dockerfile. A Dockerfile isn't required to build a container app.
# [Bash](#tab/bash) ```azurecli
-az containerapp create \
+az containerapp up \
--name $API_NAME \ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
--environment $ENVIRONMENT \
- --image $ACR_NAME.azurecr.io/$API_NAME \
- --target-port 3500 \
- --ingress 'external' \
- --registry-server $ACR_NAME.azurecr.io \
- --query properties.configuration.ingress.fqdn
+ --source code-to-cloud/src
```
-* By setting `--ingress` to `external`, your container app will be accessible from the public internet.
+# [Azure PowerShell](#tab/azure-powershell)
-* The `target-port` is set to `3500` to match the port that the container is listening to for requests.
+```powershell
+az containerapp up `
+ --name $API_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION `
+ --environment $ENVIRONMENT `
+ --source code-to-cloud/src
+```
-* Without a `query` property, the call to `az containerapp create` returns a JSON response that includes a rich set of details about the application. Adding a query parameter filters the output to just the app's fully qualified domain name (FQDN).
+
-# [Azure PowerShell](#tab/azure-powershell)
-To create the container app, create template objects that you'll pass in as arguments to the `New-AzContainerApp` command.
+## Build and deploy the container app
-Create a template object to define your container image parameters.
+Build and deploy your first container app from your forked GitHub repository with the `containerapp up` command. This command will:
-```azurepowershell
-$ImageParams = @{
- Name = $APIName
- Image = $ACRName + '.azurecr.io/' + $APIName + ':latest'
-}
-$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
-```
+- Create the resource group
+- Create an Azure Container Registry
+- Build the container image and push it to the registry
+- Create the Container Apps environment with a Log Analytics workspace
+- Create and deploy the container app using the built container image
+- Create a GitHub Action workflow to build and deploy the container app
-You'll need run the following command to get your registry credentials.
+The `up` command uses the Dockerfile in the root of the repository to build the container image. The target port is defined by the EXPOSE instruction in the Dockerfile. A Dockerfile isn't required to build a container app.
-```azurepowershell
-$RegistryCredentials = Get-AzContainerRegistryCredential -Name $ACRName -ResourceGroupName $ResourceGroup
-```
+Replace `<YOUR_GITHUB_REPOSITORY_NAME>` with your GitHub repository name in the form `https://github.com/<owner>/<repository-name>` or `<owner>/<repository-name>`.
-Create a registry credential object to define your registry information, and a secret object to define your registry password. The `PasswordSecretRef` refers to the `Name` in the secret object.
+# [Bash](#tab/bash)
-```azurepowershell
-$RegistryArgs = @{
- Server = $ACRName + '.azurecr.io'
- PasswordSecretRef = 'registrysecret'
- Username = $RegistryCredentials.Username
-}
-$RegistryObj = New-AzContainerAppRegistryCredentialObject @RegistryArgs
-
-$SecretObj = New-AzContainerAppSecretObject -Name 'registrysecret' -Value $RegistryCredentials.Password
+```azurecli
+az containerapp up \
+ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --environment $ENVIRONMENT \
+ --context-path ./src \
+ --repo <YOUR_GITHUB_REPOSITORY_NAME>
```
-Get your environment ID.
+# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-$EnvId = (Get-AzContainerAppManagedEnv -EnvName $Environment -ResourceGroup $ResourceGroup).Id
+```powershell
+az containerapp up `
+ --name $API_NAME `
+ --resource-group $RESOURCE_GROUP `
+ --location $LOCATION `
+ --environment $ENVIRONMENT `
+ --context-path ./src `
+ --repo <YOUR_GITHUB_REPOSITORY_NAME>
```
-Create the container app.
+
-```azurepowershell
-$AppArgs = @{
- Name = $APIName
- Location = $Location
- ResourceGroupName = $ResourceGroup
- ManagedEnvironmentId = $EnvId
- TemplateContainer = $TemplateObj
- ConfigurationRegistry = $RegistryObj
- ConfigurationSecret = $SecretObj
- IngressTargetPort = 3500
- IngressExternal = $true
-}
-$MyApp = New-AzContainerApp @AppArgs
-
-# show the app's fully qualified domain name (FQDN).
-$MyApp.IngressFqdn
-```
+Using the URL and the user code displayed in the terminal, go to the GitHub device activation page in a browser and enter the user code. Follow the prompts to authorize the Azure CLI to access your GitHub repository.
+
-* By setting `IngressExternal` to `external`, your container app will be accessible from the public internet.
-* The `IngressTargetPort` parameter is set to `3500` to match the port that the container is listening to for requests.
+The `up` command creates a GitHub Actions workflow in your repository's *.github/workflows* folder. The workflow is triggered to build and deploy your container app when you push changes to the repository.
## Verify deployment
-Copy the FQDN to a web browser. From your web browser, navigate to the `/albums` endpoint of the FQDN.
+Copy the FQDN to a web browser. From your web browser, go to the `/albums` endpoint of the FQDN.
:::image type="content" source="media/quickstart-code-to-cloud/azure-container-apps-album-api.png" alt-text="Screenshot of response from albums API endpoint."::: ## Clean up resources
-If you're not going to continue on to the [Communication between microservices](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart. Run the following command to delete the resource group along with all the resources created in this quickstart.
+If you're not going to continue on to the [Deploy a frontend](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart with the following command.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If the group contains resources outside the scope of this quickstart, they are also deleted.
# [Bash](#tab/bash)
az group delete --name $RESOURCE_GROUP
# [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell
-Remove-AzResourceGroup -Name $ResourceGroup -Force
+```powershell
+az group delete --name $RESOURCE_GROUP
```
Remove-AzResourceGroup -Name $ResourceGroup -Force
## Next steps
-This quickstart is the entrypoint for a set of progressive tutorials that showcase the various features within Azure Container Apps. Continue on to learn how to enable communication from a web front end that calls the API you deployed in this article.
- > [!div class="nextstepaction"] > [Tutorial: Communication between microservices](communicate-between-microservices.md)
container-apps Tutorial Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-code-to-cloud.md
+
+ Title: "Tutorial: Build and deploy your app to Azure Container Apps"
+description: Build and deploy your app to Azure Container Apps with az containerapp create command.
++++ Last updated : 05/11/2022+
+zone_pivot_groups: container-apps-image-build-type
+++
+# Tutorial: Build and deploy your app to Azure Container Apps
+
+This article demonstrates how to build and deploy a microservice to Azure Container Apps from a source repository using the programming language of your choice.
+
+This tutorial is the first in a series of articles that walk you through how to use core capabilities within Azure Container Apps. The first step is to create a back end web API service that returns a static collection of music albums.
+
+> [!NOTE]
+> You can also build and deploy this app using the [az containerapp up](/cli/azure/containerapp#az_containerapp_up) command by following the instructions in the [Quickstart: Build and deploy an app to Azure Container Apps from a repository](quickstart-code-to-cloud.md) article. The `az containerapp up` command is a fast and convenient way to build and deploy your app to Azure Container Apps using a single command. However, it doesn't provide the same level of customization for your container app.
+
+ The next tutorial in the series will build and deploy the front end web application to Azure Container Apps.
+
+The following screenshot shows the output from the album API deployed in this tutorial.
++
+## Prerequisites
+
+To complete this project, you need the following items:
++
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. <br><br>Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Sign up for [free](https://github.com/join). |
+| git | [Install git](https://git-scm.com/downloads) |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+++
+| Requirement | Instructions |
+|--|--|
+| Azure account | If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). You need the *Contributor* or *Owner* permission on the Azure subscription to proceed. Refer to [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current) for details. |
+| GitHub Account | Sign up for [free](https://github.com/join). |
+| git | [Install git](https://git-scm.com/downloads) |
+| Azure CLI | Install the [Azure CLI](/cli/azure/install-azure-cli).|
+| Docker Desktop | Docker provides installers that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). <br><br>From your command prompt, type `docker` to ensure Docker is running. |
+++
+Now that your Azure CLI setup is complete, you can define the environment variables that are used throughout this article.
+++
+## Prepare the GitHub repository
+
+Navigate to the repository for your preferred language and fork the repository.
+
+# [C#](#tab/csharp)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-csharp) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-csharp.git code-to-cloud
+```
+
+# [Go](#tab/go)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-go) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-go.git code-to-cloud
+```
+
+# [JavaScript](#tab/javascript)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-javascript) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-javascript.git code-to-cloud
+```
+
+# [Python](#tab/python)
+
+Select the **Fork** button at the top of the [album API repo](https://github.com/azure-samples/containerapps-albumapi-python) to fork the repo to your account.
+
+Now you can clone your fork of the sample repository.
+
+Use the following git command to clone your forked repo into the *code-to-cloud* folder:
+
+```git
+git clone https://github.com/$GITHUB_USERNAME/containerapps-albumapi-python.git code-to-cloud
+```
+++
+Next, change directory into the *src* folder of the cloned repo.
+
+```console
+cd code-to-cloud/src
+```
+
+## Create an Azure resource group
+
+Create a resource group to organize the services related to your container app deployment.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group create \
+ --name $RESOURCE_GROUP \
+ --location "$LOCATION"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+New-AzResourceGroup -Location $Location -Name $ResourceGroup
+```
+++
+## Create an Azure Container Registry
+
+Next, create an Azure Container Registry (ACR) instance in your resource group to store the album API container image once it's built.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr create \
+ --resource-group $RESOURCE_GROUP \
+ --name $ACR_NAME \
+ --sku Basic \
+ --admin-enabled true
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$acr = New-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $ACRName -Sku Basic -EnableAdminUser
+```
++++
+## Build your application
+
+With [ACR Tasks](../container-registry/container-registry-tasks-overview.md), you can build and push the Docker image for the album API without installing Docker locally.
+
+### Build the container with ACR
+
+Run the following command to initiate the image build and push process using ACR. The `.` at the end of the command represents the Docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr build --registry $ACR_NAME --image $API_NAME .
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+az acr build --registry $ACRName --image $APIName .
+```
+++
+Output from the `az acr build` command shows the upload progress of the source code to Azure and the details of the `docker build` and `docker push` operations.
+++
+## Build your application
+
+The following steps demonstrate how to build your container image locally using Docker and push the image to the new container registry.
+
+### Build the container with Docker
+
+The following command builds a container image for the album API and tags it with the fully qualified name of the ACR login server. The `.` at the end of the command represents the Docker build context, meaning this command should be run within the *src* folder where the Dockerfile is located.
+
+# [Bash](#tab/bash)
+
+```azurecli
+docker build --tag $ACR_NAME.azurecr.io/$API_NAME .
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+docker build --tag "$ACRName.azurecr.io/$APIName" .
+```
+++
+### Push the image to your container registry
+
+First, sign in to your Azure Container Registry.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az acr login --name $ACR_NAME
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+az acr login --name $ACRName
+```
+++
+Now, push the image to your registry.
+
+# [Bash](#tab/bash)
+
+```azurecli
+docker push $ACR_NAME.azurecr.io/$API_NAME
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+docker push "$ACRName.azurecr.io/$APIName"
+```
++++
+## Create a Container Apps environment
+
+The Azure Container Apps environment acts as a secure boundary around a group of container apps.
+
+Create the Container Apps environment using the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp env create \
+ --name $ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location "$LOCATION"
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'my-album-workspace'
+ ResourceGroupName = $ResourceGroup
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroup -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $Environment
+ ResourceGroupName = $ResourceGroup
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
+```
+++
+## Deploy your image to a container app
+
+Now that you have an environment created, you can create and deploy your container app with the `az containerapp create` command.
+
+Create and deploy your container app with the following command.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp create \
+ --name $API_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --environment $ENVIRONMENT \
+ --image $ACR_NAME.azurecr.io/$API_NAME \
+ --target-port 3500 \
+ --ingress 'external' \
+ --registry-server $ACR_NAME.azurecr.io \
+ --query properties.configuration.ingress.fqdn
+```
+
+* By setting `--ingress` to `external`, your container app is accessible from the public internet.
+
+* The `target-port` is set to `3500` to match the port that the container is listening to for requests.
+
+* Without a `query` property, the call to `az containerapp create` returns a JSON response that includes a rich set of details about the application. Adding a query parameter filters the output to just the app's fully qualified domain name (FQDN).
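+If you need the FQDN again later, one option (a quick sketch using the Azure CLI; the `--output tsv` format just trims the surrounding quotes) is to query it from the deployed app:
+
+```azurecli
+az containerapp show \
+  --name $API_NAME \
+  --resource-group $RESOURCE_GROUP \
+  --query properties.configuration.ingress.fqdn \
+  --output tsv
+```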
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+To create the container app, create template objects that you pass in as arguments to the `New-AzContainerApp` command.
+
+Create a template object to define your container image parameters.
+
+```azurepowershell
+$ImageParams = @{
+ Name = $APIName
+ Image = $ACRName + '.azurecr.io/' + $APIName + ':latest'
+}
+$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
+```
+
+Run the following command to get your registry credentials.
+
+```azurepowershell
+$RegistryCredentials = Get-AzContainerRegistryCredential -Name $ACRName -ResourceGroupName $ResourceGroup
+```
+
+Create a registry credential object to define your registry information, and a secret object to define your registry password. The `PasswordSecretRef` refers to the `Name` in the secret object.
+
+```azurepowershell
+$RegistryArgs = @{
+ Server = $ACRName + '.azurecr.io'
+ PasswordSecretRef = 'registrysecret'
+ Username = $RegistryCredentials.Username
+}
+$RegistryObj = New-AzContainerAppRegistryCredentialObject @RegistryArgs
+
+$SecretObj = New-AzContainerAppSecretObject -Name 'registrysecret' -Value $RegistryCredentials.Password
+```
+
+Get your environment ID.
+
+```azurepowershell
+$EnvId = (Get-AzContainerAppManagedEnv -EnvName $Environment -ResourceGroup $ResourceGroup).Id
+```
+
+Create the container app.
+
+```azurepowershell
+$AppArgs = @{
+ Name = $APIName
+ Location = $Location
+ ResourceGroupName = $ResourceGroup
+ ManagedEnvironmentId = $EnvId
+ TemplateContainer = $TemplateObj
+ ConfigurationRegistry = $RegistryObj
+ ConfigurationSecret = $SecretObj
+ IngressTargetPort = 3500
+ IngressExternal = $true
+}
+$MyApp = New-AzContainerApp @AppArgs
+
+# show the app's fully qualified domain name (FQDN).
+$MyApp.IngressFqdn
+```
+
+* By setting `IngressExternal` to `$true`, your container app is accessible from the public internet.
+* The `IngressTargetPort` parameter is set to `3500` to match the port that the container is listening to for requests.
+++
+## Verify deployment
+
+Copy the FQDN to a web browser. From your web browser, navigate to the `/albums` endpoint of the FQDN.
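+If you prefer the command line, a quick check with `curl` (substituting the FQDN returned by the create command; the placeholder below isn't defined elsewhere in this article) might look like this:
+
+```azurecli
+curl "https://<YOUR_APP_FQDN>/albums"
+```
+
+The call returns the static JSON collection of albums served by the API.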
++
+## Clean up resources
+
+If you're not going to continue on to the [Communication between microservices](communicate-between-microservices.md) tutorial, you can remove the Azure resources created during this quickstart. Run the following command to delete the resource group along with all the resources created in this quickstart.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroup -Force
+```
+++
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+This quickstart is the entry point for a set of progressive tutorials that showcase the various features within Azure Container Apps. Continue on to learn how to enable communication from a web front end that calls the API you deployed in this article.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Communication between microservices](communicate-between-microservices.md)
container-apps Tutorial Deploy First App Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-deploy-first-app-cli.md
+
+ Title: 'Tutorial: Deploy your first container app'
+description: Deploy your first application to Azure Container Apps.
++++ Last updated : 03/21/2022++
+ms.devlang: azurecli
++
+# Tutorial: Deploy your first container app
+
+The Azure Container Apps service enables you to run microservices and containerized applications on a serverless platform. With Container Apps, you enjoy the benefits of running containers while you leave behind the concerns of manually configuring cloud infrastructure and complex container orchestrators.
+
+In this tutorial, you create a secure Container Apps environment and deploy your first container app.
+
+> [!NOTE]
+> You can also deploy this app using the [az containerapp up](/cli/azure/containerapp#az_containerapp_up) command by following the instructions in the [Quickstart: Deploy your first container app with containerapp up](get-started.md) article. The `az containerapp up` command is a fast and convenient way to build and deploy your app to Azure Container Apps using a single command. However, it doesn't provide the same level of customization for your container app.
++
+## Prerequisites
+
+- An Azure account with an active subscription.
+ - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
++
+# [Bash](#tab/bash)
+
+To create the environment, run the following command:
+
+```azurecli
+az containerapp env create \
+ --name $CONTAINERAPPS_ENVIRONMENT \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+A Log Analytics workspace is required for the Container Apps environment. The following commands create a Log Analytics workspace and save the workspace ID and primary shared key to variables.
+
+```azurepowershell
+$WorkspaceArgs = @{
+ Name = 'myworkspace'
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ PublicNetworkAccessForIngestion = 'Enabled'
+ PublicNetworkAccessForQuery = 'Enabled'
+}
+New-AzOperationalInsightsWorkspace @WorkspaceArgs
+$WorkspaceId = (Get-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).CustomerId
+$WorkspaceSharedKey = (Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $ResourceGroupName -Name $WorkspaceArgs.Name).PrimarySharedKey
+```
+
+To create the environment, run the following command:
+
+```azurepowershell
+$EnvArgs = @{
+ EnvName = $ContainerAppsEnvironment
+ ResourceGroupName = $ResourceGroupName
+ Location = $Location
+ AppLogConfigurationDestination = 'log-analytics'
+ LogAnalyticConfigurationCustomerId = $WorkspaceId
+ LogAnalyticConfigurationSharedKey = $WorkspaceSharedKey
+}
+
+New-AzContainerAppManagedEnv @EnvArgs
+```
+++
+## Create a container app
+
+Now that you have an environment created, you can deploy your first container app. With the `containerapp create` command, deploy a container image to Azure Container Apps.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az containerapp create \
+ --name my-container-app \
+ --resource-group $RESOURCE_GROUP \
+ --environment $CONTAINERAPPS_ENVIRONMENT \
+ --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
+ --target-port 80 \
+ --ingress 'external' \
+ --query properties.configuration.ingress.fqdn
+```
+
+> [!NOTE]
+> Make sure the value for the `--image` parameter is in lowercase.
+
+By setting `--ingress` to `external`, you make the container app available to public requests.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$ImageParams = @{
+ Name = 'my-container-app'
+ Image = 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
+}
+$TemplateObj = New-AzContainerAppTemplateObject @ImageParams
+$EnvId = (Get-AzContainerAppManagedEnv -EnvName $ContainerAppsEnvironment -ResourceGroupName $ResourceGroupName).Id
+
+$AppArgs = @{
+ Name = 'my-container-app'
+ Location = $Location
+ ResourceGroupName = $ResourceGroupName
+ ManagedEnvironmentId = $EnvId
+ IdentityType = 'SystemAssigned'
+ TemplateContainer = $TemplateObj
+ IngressTargetPort = 80
+ IngressExternal = $true
+
+}
+New-AzContainerApp @AppArgs
+```
+
+> [!NOTE]
+> Make sure the value for the `Image` parameter is in lowercase.
+
+By setting `IngressExternal` to `$true`, you make the container app available to public requests.
+++
+## Verify deployment
+
+# [Bash](#tab/bash)
+
+The `create` command returns the fully qualified domain name for the container app. Copy this location to a web browser.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Get the fully qualified domain name for the container app.
+
+```azurepowershell
+(Get-AzContainerApp -Name $AppArgs.Name -ResourceGroupName $ResourceGroupName).IngressFqdn
+```
+
+Copy this location to a web browser.
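+As an optional sketch, you can also fetch the page from PowerShell instead of a browser; this reuses `$AppArgs` and `$ResourceGroupName` from the previous steps:
+
+```azurepowershell
+$Fqdn = (Get-AzContainerApp -Name $AppArgs.Name -ResourceGroupName $ResourceGroupName).IngressFqdn
+# A 200 status code indicates the container app is serving requests
+Invoke-WebRequest -Uri "https://$Fqdn" | Select-Object -ExpandProperty StatusCode
+```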
+++
+ The following message is displayed when the container app is deployed:
++
+## Clean up resources
+
+If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this tutorial.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+
+# [Bash](#tab/bash)
+
+```azurecli
+az group delete --name $RESOURCE_GROUP
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name $ResourceGroupName -Force
+```
+++
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Communication between microservices](communicate-between-microservices.md)
container-registry Container Registry Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-skus.md
Azure Container Registry is available in multiple service tiers (also known as S
The Basic, Standard, and Premium tiers all provide the same programmatic capabilities. They also all benefit from [image storage][container-registry-storage] managed entirely by Azure. Choosing a higher-level tier provides more performance and scale. With multiple service tiers, you can get started with Basic, then convert to Standard and Premium as your registry usage increases.
+For example:
+
+- If you purchase a Basic tier registry, it includes 10 GB of storage at $0.167 per day. Prices here are in US dollars.
+- If your Basic tier registry uses 25 GB of storage, you pay an extra $0.003 per GB per day for the additional 15 GB: $0.003 × 15 = $0.045 per day.
+- The total for a Basic registry with 25 GB of storage is therefore $0.167 + $0.045 = $0.212 per day, plus other related charges such as networking and builds. For details, see [Pricing - Container Registry](https://azure.microsoft.com/pricing/details/container-registry/).
++ ## Service tier features and limits The following table details the features and registry limits of the Basic, Standard, and Premium service tiers.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
The only operation possible when the encryption key has been revoked is account
### Assign a new managed-identity to the restored database account to continue accessing or recover access to the database account
+A user-assigned identity is tied to a specific Azure Cosmos DB account. Whenever you assign a user-assigned identity to an account, Azure Resource Manager forwards the request to the managed service identities service to make this connection. Currently, user-identity information is carried over from the source database account to the target database account during a restore (for both continuous and periodic backup restore) of an account that uses CMK with a user-assigned identity.
+
+Because the identity metadata is bound to the source database account and the restore workflow doesn't re-scope the identity to the target database account, the restored database account ends up in a bad state and becomes inaccessible after the source account is deleted and the identity's renewal time expires.
+
+ Steps to assign a new managed-identity: 1. [Create a new user-assigned managed identity.](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) 2. [Grant KeyVault key access to this identity.](#choosing-the-preferred-security-model)
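+As a hedged Azure CLI sketch of creating the identity and assigning it to the restored account (hypothetical resource names; granting Key Vault access to the identity follows the linked security-model guidance):
+
+```azurecli
+# Create a new user-assigned managed identity
+az identity create --resource-group myResourceGroup --name myNewIdentity
+
+# Assign the new identity to the restored account (pass the identity's full resource ID)
+az cosmosdb identity assign \
+    --resource-group myResourceGroup \
+    --name myRestoredAccount \
+    --identities "/subscriptions/<subscription-id>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myNewIdentity"
+```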
cost-management-billing Enable Tag Inheritance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-tag-inheritance.md
description: This article explains how to group costs using tag inheritance. Previously updated : 02/21/2023 Last updated : 03/30/2023
Tag inheritance is available for the following billing account types:
- Microsoft Customer Agreement (MCA) - Microsoft Partner Agreement (MPA) with Azure plan subscriptions
+Here's an example diagram showing how a tag is inherited.
++ ## Required permissions - For subscriptions:
Tag inheritance is available for the following billing account types:
You can enable the tag inheritance setting in the Azure portal. You apply the setting at the EA billing account, MCA billing profile, and subscription scopes. After the setting is enabled, all resource group and subscription tags are automatically applied to child resource usage records.
-To enable tag inheritance in the Azure portal:
+### To enable tag inheritance in the Azure portal for an EA billing account
1. In the Azure portal, search for **Cost Management** and select it (the green hexagon-shaped symbol, *not* Cost Management + Billing).
-2. Select a scope.
-3. In the left menu under **Settings**, select either **Manage billing account** or **Manage subscription**, depending on your scope.
-4. Under **Tag inheritance**, select **Edit**.
- :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance.png" alt-text="Screenshot showing the Edit option for Tag inheritance." :::
-5. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
- :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option." :::
+1. Select a scope.
+1. In the left menu under **Settings**, select **Manage billing account**.
+1. Under **Tag inheritance**, select **Edit**.
+ :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance.png" alt-text="Screenshot showing the Edit option for Tag inheritance for an EA billing account." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance.png" :::
+1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+ :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a billing account." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data.png":::
-Here's an example diagram showing how a tag is inherited.
+### To enable tag inheritance in the Azure portal for an MCA billing profile
+1. In the Azure portal, search for **Cost Management** and select it (the green hexagon-shaped symbol, *not* Cost Management + Billing).
+1. Select a scope.
+1. In the left menu under **Settings**, select **Manage billing profile**.
+1. Under **Tag inheritance**, select **Edit**.
+ :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance-billing-profile.png" alt-text="Screenshot showing the Edit option for Tag inheritance for an MCA billing profile." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance-billing-profile.png":::
+1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+ :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-billing-profile.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a billing profile." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-billing-profile.png":::
+
+### To enable tag inheritance in the Azure portal for a subscription
+
+1. In the Azure portal, search for **Cost Management** and select it (the green hexagon-shaped symbol, *not* Cost Management + Billing).
+1. Select a subscription scope.
+1. In the left menu under **Settings**, select **Manage subscription**.
+1. Under **Tag inheritance**, select **Edit**.
+ :::image type="content" source="./media/enable-tag-inheritance/edit-tag-inheritance-subscription.png" alt-text="Screenshot showing the Edit option for Tag inheritance for a subscription." lightbox="./media/enable-tag-inheritance/edit-tag-inheritance-subscription.png":::
+1. In the Tag inheritance (Preview) window, select **Automatically apply subscription and resource group tags to new data**.
+ :::image type="content" source="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-subscription.png" alt-text="Screenshot showing the Automatically apply subscription and resource group tags to new data option for a subscription." lightbox="./media/enable-tag-inheritance/automatically-apply-tags-new-usage-data-subscription.png":::
## Choose between resource and inherited tags
data-factory Connector Azure Cosmos Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-analytical-store.md
+
+ Title: Copy and transform data in Azure Cosmos DB analytical store
+
+description: Learn how to transform data in Azure Cosmos DB analytical store using Azure Data Factory and Azure Synapse Analytics.
++++++ Last updated : 03/31/2023++
+# Copy and transform data in Azure Cosmos DB analytical store by using Azure Data Factory
+
+> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
+> * [Current version](connector-azure-cosmos-analytical-store.md)
++
+This article outlines how to use Data Flow to transform data in Azure Cosmos DB analytical store. To learn more, read the introductory articles for [Azure Data Factory](introduction.md) and [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+>[!NOTE]
+>The Azure Cosmos DB analytical store connector supports [change data capture](concepts-change-data-capture.md) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for MongoDB, which is currently in public preview.
+
+## Supported capabilities
+
+This Azure Cosmos DB for NoSQL connector is supported for the following capabilities:
+
+| Supported capabilities|IR | Managed private endpoint|
+|| --| --|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |Γ£ô |
++
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
++
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read and write to collections in Azure Cosmos DB. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
+
+> [!Note]
+> The Azure Cosmos DB analytical store is found with the [Azure Cosmos DB for NoSQL](connector-azure-cosmos-db.md) dataset type.
++
+### Source transformation
+
+Settings specific to Azure Cosmos DB are available in the **Source Options** tab of the source transformation.
+
+**Include system columns:** If true, ```id```, ```_ts```, and other system columns will be included in your data flow metadata from Azure Cosmos DB. When updating collections, it is important to include this so that you can grab the existing row ID.
+
+**Page size:** The number of documents per page of the query result. The default is "-1", which uses the service's dynamic page size, up to 1000.
+
+**Throughput:** Set an optional value for the number of RUs you'd like to apply to your Azure Cosmos DB collection for each execution of this data flow during the read operation. Minimum is 400.
+
+**Preferred regions:** Choose the preferred read regions for this process.
+
+**Change feed:** If true, you get data from the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md), which is a persistent record of changes to a container in the order they occur, picked up automatically from the last run. When you set this to true, don't set both **Infer drifted column types** and **Allow schema drift** to true at the same time. For more details, see [Azure Cosmos DB change feed](#azure-cosmos-db-change-feed).
+
+**Start from beginning:** If true, you get an initial load of full snapshot data in the first run, followed by capturing changed data in subsequent runs. If false, the initial load is skipped in the first run, and only changed data is captured in subsequent runs. The setting is aligned with the same setting name in the [Azure Cosmos DB reference](https://github.com/Azure/azure-cosmosdb-spark/wiki/Configuration-references#reading-cosmosdb-collection-change-feed). For more details, see [Azure Cosmos DB change feed](#azure-cosmos-db-change-feed).
+
+### Sink transformation
+
+Settings specific to Azure Cosmos DB are available in the **Settings** tab of the sink transformation.
+
+**Update method:** Determines what operations are allowed on your database destination. The default is to only allow inserts. To update, upsert, or delete rows, an alter-row transformation is required to tag rows for those actions. For updates, upserts, and deletes, a key column or columns must be set to determine which row to alter.
+
+**Collection action:** Determines whether to recreate the destination collection prior to writing.
+* None: No action is taken on the collection.
+* Recreate: The collection is dropped and recreated.
+
+**Batch size**: An integer that represents how many objects are written to the Azure Cosmos DB collection in each batch. Usually, starting with the default batch size is sufficient. To further tune this value, note:
+
+- Azure Cosmos DB limits a single request's size to 2 MB. The formula is "Request Size = Single Document Size * Batch Size". If you hit an error saying "Request size is too large", reduce the batch size value. For example, with 4-KB documents, a batch size of 100 yields roughly a 400-KB request, well within the limit.
+- The larger the batch size, the better the throughput the service can achieve, but make sure you allocate enough RUs to support your workload.
+
+**Partition key:** Enter a string that represents the partition key for your collection. Example: ```/movies/title```
+
+**Throughput:** Set an optional value for the number of RUs you'd like to apply to your Azure Cosmos DB collection for each execution of this data flow. Minimum is 400.
+
+**Write throughput budget:** An integer that represents the RUs you want to allocate for this Data Flow write operation, out of the total throughput allocated to the collection.
+
+## Azure Cosmos DB change feed
+
+Azure Data Factory can get data from the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) by enabling it in the mapping data flow source transformation. With this connector option, you can read change feeds and apply transformations before loading the transformed data into destination datasets of your choice. You don't have to use Azure Functions to read the change feed and then write custom transformations. You can use this option to move data from one container to another, prepare change-feed-driven materialized views for a given purpose, automate container backup or recovery based on the change feed, and enable many more such use cases using the visual drag-and-drop capability of Azure Data Factory.
+
+Make sure you keep the pipeline and activity names unchanged, so that ADF can record the checkpoint and you automatically get changed data from the last run. If you change your pipeline name or activity name, the checkpoint is reset, and you start from the beginning or get changes from the current time in the next run.
+
+When you debug the pipeline, this feature works the same way. Be aware that the checkpoint is reset when you refresh your browser during the debug run. After you're satisfied with the pipeline result from the debug run, you can publish and trigger the pipeline. The first time you trigger your published pipeline, it automatically restarts from the beginning or gets changes from the current time.
+
+In the monitoring section, you can always rerun a pipeline. When you do so, the changed data is always captured from the previous checkpoint of your selected pipeline run.
+
+In addition, Azure Cosmos DB analytical store now supports Change Data Capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for Mongo DB (public preview). Azure Cosmos DB analytical store allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store.
+
+## Next steps
+Get started with [change data capture in Azure Cosmos DB analytical store ](../cosmos-db/get-started-change-data-capture.md).
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
addDays('<timestamp>', <days>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*days*> | Yes | Integer | The positive or negative number of days to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
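+As a brief illustration (example values, not part of the reference table), adding one day with the default "o" format:
+
+```
+addDays('2018-03-15T00:00:00Z', 1)
+```
+
+This example returns `"2018-03-16T00:00:00.0000000Z"`.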
addHours('<timestamp>', <hours>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*hours*> | Yes | Integer | The positive or negative number of hours to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
addMinutes('<timestamp>', <minutes>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*minutes*> | Yes | Integer | The positive or negative number of minutes to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
addSeconds('<timestamp>', <seconds>, '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*seconds*> | Yes | Integer | The positive or negative number of seconds to add |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
addToTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*interval*> | Yes | Integer | The number of specified time units to add | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
convertFromUtc('<timestamp>', '<destinationTimeZone>', '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Time Zone Values](/windows-hardware/manufacture/desktop/default-time-zones#time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
convertTimeZone('<timestamp>', '<sourceTimeZone>', '<destinationTimeZone>', '<fo
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Time Zone Values](/windows-hardware/manufacture/desktop/default-time-zones#time-zones), but you might have to remove any punctuation from the time zone name. | | <*destinationTimeZone*> | Yes | String | The name for the target time zone. For time zone names, see [Microsoft Time Zone Values](/windows-hardware/manufacture/desktop/default-time-zones#time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
convertToUtc('<timestamp>', '<sourceTimeZone>', '<format>'?)
| | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp | | <*sourceTimeZone*> | Yes | String | The name for the source time zone. For time zone names, see [Microsoft Time Zone Values](/windows-hardware/manufacture/desktop/default-time-zones#time-zones), but you might have to remove any punctuation from the time zone name. |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
formatDateTime('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
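+As a brief illustration (example values), formatting a timestamp with a custom pattern:
+
+```
+formatDateTime('03/15/2018 12:00:00', 'yyyy-MM-ddTHH:mm:ss')
+```
+
+This example returns `"2018-03-15T12:00:00"`.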
getFutureTime(<interval>, <timeUnit>, <format>?)
| | -- | - | -- | | <*interval*> | Yes | Integer | The number of specified time units to add | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
getPastTime(<interval>, <timeUnit>, <format>?)
| | -- | - | -- | | <*interval*> | Yes | Integer | The number of specified time units to subtract | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
startOfDay('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
startOfHour('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
startOfMonth('<timestamp>', '<format>'?)
| Parameter | Required | Type | Description | | | -- | - | -- | | <*timestamp*> | Yes | String | The string that contains the timestamp |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
subtractFromTime('<timestamp>', <interval>, '<timeUnit>', '<format>'?)
| <*timestamp*> | Yes | String | The string that contains the timestamp | | <*interval*> | Yes | Integer | The number of specified time units to subtract | | <*timeUnit*> | Yes | String | The unit of time to use with *interval*: "Second", "Minute", "Hour", "Day", "Week", "Month", "Year" |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
Optionally, you can specify a different format with the <*format*> parameter.
| Parameter | Required | Type | Description | | | -- | - | -- |
-| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss:fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
+| <*format*> | No | String | Either a [single format specifier](/dotnet/standard/base-types/standard-date-and-time-format-strings) or a [custom format pattern](/dotnet/standard/base-types/custom-date-and-time-format-strings). The default format for the timestamp is ["o"](/dotnet/standard/base-types/standard-date-and-time-format-strings) (yyyy-MM-ddTHH:mm:ss.fffffffK), which complies with [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) and preserves time zone information. |
||||| | Return value | Type | Description |
data-manager-for-agri How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-private-links.md
By using Azure Private Link, you can connect to an Azure Data Manager for Agricu
This article describes how to create a private endpoint and approval process for Azure Data Manager for Agriculture Preview.
+## Prerequisites
+
+[Create a virtual network](../virtual-network/quick-create-portal.md) in the same subscription as the Azure Data Manager for Agriculture Preview instance. This virtual network will allow automatic approval of the Private Link endpoint.
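+As a hedged sketch, creating such a virtual network with the Azure CLI might look like the following (placeholder names and address space; any valid configuration in the same subscription works):
+
+```azurecli
+az network vnet create \
+    --resource-group myResourceGroup \
+    --name myVNet \
+    --address-prefix 10.0.0.0/16 \
+    --subnet-name default \
+    --subnet-prefix 10.0.0.0/24
+```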
+ ## How to set up a private endpoint Private Endpoints can be created using the Azure portal, PowerShell, or the Azure CLI:
databox-online Azure Stack Edge Gpu 2303 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2303-release-notes.md
+
+ Title: Azure Stack Edge 2303 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2303 release.
++
+
+++ Last updated : 03/31/2023+++
+# Azure Stack Edge 2303 release notes
++
+The following release notes identify the critical open issues and the resolved issues for the 2303 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2303** release, which maps to software version **2.2.2257.1113**.
+
+## Supported update paths
+
+This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2026.5318).
+
+You can update to the latest version using the following update paths:
+
+| Current version | Update to | Then apply |
+| --| --| --|
+|2205 and earlier |2207 |2303
+|2207 and later |2303 |
+
+## What's new
+
+The 2303 release has the following new features and enhancements:
+
+- Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. In preparation for this requirement, we highly recommend that you update to the latest version as soon as possible.
+- You can deploy Azure Kubernetes service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md).
+
+## Issues fixed in this release
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|Core Azure Stack Edge platform and Azure Kubernetes Service (AKS) on Azure Stack Edge |Critical bug fixes to improve workload availability during two-node Azure Stack Edge update of core Azure Stack Edge platform and AKS on Azure Stack Edge. |
+
+<!--## Known issues in this release
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Need known issues in 2303 |-->
+
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create the SQL database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. The final command will look like this: `sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd"`. After this, steps 3-4 from the current documentation should be identical. |
+| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** aren't supported. |For Blob endpoints, partial updates of blobs after a Refresh may result in the updates not getting uploaded to the cloud. For example, a sequence of actions such as:<br> 1. Create a blob in the cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh the blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob not being updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
+|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error shows as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>`mkdir: can't create directory 'test': Permission denied`|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.||
+|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod. A minimal example follows this table.|
+|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Kubernetes |Port 31000 is reserved for the Kubernetes Dashboard. Port 31001 is reserved for the Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10 are reserved for the Kubernetes service and the Core DNS service respectively.|Don't use reserved IPs.|
+|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. A sketch of this workaround follows this table. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).|
+|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> |
+|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| |
+|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. ||
+|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. Such variables are also required for the Event Grid IoT Edge module and other applications to function on the Azure Stack Edge device. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see this [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201).|
+|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| |
+|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up Kubernetes. |If your device has 2 GPUs, you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes uses the one remaining GPU. |
+|**23.**|Custom script VM extension |There's a known issue with Windows VMs that were created in an earlier release if the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update, causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). |
+|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
+|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | |
+|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge is running on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution does not stop working past end of life, there are no plans to update it. |To run the latest version of Azure IoT Edge [LTSs](../iot-edge/version-history.md#version-history) with the latest updates and features on their Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). |
+|**27.**|AKS on Azure Stack Edge |When you update your AKS on Azure Stack Edge deployment from a previous preview version to the 2303 release, there's an additional nodepool rollout. |The update may take longer. |
+|**28.**|Azure portal |When an Arc deployment fails in this release, you see a generic *NO PARAM* error code, because not all errors are propagated in the portal. |There's no workaround for this behavior in this release. |
+|**29.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, delete the AKS cluster, modify the virtual networks, and then recreate the AKS cluster on your Azure Stack Edge. |
+|**30.**|AKS on Azure Stack Edge |In this release, attaching a persistent volume claim (PVC) takes a long time. As a result, some pods that use persistent volumes (PVs) come up slowly after the host reboots. |A workaround is to restart the nodepool VM by connecting via the Windows PowerShell interface of the device. |
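+
+As context for known issue 7, a workload deployed through a Deployment (which manages a replica set) is recreated automatically after the update, while a bare pod isn't. The following is a minimal sketch of such a manifest in JSON form, which `kubectl apply -f` accepts as an alternative to YAML; the `myapp` name and `nginx` image are illustrative placeholders, not values from this product's documentation.
+
+```json
+{
+  "apiVersion": "apps/v1",
+  "kind": "Deployment",
+  "metadata": { "name": "myapp" },
+  "spec": {
+    "replicas": 1,
+    "selector": { "matchLabels": { "app": "myapp" } },
+    "template": {
+      "metadata": { "labels": { "app": "myapp" } },
+      "spec": {
+        "containers": [
+          { "name": "myapp", "image": "nginx:1.23" }
+        ]
+      }
+    }
+  }
+}
+```
+
+Even with `replicas` set to 1, the replica set recreates the pod if it's deleted or its node restarts, which is why a replica set is recommended even for single-pod applications.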
+
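+As context for known issue 10, the MetalLB workaround creates two LoadBalancer services over the same pod selector, one for TCP and one for UDP, that share an address through a common sharing key and the same `spec.loadBalancerIP`. Here's a minimal sketch in JSON; the service names, selector, port, and IP address are illustrative placeholders. The `metallb.universe.tf/allow-shared-ip` annotation is MetalLB's sharing key.
+
+```json
+{
+  "apiVersion": "v1",
+  "kind": "List",
+  "items": [
+    {
+      "apiVersion": "v1",
+      "kind": "Service",
+      "metadata": {
+        "name": "dns-tcp",
+        "annotations": { "metallb.universe.tf/allow-shared-ip": "dns-shared" }
+      },
+      "spec": {
+        "type": "LoadBalancer",
+        "loadBalancerIP": "192.168.1.45",
+        "selector": { "app": "dns" },
+        "ports": [ { "protocol": "TCP", "port": 53 } ]
+      }
+    },
+    {
+      "apiVersion": "v1",
+      "kind": "Service",
+      "metadata": {
+        "name": "dns-udp",
+        "annotations": { "metallb.universe.tf/allow-shared-ip": "dns-shared" }
+      },
+      "spec": {
+        "type": "LoadBalancer",
+        "loadBalancerIP": "192.168.1.45",
+        "selector": { "app": "dns" },
+        "ports": [ { "protocol": "UDP", "port": 53 } ]
+      }
+    }
+  ]
+}
+```
+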
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 01/31/2023 Last updated : 03/30/2023 # Update your Azure Stack Edge Pro GPU
The procedure described in this article was performed using a different version
## About latest updates
-The current update is Update 2301. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are:
+The current update is Update 2303. This update installs two updates: the device update followed by the Kubernetes updates.
-- Device software version: Azure Stack Edge 2310 (2.2.2162.730)
-- Device Kubernetes version: Azure Stack Kubernetes Edge 2301 (2.2.2162.730)
+The associated versions for this update are:
+
+- Device software version: Azure Stack Edge 2303 (2.2.2257.1113)
+- Device Kubernetes version: Azure Stack Kubernetes Edge 2303 (2.2.2257.1113)
- Kubernetes server version: v1.24.6
- IoT Edge version: 0.1.0-beta15
- Azure Arc version: 1.8.14
- GPU driver version: 515.65.01
- CUDA version: 11.7
-For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2209-release-notes.md).
+For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2303-release-notes.md).
-**To apply 2301 update, your device must be running version 2207 or later.**
+**To apply 2303 update, your device must be running version 2207 or later.**
- If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.*
-- You can update to 2207 from 2106 or later, and then install 2301.
+- You can update to 2207 from 2106 or later, and then install 2303.
### Update Azure Kubernetes service on Azure Stack Edge > [!IMPORTANT] > Use the following procedure only if you are an SAP or a PMEC customer.
-If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2301.
+If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2303.
-Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2301:
+Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2303:
-1. Update your device version to 2301.
+1. Update your device version to 2303.
1. Update your Kubernetes version to 2210.
-1. Update your Kubernetes version to 2301.
+1. Update your Kubernetes version to 2303.
-If you are running 2210, you can update both your device version and Kubernetes version directly to 2301.
+If you are running 2210, you can update both your device version and Kubernetes version directly to 2303.
-In Azure portal, the process will require two clicks, the first update gets your device version to 2301 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2301.
+In the Azure portal, the process requires two clicks: the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update upgrades your Kubernetes version to 2303.
-From the local UI, you will have to run each update separately: update the device version to 2301, then update Kubernetes version to 2210, and then update Kubernetes version to 2301.
+From the local UI, you will have to run each update separately: update the device version to 2303, then update Kubernetes version to 2210, and then update Kubernetes version to 2303.
### Updates for a single-node vs two-node
Do the following steps to download the update from the Microsoft Update Catalog.
2. In the search box of the Microsoft Update Catalog, enter the Knowledge Base (KB) number of the hotfix or terms for the update you want to download. For example, enter **Azure Stack Edge**, and then click **Search**.
- The update listing appears as **Azure Stack Edge Update 2301**.
+ The update listing appears as **Azure Stack Edge Update 2303**.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
This procedure takes around 20 minutes to complete. Perform the following steps
5. The update starts. After the device is successfully updated, it restarts. The local UI is not accessible in this duration.
-6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2301**.
+6. After the restart is complete, you are taken to the **Sign in** page. To verify that the device software has been updated, in the local web UI, go to **Maintenance** > **Software update**. For the current release, the displayed software version should be **Azure Stack Edge 2303**.
7. You will now update the Kubernetes software version. Select the remaining three Kubernetes files together (files with the *Kubernetes_Package.0.exe*, *Kubernetes_Package.1.exe*, and *Kubernetes_Package.2.exe* suffixes) and repeat the above steps to apply the update.
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
If you're having issues with Defender for DevOps these frequently asked question
- [I donΓÇÖt see the results for my ADO projects in Microsoft Defender for Cloud](#i-dont-see-the-results-for-my-ado-projects-in-microsoft-defender-for-cloud) - [Why is my Azure DevOps repository not refreshing to healthy?](#why-is-my-azure-devops-repository-not-refreshing-to-healthy) - [I donΓÇÖt see Recommendations for findings](#i-dont-see-recommendations-for-findings)-- [What information does Defender for DevOps store about me and my enterprise, and where is the data stored?](#what-information-does-defender-for-devops-store-about-me-and-my-enterprise-and-where-is-the-data-stored)
+- [What information does Defender for DevOps store about me and my enterprise, and where is the data stored and processed?](#what-information-does-defender-for-devops-store-about-me-and-my-enterprise-and-where-is-the-data-stored-and-processed)
- [Why are Delete source code and Write Code permissions required for Azure DevOps?](#why-are-delete-source-and-write-code-permissions-required-for-azure-devops) - [Is Exemptions capability available and tracked for app sec vulnerability management](#is-exemptions-capability-available-and-tracked-for-app-sec-vulnerability-management) - [Is continuous, automatic scanning available?](#is-continuous-automatic-scanning-available)
Ensure that you've onboarded the project with the connector and that your reposi
You must have more than a [stakeholder license](https://azure.microsoft.com/pricing/details/devops/azure-devops-services/) to the repos to onboard them, and you need to be at least a Security Reader on the subscription where the connector is created. You can confirm that you've onboarded the repositories by checking that they appear in the inventory list in Microsoft Defender for Cloud.
-### What information does Defender for DevOps store about me and my enterprise, and where is the data stored?
+### What information does Defender for DevOps store about me and my enterprise, and where is the data stored and processed?
-Data Defender for DevOps connects to your source code management system, for example, Azure DevOps, GitHub, to provide a central console for your DevOps resources and security posture. Defender for DevOps processes and stores the following information:
+Defender for DevOps connects to your source code management system, for example, Azure DevOps or GitHub, to provide a central console for your DevOps resources and security posture. Defender for DevOps processes and stores the following information:
- Metadata on your connected source code management systems and associated repositories. This data includes user, organizational, and authentication information. - Scan results for recommendations and assessments results and details.
-Data is stored within the region your connector is created in. You should consider which region to create your connector in, for any data residency requirements as you design and create your DevOps connector.
+Data is stored within the region your connector is created in and flows into [Microsoft Defender for Cloud](defender-for-cloud-introduction.md). As you design and create your DevOps connector, consider any data residency requirements when choosing the region to create it in.
Defender for DevOps currently doesn't process or store your code, build logs, or audit logs.
You can learn more about [Microsoft Security DevOps](https://marketplace.visuals
## Next steps -- [Overview of Defender for DevOps](defender-for-devops-introduction.md)
+- [Overview of Defender for DevOps](defender-for-devops-introduction.md)
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
If your organization list is empty in the UI after you onboarded an Azure DevOps
For information on how to correct this issue, check out the [DevOps trouble shooting guide](troubleshooting-guide.md#troubleshoot-azure-devops-organization-connector-issues).
+### I have a large Azure DevOps organization with many repositories. Can I still onboard?
+
+Yes, there is no limit to how many Azure DevOps repositories you can onboard to Defender for DevOps.
+
+However, there are two main implications when onboarding large organizations: speed and throttling. The speed of discovery for your DevOps repositories is determined by the number of projects for each connector (approximately 100 projects per hour; for example, an organization with 500 projects takes roughly five hours to discover). Throttling can happen because Azure DevOps API calls have a [global rate limit](https://learn.microsoft.com/azure/devops/integrate/concepts/rate-limits?view=azure-devops) and we limit the calls for project discovery to use a small portion of overall quota limits.
+
+Consider using an alternative Azure DevOps identity (for example, an Organization Administrator account used as a service account) to avoid individual accounts being throttled when onboarding large organizations. Here are some scenarios where an alternate identity helps when onboarding a Defender for DevOps connector:
+- Large number of Azure DevOps Organizations and Projects (~500 Projects or more).
+- Large number of concurrent builds which peak during work hours.
+- Authorized user is a [Power Platform](https://learn.microsoft.com/power-platform/) user making additional Azure DevOps API calls, using up the global rate limit quotas.
+
+After you onboard the Azure DevOps repositories using this account and [configure and run the Microsoft Security DevOps Azure DevOps extension](https://learn.microsoft.com/azure/defender-for-cloud/azure-devops-extension) in your CI/CD pipeline, the scanning results appear almost immediately in Microsoft Defender for Cloud.
+ ## Next steps Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
digital-twins Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-security.md
description: Learn about Azure Digital Twins security best practices. Previously updated : 02/02/2023 Last updated : 03/31/2023
You can use either of these managed identity types to authenticate to a [custom-
For instructions on how to enable a managed identity for an Azure Digital Twins endpoint that can be used to route events, see [Endpoint options: Identity-based authentication](how-to-create-endpoints.md#endpoint-options-identity-based-authentication).
+### Using trusted Microsoft service for routing events to Event Hubs and Service Bus endpoints
+
+Azure Digital Twins can connect to Event Hubs and Service Bus endpoints for sending event data, using those resources' public endpoints. However, if those resources are bound to a VNet, connectivity to them is blocked by default. As a result, this configuration prevents Azure Digital Twins from sending event data to your resources.
+
+To resolve this, enable connectivity from your Azure Digital Twins instance to your Event Hubs or Service Bus resources through the *trusted Microsoft service* option (see [Trusted Microsoft services for Event Hubs](../event-hubs/event-hubs-ip-filtering.md#trusted-microsoft-services) and [Trusted Microsoft services for Service Bus](../service-bus-messaging/service-bus-service-endpoints.md#trusted-microsoft-services)).
+
+You'll need to complete the following steps to enable the trusted Microsoft service connection. A sample endpoint definition follows the steps.
+
+1. Your Azure Digital Twins instance must use a **system-assigned managed identity**. This allows other services to find your instance as a trusted Microsoft service. For instructions to set up a system-assigned managed identity on the instance, see [Enable managed identity for the instance](how-to-create-endpoints.md#1-enable-managed-identity-for-the-instance).
+1. Once a system-assigned managed identity is provisioned, grant permission for your instance's managed identity to access your Event Hubs or Service Bus endpoint (this feature isn't supported for Event Grid). For instructions to assign the proper roles, see [Assign Azure roles to the identity](how-to-create-endpoints.md#2-assign-azure-roles-to-the-identity).
+1. For Event Hubs and Service Bus endpoints that have firewall configurations in place, make sure you enable the **Allow trusted Microsoft services to bypass this firewall** setting.
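+
+For reference, here's a hedged sketch of what an identity-based Event Hubs endpoint can look like when defined in an ARM template for the `Microsoft.DigitalTwins/digitalTwinsInstances/endpoints` resource type. Treat the API version and exact property names as assumptions to verify against the current resource reference; the instance, namespace, and hub names are placeholders.
+
+```json
+{
+  "type": "Microsoft.DigitalTwins/digitalTwinsInstances/endpoints",
+  "apiVersion": "2022-10-31",
+  "name": "<instance-name>/<endpoint-name>",
+  "properties": {
+    "endpointType": "EventHub",
+    "authenticationType": "IdentityBased",
+    "endpointUri": "sb://<eventhubs-namespace>.servicebus.windows.net",
+    "entityPath": "<event-hub-name>"
+  }
+}
+```
+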
+ ## Private network access with Azure Private Link [Azure Private Link](../private-link/private-link-overview.md) is a service that enables you to access Azure resources (like [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Storage](../storage/common/storage-introduction.md), and [Azure Cosmos DB](../cosmos-db/introduction.md)) and Azure-hosted customer and partner services over a private endpoint in your [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md).
education-hub Custom Tenant Set Up Classroom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/custom-tenant-set-up-classroom.md
+
+ Title: How to create a custom Azure for Classroom Tenant and Billing Profile
+description: This article shows you how to make a custom tenant and billing profile for educators in your organization
++++ Last updated : 3/17/2023+++
+# Create a custom Tenant and Billing Profile for Microsoft for Teaching Paid
+
+This article is for IT admins using Azure for Classroom. When you sign up for this offer, a tenant and billing profile are already created for you, but this article walks you through how to create a custom tenant and billing profile and associate them with an educator.
+
+## Prerequisites
+
+- Be signed up for Azure for Classroom
+
+## Create a new tenant
+
+This section walks you through how to create a new tenant and associate it with your university tenant by using multi-tenant billing.
+
+1. Go to the Azure portal and search for "Azure Active Directory"
+2. Create a new tenant in the "Manage tenants" tab
+3. Fill in and finalize the tenant information
+4. After the tenant has been created, copy the Tenant ID of the new tenant
+
+## Associate new tenant with university tenant
+
+1. Go to "Cost Management" and click on "Access control (IAM)
+2. Click on "Associated billing tenants"
+3. Click "Add" and add the Tenant ID of the newly created tenant
+4. Check the box for Billing management
+1. Click "Add" to finalize the association between the newly created tenant and university tenant
+
+## Invite Educator to the newly created tenant
+
+This section walks through how to add an Educator to the newly created tenant.
+
+1. Switch tenants to the newly created tenant
+2. Go to "Users" in the new tenant
+3. Invite a user to this tenant
+4. Change the role to "Global administrator"
+5. Tell the Educator to accept the invitation to this tenant
+6. After the Educator has joined the tenant, go into the tenant properties and select "Yes" under "Access management for Azure resources".
+
+Now that you've created a custom Tenant, you can go into Education Hub and begin distributing credit to Educators to use in labs.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create an assignment and allocate credit](create-assignment-allocate-credit.md)
event-grid Advanced Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/advanced-filtering.md
- Title: Advanced filtering - Azure Event Grid IoT Edge | Microsoft Docs
-description: Advanced filtering in Event Grid on IoT Edge.
--- Previously updated : 02/15/2022---
-# Advanced filtering
-Event Grid allows specifying filters on any property in the json payload. These filters are modeled as set of `AND` conditions, with each outer condition having optional inner `OR` conditions. For each `AND` condition, you specify the following values:
-
-* `OperatorType` - The type of comparison.
-* `Key` - The json path to the property on which to apply the filter.
-* `Value` - The reference value against which the filter is run (or) `Values` - The set of reference values against which the filter is run.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-## JSON syntax
-
-The JSON syntax for an advanced filter is as follows:
-
-```json
-{
- "filter": {
- "advancedFilters": [{
- "operatorType": "NumberGreaterThanOrEquals",
- "key": "Data.Key1",
- "value": 5
- }, {
- "operatorType": "StringContains",
- "key": "Subject",
- "values": ["container1", "container2"]
- }
- ]
- }
-}
-```
-
-## Filtering on array values
-
-Event Grid doesn't support filtering on an array of values today. If an incoming event has an array value for the advanced filter's key, the matching operation fails. The incoming event ends up not matching with the event subscription.
-
-## AND-OR-NOT semantics
-
-Notice that in the json example given earlier, `AdvancedFilters` is an array. Think of each `AdvancedFilter` array element as an `AND` condition.
-
-For the operators that support multiple values (such as `NumberIn`, `NumberNotIn`, `StringIn`, etc.), each value is treated as an `OR` condition. So, a `StringBeginsWith("a", "b", "c")` will match any string value that starts with either `a` or `b` or `c`.
-
-> [!CAUTION]
-> The NOT operators - `NumberNotIn` and `StringNotIn` - behave as AND conditions on each value given in the `Values` field.
->
-> Treating them as OR conditions instead would make the filter an Accept-All filter and defeat the purpose of filtering.
-
-## Floating-point rounding behavior
-
-Event Grid uses the `decimal` .NET type to handle all numeric values. The number values specified in the event subscription JSON aren't subject to floating point rounding behavior.
-
-## Case sensitivity of string filters
-
-All string comparisons are case-insensitive. There's no way to change this behavior today.
-
-## Allowed advanced filter keys
-
-The `Key` property can either be a well-known top-level property, or be a json path with multiple dots, where each dot signifies stepping into a nested json object.
-
-Event Grid doesn't have any special meaning for the `$` character in the Key, unlike the JSONPath specification.
-
-### Event Grid schema
-
-For events in the Event Grid schema:
-
-* ID
-* Topic
-* Subject
-* EventType
-* DataVersion
-* Data.Prop1
-* Data.Prop*Prop2.Prop3.Prop4.Prop5
-
-### Custom event schema
-
-There's no restriction on the `Key` in custom event schema since Event Grid doesn't enforce any envelope schema on the payload.
-
-## Numeric single-value filter examples
-
-* NumberGreaterThan
-* NumberGreaterThanOrEquals
-* NumberLessThan
-* NumberLessThanOrEquals
-
-```json
-{
- "filter": {
- "advancedFilters": [
- {
- "operatorType": "NumberGreaterThan",
- "key": "Data.Key1",
- "value": 5
- },
- {
- "operatorType": "NumberGreaterThanOrEquals",
- "key": "Data.Key2",
- "value": *456
- },
- {
- "operatorType": "NumberLessThan",
- "key": "Data.P*P2.P3",
- "value": 1000
- },
- {
- "operatorType": "NumberLessThanOrEquals",
- "key": "Data.P*P2",
- "value": 999
- }
- ]
- }
-}
-```
-
-## Numeric range-value filter examples
-
-* NumberIn
-* NumberNotIn
-
-```json
-{
- "filter": {
- "advancedFilters": [
- {
- "operatorType": "NumberIn",
- "key": "Data.Key1",
- "values": [1, 10, 100]
- },
- {
- "operatorType": "NumberNotIn",
- "key": "Data.Key2",
- "values": [2, 3, 4.56]
- }
- ]
- }
-}
-```
-
-## String range-value filter examples
-
-* StringContains
-* StringBeginsWith
-* StringEndsWith
-* StringIn
-* StringNotIn
-
-```json
-{
- "filter": {
- "advancedFilters": [
- {
- "operatorType": "StringContains",
- "key": "Data.Key1",
- "values": ["microsoft", "azure"]
- },
- {
- "operatorType": "StringBeginsWith",
- "key": "Data.Key2",
- "values": ["event", "grid"]
- },
- {
- "operatorType": "StringEndsWith",
- "key": "Data.P3.P4",
- "values": ["jpg", "jpeg", "png"]
- },
- {
- "operatorType": "StringIn",
- "key": "RootKey",
- "values": ["exact", "string", "matches"]
- },
- {
- "operatorType": "StringNotIn",
- "key": "RootKey",
- "values": ["aws", "bridge"]
- }
- ]
- }
-}
-```
-
-## Boolean single-value filter examples
-
-* BoolEquals
-
-```json
-{
- "filter": {
- "advancedFilters": [
- {
- "operatorType": "BoolEquals",
- "key": "BoolKey1",
- "value": true
- },
- {
- "operatorType": "BoolEquals",
- "key": "BoolKey2",
- "value": false
- }
- ]
- }
-}
-```
event-grid Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/api.md
- Title: REST API - Azure Event Grid IoT Edge | Microsoft Docs
-description: REST API on Event Grid on IoT Edge.
----- Previously updated : 02/15/2022----
-# REST API
-This article describes the REST APIs of Azure Event Grid on IoT Edge
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-## Common API behavior
-
-### Base URL
-Event Grid on IoT Edge has the following APIs exposed over HTTP (port 5888) and HTTPS (port 4438).
-
-* Base URL for HTTP: http://eventgridmodule:5888
-* Base URL for HTTPS: https://eventgridmodule:4438
-
-### Request query string
-All API requests require the following query string parameter:
-
-`?api-version=2019-01-01-preview`
-
-### Request content type
-All API requests must have a **Content-Type**.
-
-In case of **EventGridSchema** or **CustomSchema**, the value of Content-Type can be one of the following values:
-
-`Content-Type: application/json`
-
-`Content-Type: application/json; charset=utf-8`
-
-In case of **CloudEventSchemaV1_0** in structured mode, the value of Content-Type can be one of the following values:
-
-`Content-Type: application/cloudevents+json`
-
-`Content-Type: application/cloudevents+json; charset=utf-8`
-
-`Content-Type: application/cloudevents-batch+json`
-
-`Content-Type: application/cloudevents-batch+json; charset=utf-8`
-
-In case of **CloudEventSchemaV1_0** in binary mode, refer to [documentation](https://github.com/cloudevents/spec/blob/main/cloudevents/bindings/http-protocol-binding.md) for details.
-
-### Error response
-All APIs return an error with the following payload:
-
-```json
-{
- "error":
- {
- "code": "<HTTP STATUS CODE>",
- "details":
- {
- "code": "<Detailed Error Code>",
- "message": "..."
- }
- }
-}
-```
-
-## Manage topics
-
-### Put topic (create / update)
-
-**Request**: ``` PUT /topics/<topic_name>?api-version=2019-01-01-preview ```
-
-**Payload**:
-
-```json
- {
- "name": "<topic_name>", // optional, inferred from URL. If specified must match URL topic_name
- "properties":
- {
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0" // optional
- }
- }
-```
-
-**Response**: HTTP 200
-
-**Payload**:
-
-```json
-{
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>",
- "name": "<topic_name>",
- "type": "Microsoft.EventGrid/topics",
- "properties":
- {
- "endpoint": "<get_request_base_url>/topics/<topic_name>/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0" // populated with EventGridSchema if not explicitly specified in PUT request
- }
-}
-```
-
-### Get topic
-
-**Request**: ``` GET /topics/<topic_name>?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200
-
-**Payload**:
-```json
-{
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>",
- "name": "<topic_name>",
- "type": "Microsoft.EventGrid/topics",
- "properties":
- {
- "endpoint": "<request_base_url>/topics/<topic_name>/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0"
- }
-}
-```
-
-### Get all topics
-
-**Request**: ``` GET /topics?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200
-
-**Payload**:
-```json
-[
- {
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>",
- "name": "<topic_name>",
- "type": "Microsoft.EventGrid/topics",
- "properties":
- {
- "endpoint": "<request_base_url>/topics/<topic_name>/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0"
- }
- },
- {
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>",
- "name": "<topic_name>",
- "type": "Microsoft.EventGrid/topics",
- "properties":
- {
- "endpoint": "<request_base_url>/topics/<topic_name>/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0"
- }
- }
-]
-```
-
-### Delete topic
-
-**Request**: ``` DELETE /topics/<topic_name>?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200, empty payload
-
-## Manage event subscriptions
-Samples in this section use `EndpointType=Webhook;`. The json samples for `EndpointType=EdgeHub / EndpointType=EventGrid` are in the next section.
-
-### Put event subscription (create / update)
-
-**Request**: ``` PUT /topics/<topic_name>/eventSubscriptions/<subscription_name>?api-version=2019-01-01-preview ```
-
-**Payload**:
-```json
-{
- "name": "<subscription_name>", // optional, inferred from URL. If specified must match URL subscription_name
- "properties":
- {
- "topicName": "<topic_name>", // optional, inferred from URL. If specified must match URL topic_name
- "eventDeliverySchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0", // optional
- "retryPolicy": //optional
- {
- "eventExpiryInMinutes": 120,
- "maxDeliveryAttempts": 50
- },
- "persistencePolicy": "true",
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<webhook_url>",
- "maxEventsPerBatch": 10, // optional
- "preferredBatchSizeInKilobytes": 1033 // optional
- }
- },
- "filter": // optional
- {
- "subjectBeginsWith": "...",
- "subjectEndsWith": "...",
- "isSubjectCaseSensitive": true|false,
- "includedEventTypes": ["...", "..."],
- "advancedFilters":
- [
- {
- "OperatorType": "BoolEquals",
- "Key": "...",
- "Value": "..."
- },
- {
- "OperatorType": "NumberLessThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberLessThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "NumberNotIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "StringIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringNotIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringBeginsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringEndsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringContains",
- "Key": "...",
- "Values": ["...", "...", "..."]
- }
- ]
- }
- }
-}
-```
-
-**Response**: HTTP 200
-
-**Payload**:
-
-```json
-{
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>/eventSubscriptions/<subscription_name>",
- "name": "<subscription_name>",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "properties":
- {
- "topicName": "<topic_name>",
- "eventDeliverySchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0", // populated with EventGridSchema if not explicitly specified in PUT request
- "retryPolicy": // only populated if specified in the PUT request
- {
- "eventExpiryInMinutes": 120,
- "maxDeliveryAttempts": 50
- },
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<webhook_url>",
- "maxEventsPerBatch": 10, // optional
- "preferredBatchSizeInKilobytes": 1033 // optional
- }
- },
- "filter": // only populated if specified in the PUT request
- {
- "subjectBeginsWith": "...",
- "subjectEndsWith": "...",
- "isSubjectCaseSensitive": true|false,
- "includedEventTypes": ["...", "..."],
- "advancedFilters":
- [
- {
- "OperatorType": "BoolEquals",
- "Key": "...",
- "Value": "..."
- },
- {
- "OperatorType": "NumberLessThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberLessThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "NumberNotIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "StringIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringNotIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringBeginsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringEndsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringContains",
- "Key": "...",
- "Values": ["...", "...", "..."]
- }
- ]
- }
- }
-}
-```
--
-### Get event subscription
-
-**Request**: ``` GET /topics/<topic_name>/eventSubscriptions/<subscription_name>?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200
-
-**Payload**:
-```json
-{
- "id": "/iotHubs/<iot_hub_name>/devices/<iot_edge_device_id>/modules/<eventgrid_module_name>/topics/<topic_name>/eventSubscriptions/<subscription_name>",
- "name": "<subscription_name>",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "properties":
- {
- "topicName": "<topic_name>",
- "eventDeliverySchema": "EventGridSchema | CustomEventSchema | CloudEventSchemaV1_0", // populated with EventGridSchema if not explicitly specified in PUT request
- "retryPolicy": // only populated if specified in the PUT request
- {
- "eventExpiryInMinutes": 120,
- "maxDeliveryAttempts": 50
- },
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<webhook_url>",
- "maxEventsPerBatch": 10, // optional
- "preferredBatchSizeInKilobytes": 1033 // optional
- }
- },
- "filter": // only populated if specified in the PUT request
- {
- "subjectBeginsWith": "...",
- "subjectEndsWith": "...",
- "isSubjectCaseSensitive": true|false,
- "includedEventTypes": ["...", "..."],
- "advancedFilters":
- [
- {
- "OperatorType": "BoolEquals",
- "Key": "...",
- "Value": "..."
- },
- {
- "OperatorType": "NumberLessThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThan",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberLessThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberGreaterThanOrEquals",
- "Key": "...",
- "Value": <number>
- },
- {
- "OperatorType": "NumberIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "NumberNotIn",
- "Key": "...",
- "Values": [<number>, <number>, <number>]
- },
- {
- "OperatorType": "StringIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringNotIn",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringBeginsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringEndsWith",
- "Key": "...",
- "Values": ["...", "...", "..."]
- },
- {
- "OperatorType": "StringContains",
- "Key": "...",
- "Values": ["...", "...", "..."]
- }
- ]
- }
- }
-}
-```
-
-### Get event subscriptions
-
-**Request**: ``` GET /topics/<topic_name>/eventSubscriptions?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200
-
-**Payload**:
-```json
-[
- {
- // same event-subscription json as that returned from Get-EventSubscription above
- },
- {
- },
- ...
-]
-```
-
-### Delete event subscription
-
-**Request**: ``` DELETE /topics/<topic_name>/eventSubscriptions/<subscription_name>?api-version=2019-01-01-preview ```
-
-**Response**: HTTP 200, no payload
--
-## Publish events API
-
-### Send batch of events (in Event Grid schema)
-
-**Request**: ``` POST /topics/<topic_name>/events?api-version=2019-01-01-preview ```
-
-```json
-[
- {
- "id": "<user-defined-event-id>",
- "topic": "<topic_name>",
- "subject": "",
- "eventType": "",
- "eventTime": ""
- "dataVersion": "",
- "metadataVersion": "1",
- "data":
- ...
- }
-]
-```
-
-**Response**: HTTP 200, empty payload
--
-**Payload field descriptions**
-- ```Id``` is mandatory. It can be any string value that's populated by the caller. Event Grid does NOT do any duplicate detection or enforce any semantics on this field.
-- ```Topic``` is optional, but if specified must match the topic_name from the request URL.
-- ```Subject``` is mandatory, can be any string value.
-- ```EventType``` is mandatory, can be any string value.
-- ```EventTime``` is mandatory, it's not validated but should be a proper DateTime.
-- ```DataVersion``` is mandatory.
-- ```MetadataVersion``` is optional, if specified it MUST be a string with the value ```"1"```.
-- ```Data``` is optional, and can be any JSON token (number, string, boolean, array, object).
-
-### Send batch of events (in custom schema)
-
-**Request**: ``` POST /topics/<topic_name>/events?api-version=2019-01-01-preview ```
-
-```json
-[
- {
- ...
- }
-]
-```
-
-**Response**: HTTP 200, empty payload
--
-**Payload Restrictions**
-- MUST be an array of events.
-- Each array entry MUST be a JSON object.
-- No other constraints (other than payload size).
-
-## Examples
-
-### Set up topic with EventGrid schema
-Sets up a topic to require events to be published in **eventgridschema**.
-
-```json
- {
- "name": "myeventgridtopic",
- "properties":
- {
- "inputSchema": "EventGridSchema"
- }
- }
-```
-
-### Set up topic with custom schema
-Sets up a topic to require events to be published in `customschema`.
-
-```json
- {
- "name": "mycustomschematopic",
- "properties":
- {
- "inputSchema": "CustomSchema"
- }
- }
-```
-
-### Set up topic with cloud event schema
-Sets up a topic to require events to be published in `cloudeventschema`.
-
-```json
- {
- "name": "mycloudeventschematopic",
- "properties":
- {
- "inputSchema": "CloudEventSchemaV1_0"
- }
- }
-```
-
-### Set up WebHook as destination, events to be delivered in eventgridschema
-Use this destination type to send events to any other module (that hosts an HTTP endpoint) or to any HTTP addressable endpoint on the network/internet.
-
-```json
-{
- "properties":
- {
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<webhook_url>",
- "eventDeliverySchema": "eventgridschema",
- }
- }
- }
-}
-```
-
-Constraints on the `endpointUrl` attribute:
-- It must be non-null.
-- It must be an absolute URL.
-- If outbound__webhook__httpsOnly is set to true in the EventGridModule settings, it must be HTTPS only.
-- If outbound__webhook__httpsOnly is set to false, it can be HTTP or HTTPS.
-
-Constraints on the `eventDeliverySchema` property:
-- It must match the subscribing topic's input schema.
-- It can be null. It defaults to the topic's input schema.
-
-### Set up IoT Edge as destination
-
-Use this destination to send events to IoT Edge Hub and be subjected to edge hub's routing/filtering/forwarding subsystem.
-
-```json
-{
- "properties":
- {
- "destination":
- {
- "endpointType": "EdgeHub",
- "properties":
- {
- "outputName": "<eventgridmodule_output_port_name>"
- }
- }
- }
-}
-```
-
-### Set up Event Grid Cloud as destination
-
-Use this destination to send events to Event Grid in the cloud (Azure). You'll need to first set up a user topic in the cloud to which events should be sent, before creating an event subscription on the edge.
-
-```json
-{
- "properties":
- {
- "destination":
- {
- "endpointType": "EventGrid",
- "properties":
- {
- "endpointUrl": "<eventgrid_user_topic_url>",
- "sasKey": "<user_topic_sas_key>",
- "topicName": "<new value to populate in forwarded EventGridEvent.Topic>" // if not specified, the Topic field on every event gets nulled out before being sent to Azure Event Grid
- }
- }
- }
-}
-```
-
-EndpointUrl:
-- It must be non-null.
-- It must be an absolute URL.
-- The path `/api/events` must be defined in the request URL path.
-- It must have `api-version=2018-01-01` in the query string.
-- If outbound__eventgrid__httpsOnly is set to true in the EventGridModule settings (true by default), it must be HTTPS only.
-- If outbound__eventgrid__httpsOnly is set to false, it can be HTTP or HTTPS.
-- If outbound__eventgrid__allowInvalidHostnames is set to false (false by default), it must target one of the following endpoints:
- - `eventgrid.azure.net`
- - `eventgrid.azure.us`
- - `eventgrid.azure.cn`
-
-SasKey:
-- Must be non-null.
-
-TopicName:
-- If the Subscription.EventDeliverySchema is set to EventGridSchema, the value from this field is put into every event's Topic field before being forwarded to Event Grid in the cloud.
-- If the Subscription.EventDeliverySchema is set to CustomEventSchema, this property is ignored and the custom event payload is forwarded exactly as it was received.
-
-## Set up Event Hubs as a destination
-
-To publish to an Event Hub, set the `endpointType` to `eventHub` and provide:
-
-* connectionString: Connection string for the specific Event Hub you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity specific. Using a namespace connection string will not work. You can generate an entity specific connection string by navigating to the specific Event Hub you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventHub",
- "properties": {
- "connectionString": "<your-event-hub-connection-string>"
- }
- }
- }
- }
- ```
-
-## Set up Service Bus Queues as a destination
-
-To publish to a Service Bus Queue, set the `endpointType` to `serviceBusQueue` and provide:
-
-* connectionString: Connection string for the specific Service Bus Queue you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity specific. Using a namespace connection string will not work. Generate an entity specific connection string by navigating to the specific Service Bus Queue you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "serviceBusQueue",
- "properties": {
- "connectionString": "<your-service-bus-queue-connection-string>"
- }
- }
- }
- }
- ```
-
-## Set up Service Bus Topics as a destination
-
-To publish to a Service Bus Topic, set the `endpointType` to `serviceBusTopic` and provide:
-
-* connectionString: Connection string for the specific Service Bus Topic you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity specific. Using a namespace connection string will not work. Generate an entity specific connection string by navigating to the specific Service Bus Topic you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "serviceBusTopic",
- "properties": {
- "connectionString": "<your-service-bus-topic-connection-string>"
- }
- }
- }
- }
- ```
-
-## Set up Storage Queues as a destination
-
-To publish to a Storage Queue, set the `endpointType` to `storageQueue` and provide:
-
-* queueName: Name of the Storage Queue you're publishing to.
-* connectionString: Connection string for the Storage Account the Storage Queue is in.
-
- >[!NOTE]
- > Unlike Event Hubs, Service Bus Queues, and Service Bus Topics, the connection string used for Storage Queues is not entity specific. Instead, it must be the connection string for the Storage Account.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "storageQueue",
- "properties": {
- "queueName": "<your-storage-queue-name>",
- "connectionString": "<your-storage-account-connection-string>"
- }
- }
- }
- }
- ```
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/concepts.md
- Title: Concepts - Azure Event Grid IoT Edge | Microsoft Docs
-description: Concepts in Event Grid on IoT Edge.
---- Previously updated : 02/15/2022----
-# Event Grid concepts
-
-This article describes the main concepts in Azure Event Grid.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-## Events
-
-An event is the smallest amount of information that fully describes something that happened in the system. Every event has common information like: source of the event, time the event took place, and unique identifier. Every event also has specific information that is only relevant to the specific type of event. The support for an event of size up to 1 MB is currently in preview.
-
-For the properties that are included in an event, see [Azure Event Grid event schema](event-schemas.md).
-
-## Publishers
-
-A publisher is the user or organization that decides to send events to Event Grid. You can publish events from your own application.
-
-## Event sources
-
-An event source is where the event happens. Each event source is related to one or more event types. For example, Azure Storage is the event source for blob created events. Your application is the event source for custom events that you define. Event sources are responsible for sending events to Event Grid.
-
-## Topics
-
-The event grid topic provides an endpoint where the source sends events. The publisher creates the event grid topic, and decides whether an event source needs one topic or more than one topic. A topic is used for a collection of related events. To respond to certain types of events, subscribers decide which topics to subscribe to.
-
-When designing your application, you have the flexibility to decide on how many topics to create. For large solutions, create a custom topic for each category of related events. For example, consider an application that sends events related to modifying user accounts and processing orders. It's unlikely any event handler wants both categories of events. Create two custom topics and let event handlers subscribe to the one that interests them. For small solutions, you might prefer to send all events to a single topic. Event subscribers can filter for the event types they want.
-
-See [REST API documentation](api.md) on how to manage topics in Event Grid.
-
-## Event subscriptions
-
-A subscription tells Event Grid which events on a topic you're interested in receiving. When creating the subscription, you provide an endpoint for handling the event. You can filter the events that are sent to the endpoint.
-
-See [REST API documentation](api.md) on how to manage subscriptions in Event Grid.
-
-## Event handlers
-
-From an Event Grid perspective, an event handler is the place where the event is sent. The handler takes further action to process the event. Event Grid supports several handler types. You can use a supported Azure service or your own web hook as the handler. Depending on the type of handler, Event Grid follows different mechanisms to guarantee the delivery of the event. If the destination event handler is an HTTP web hook, the event is retried until the handler returns a status code of `200 ΓÇô OK`. For edge Hub, if the event is delivered without any exception, it is considered successful.
-
-## Security
-
-Event Grid provides security for subscribing to topics, and publishing topics. For more information, see [Event Grid security and authentication](security-authentication.md).
-
-## Event delivery
-
-If Event Grid can't confirm that an event has been received by the subscriber's endpoint, it redelivers the event. For more information, see [Event Grid message delivery and retry](delivery-retry.md).
-
-## Batching
-
-When using a custom topic, events must always be published in an array. For low throughput scenarios, the array will have only one value. For high volume use cases, we recommend that you batch several events together per publish to achieve higher efficiency. Batches can be up to 1 MB. Each event should still not be greater than 1 MB (preview).
event-grid Configure Api Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-api-protocol.md
- Title: Configure API protocols - Azure Event Grid IoT Edge | Microsoft Docs
-description: Learn about the possible protocol configurations of an Event Grid module.
- Previously updated : 02/15/2022
-# Configure Event Grid API protocols
-
-This guide gives examples of the possible protocol configurations of an Event Grid module. The Event Grid module exposes APIs for its management and runtime operations. The following table captures the protocols and ports.
-
-| Protocol | Port | Description |
-| - | | |
-| HTTP | 5888 | Turned off by default. Useful only during testing. Not suitable for production workloads.
-| HTTPS | 4438 | Default
-
-See the [Security and authentication](security-authentication.md) guide for all the possible configurations.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-## Expose HTTPS to IoT Modules on the same edge network
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge"
- ]
-}
- ```
-
-## Enable HTTPS to other IoT modules and non-IoT workloads
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
-}
- ```
-
->[!NOTE]
-> The **PortBindings** section allows you to map internal ports to ports of the container host. This feature makes it possible to reach the Event Grid module from outside the IoT Edge container network, if the IoT edge device is reachable publicly.
-
-## Expose HTTP and HTTPS to IoT modules on the same edge network
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=enabled",
- "inbound__serverAuth__serverCert__source=IoTEdge"
- ]
-}
- ```
-
-## Enable HTTP and HTTPS to other IoT modules and non-IoT workloads
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=enabled",
- "inbound__serverAuth__serverCert__source=IoTEdge"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ],
- "5888/tcp": [
- {
- "HostPort": "5888"
- }
- ]
- }
- }
-}
- ```
-
->[!NOTE]
-> By default, every IoT module is part of the bridge network created by the IoT Edge runtime, which enables different IoT modules on the same network to communicate with each other. **PortBindings** allows you to map a container internal port onto the host machine, so the Event Grid module's port can be reached from outside the IoT Edge network.
-
->[!IMPORTANT]
-> While the ports can be made accessible outside the IoT Edge network, client authentication still controls who is actually allowed to make calls into the module.
event-grid Configure Client Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-client-auth.md
- Title: Configure client authentication of incoming calls - Azure Event Grid IoT Edge | Microsoft Docs
-description: Learn about the possible client authentication configurations for the Event Grid module.
- Previously updated : 02/15/2022
-# Configure client authentication of incoming calls
-
-This guide gives examples of the possible client authentication configurations for the Event Grid module. The Event Grid module supports two types of client authentication:
-
-* Shared access signature (SAS) key-based
-* Certificate-based
-
-See the [Security and authentication](security-authentication.md) guide for all the possible configurations.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## Enable certificate-based client authentication, no self-signed certificates
-
-```json
- {
- "Env": [
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=false"
- ]
-}
- ```
-
-## Enable certificate-based client authentication, allow self-signed certificates
-
-```json
- {
- "Env": [
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true"
- ]
-}
-```
-
->[!NOTE]
->Set the property **inbound__clientAuth__clientCert__allowUnknownCA** to **true** only in test environments, as you might typically use self-signed certificates. For production workloads, we recommend that you set this property to **false** and use certificates issued by a certificate authority (CA).
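-
-For illustration, a client that authenticates this way presents its certificate on each call. A minimal sketch with curl; the certificate and key file paths and the topic name are placeholders:
-
-```sh
-# Read a topic, presenting a client certificate for authentication
-curl -k --cert client-cert.pem --key client-key.pem -X GET -g \
-  https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1?api-version=2019-01-01-preview
-```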
-
-## Enable certificate-based and SAS key-based client authentication
-
-```json
- {
- "Env": [
- "inbound__clientAuth__sasKeys__enabled=true",
- "inbound__clientAuth__sasKeys__key1=<some-secret1-here>",
- "inbound__clientAuth__sasKeys__key2=<some-secret2-here>",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true"
- ]
-}
- ```
-
->[!NOTE]
->SAS key-based client authentication allows a non-IoT Edge module to perform management and runtime operations, assuming the API ports are accessible outside the IoT Edge network.
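-
-For example, such a client could publish events by presenting one of the configured keys. A minimal sketch with curl, assuming the module accepts the key in the `aeg-sas-key` header as the cloud service does; event.json and the topic name are placeholders:
-
-```sh
-# Publish with SAS key authentication; the key must match inbound__clientAuth__sasKeys__key1 or key2
-curl -k -H "Content-Type: application/json" -H "aeg-sas-key: <some-secret1-here>" -X POST -g \
-  -d @event.json \
-  https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1/events?api-version=2019-01-01-preview
-```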
event-grid Configure Identity Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-identity-auth.md
- Title: Configure identity - Azure Event Grid IoT Edge | Microsoft Docs
-description: Configure Event Grid module's identity
- Previously updated : 02/15/2022
-# Configure identity for the Event Grid module
-
-This article shows how to configure identity for Event Grid on Edge. By default, the Event Grid module presents its identity certificate as configured by the IoT security daemon. Event Grid on Edge presents its identity certificate with its outgoing calls when it delivers events. A subscriber can then validate that it was the Event Grid module that sent the event before accepting it.
-
-See the [Security and authentication](security-authentication.md) guide for all the possible configurations.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## Always present identity certificate
-Here's an example configuration for always presenting an identity certificate on outgoing calls.
-
-```json
- {
- "Env": [
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge"
- ]
-}
- ```
-
-## Don't present identity certificate
-Here's an example configuration for not presenting an identity certificate on outgoing calls.
-
-```json
- {
- "Env": [
- "outbound__clientAuth__clientCert__enabled=false"
- ]
-}
- ```
event-grid Configure Webhook Subscriber Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure-webhook-subscriber-auth.md
- Title: Configure webhook subscriber authentication - Azure Event Grid IoT Edge | Microsoft Docs
-description: Configure webhook subscriber authentication
- Previously updated : 02/15/2022
-# Configure webhook subscriber authentication
-
-This guide gives examples of the possible webhook subscriber configurations for an Event Grid module. By default, only HTTPS endpoints are accepted for webhook subscribers. The Event Grid module rejects subscribers that present a self-signed certificate.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## Allow only HTTPS subscriber
-
-```json
- {
- "Env": [
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=false"
- ]
-}
- ```
-
-## Allow HTTPS subscriber with self-signed certificate
-
-```json
- {
- "Env": [
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ]
-}
- ```
-
->[!NOTE]
->Set the property `outbound__webhook__allowUnknownCA` to `true` only in test environments, as you might typically use self-signed certificates. For production workloads, we recommend that you set it to **false**.
-
-## Allow HTTPS subscriber but skip certificate validation
-
-```json
- {
- "Env": [
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=true",
- "outbound__webhook__allowUnknownCA=false"
- ]
-}
- ```
-
->[!NOTE]
->Set the property `outbound__webhook__skipServerCertValidation` to `true` only in test environments, as you might not be presenting a certificate that needs to be authenticated. For production workloads, we recommend that you set it to **false**.
-
-## Allow both HTTP and HTTPS with self-signed certificates
-
-```json
- {
- "Env": [
- "outbound__webhook__httpsOnly=false",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ]
-}
- ```
-
->[!NOTE]
->Set the property `outbound__webhook__httpsOnly` to `false` only in test environments, as you might want to bring up an HTTP subscriber first. For production workloads, we recommend that you set it to **true**.
event-grid Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/configure.md
- Title: Configuration - Azure Event Grid IoT Edge | Microsoft Docs
-description: Configuration in Event Grid on IoT Edge.
- Previously updated : 02/15/2022
-# Event Grid Configuration
-
-Event Grid provides many configurations that can be modified per environment. The following section is a reference to all the available options and their defaults.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## TLS configuration
-
-To learn about TLS configuration in general, see [Security and Authentication](security-authentication.md). Examples of its usage can be found in [this article](configure-api-protocol.md).
-
-| Property Name | Description |
-| - | |
-|`inbound__serverAuth__tlsPolicy`| TLS Policy of the Event Grid module. Default value is HTTPS only.
-|`inbound__serverAuth__serverCert__source`| Source of server certificate used by the Event Grid Module for its TLS configuration. Default value is IoT Edge.
-
-## Incoming client authentication
-
-To learn about client authentication in general, see [Security and Authentication](security-authentication.md). Examples can be found in [this article](configure-client-auth.md).
-
-| Property Name | Description |
-| - | |
-|`inbound__clientAuth__clientCert__enabled`| To turn on/off certificate-based client authentication. Default value is true.
-|`inbound__clientAuth__clientCert__source`| Source for validating client certificates. Default value is IoT Edge.
-|`inbound__clientAuth__clientCert__allowUnknownCA`| Policy to allow a self-signed client certificate. Default value is true.
-|`inbound__clientAuth__sasKeys__enabled`| To turn on/off SAS key-based client authentication. Default value is off.
-|`inbound__clientAuth__sasKeys__key1`| One of the values to validate incoming requests.
-|`inbound__clientAuth__sasKeys__key2`| Optional second value to validate incoming requests.
-
-## Outgoing client authentication
-To learn about client authentication in general, see [Security and Authentication](security-authentication.md). Examples can be found in [this article](configure-identity-auth.md).
-
-| Property Name | Description |
-| - | |
-|`outbound__clientAuth__clientCert__enabled`| To turn on/off attaching an identity certificate for outgoing requests. Default value is true.
-|`outbound__clientAuth__clientCert__source`| Source for retrieving Event Grid module's outgoing certificate. Default value is IoT Edge.
-
-## Webhook event handlers
-
-To learn about webhook subscriber authentication in general, see [Security and Authentication](security-authentication.md). Examples can be found in [this article](configure-webhook-subscriber-auth.md).
-
-| Property Name | Description |
-| - | |
-|`outbound__webhook__httpsOnly`| Policy to control whether only HTTPS subscribers will be allowed. Default value is true (only HTTPS).
-|`outbound__webhook__skipServerCertValidation`| Flag to control whether to validate the subscriber's certificate. Default value is true.
-|`outbound__webhook__allowUnknownCA`| Policy to control whether a self-signed certificate can be presented by a subscriber. Default value is true.
-
-## Delivery and retry
-
-To learn about this feature in general, see [Delivery and Retry](delivery-retry.md).
-
-| Property Name | Description |
-| - | |
-| `broker__defaultMaxDeliveryAttempts` | Maximum number of attempts to deliver an event. Default value is 30.
-| `broker__defaultEventTimeToLiveInSeconds` | Time-to-live (TTL) in seconds after which an event will be dropped if not delivered. Default value is **7200** seconds.
-
-## Output batching
-
-To learn about this feature in general, see [Delivery and Output batching](delivery-output-batching.md).
-
-| Property Name | Description |
-| - | |
-| `api__deliveryPolicyLimits__maxBatchSizeInBytes` | Maximum value allowed for the `ApproxBatchSizeInBytes` knob. Default value is `1_058_576`.
-| `api__deliveryPolicyLimits__maxEventsPerBatch` | Maximum value allowed for the `MaxEventsPerBatch` knob. Default value is `50`.
-| `broker__defaultMaxBatchSizeInBytes` | Maximum delivery request size when only `MaxEventsPerBatch` is specified. Default value is `1_058_576`.
-| `broker__defaultMaxEventsPerBatch` | Maximum number of events to add to a batch when only `MaxBatchSizeInBytes` is specified. Default value is `10`.
-
-## Metrics
-
-To learn about using metrics with Event Grid on IoT Edge, see [monitor topics and subscriptions](monitor-topics-subscriptions.md)
-
-| Property Name | Description |
-| - | |
-| `metrics__reporterType` | Reporter type for metrics endpoint. Default is `none` and disables metrics. Setting to `prometheus` enables metrics in the Prometheus exposition format.
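-
-As an illustration, several of these settings can be combined in one set of container create options. The following sketch enforces HTTPS, disallows self-signed client certificates, tightens delivery retries, and enables Prometheus metrics; the values are examples, not recommendations:
-
-```json
-{
-    "Env": [
-        "inbound__serverAuth__tlsPolicy=strict",
-        "inbound__clientAuth__clientCert__allowUnknownCA=false",
-        "broker__defaultMaxDeliveryAttempts=10",
-        "broker__defaultEventTimeToLiveInSeconds=3600",
-        "metrics__reporterType=prometheus"
-    ]
-}
-```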
event-grid Delivery Output Batching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/delivery-output-batching.md
- Title: Output batching in Azure Event Grid IoT Edge | Microsoft Docs
-description: Output batching in Event Grid on IoT Edge.
- Previously updated : 02/15/2022
-# Output batching
-
-Event Grid supports delivering more than one event in a single delivery request. This feature makes it possible to increase the overall delivery throughput without paying the HTTP per-request overhead. Batching is turned off by default and can be turned on per subscription.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-> [!WARNING]
-> The maximum allowed duration to process each delivery request does not change, even though the subscriber code potentially has to do more work per batched request. Delivery timeout defaults to 60 seconds.
-
-## Batching policy
-
-Event Grid's batching behavior can be customized per subscriber, by tweaking the following two settings:
-
-* Maximum events per batch
-
- This setting sets an upper limit on the number of events that can be added to a batched delivery request.
-
-* Preferred Batch Size In Kilobytes
-
- This knob is used to further control the maximum number of kilobytes that can be sent per delivery request.
-
-## Batching behavior
-
-* All or none
-
- Event Grid operates with all-or-none semantics. It doesn't support partial success of a batch delivery. Subscribers should be careful to only ask for as many events per batch as they can reasonably handle in 60 seconds.
-
-* Optimistic batching
-
- The batching policy settings aren't strict bounds on the batching behavior, and are respected on a best-effort basis. At low event rates, you'll often observe the batch size being less than the requested maximum events per batch.
-
-* Default is set to OFF
-
- By default, Event Grid only adds one event to each delivery request. To turn on batching, set at least one of the settings mentioned earlier in the article in the event subscription JSON.
-
-* Default values
-
- It isn't necessary to specify both settings (maximum events per batch and preferred batch size in kilobytes) when creating an event subscription. If only one setting is set, Event Grid uses (configurable) default values. See the following sections for the default values, and how to override them.
-
-## Turn on output batching
-
-```json
-{
- "properties":
- {
- "destination":
- {
- "endpointType": "WebHook",
- "properties":
- {
- "endpointUrl": "<your_webhook_url>",
- "maxEventsPerBatch": 10,
- "preferredBatchSizeInKilobytes": 64
- }
-        }
- }
-}
-```
-
-## Configuring maximum allowed values
-
-The following deployment time settings control the maximum value allowed when creating an event subscription.
-
-| Property Name | Description |
-| - | -- |
-| `api__deliveryPolicyLimits__maxpreferredBatchSizeInKilobytes` | Maximum value allowed for the `PreferredBatchSizeInKilobytes` knob. Default `1033`.
-| `api__deliveryPolicyLimits__maxEventsPerBatch` | Maximum value allowed for the `MaxEventsPerBatch` knob. Default `50`.
-
-## Configuring runtime default values
-
-The following deployment time settings control the runtime default value of each knob when it isn't specified in the Event Subscription. To reiterate, at least one knob must be set on the Event Subscription to turn on batching behavior.
-
-| Property Name | Description |
-| - | -- |
-| `broker__defaultMaxBatchSizeInBytes` | Maximum delivery request size when only `MaxEventsPerBatch` is specified. Default `1_058_576`.
-| `broker__defaultMaxEventsPerBatch` | Maximum number of events to add to a batch when only `MaxBatchSizeInBytes` is specified. Default `10`.
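-
-For example, a subscription that sets only `MaxEventsPerBatch` still turns on batching; the batch size limit then falls back to the configured `broker__defaultMaxBatchSizeInBytes`. A minimal sketch with a placeholder endpoint:
-
-```json
-{
-    "properties": {
-        "destination": {
-            "endpointType": "WebHook",
-            "properties": {
-                "endpointUrl": "<your_webhook_url>",
-                "maxEventsPerBatch": 10
-            }
-        }
-    }
-}
-```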
event-grid Delivery Retry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/delivery-retry.md
- Title: Delivery and retry - Azure Event Grid IoT Edge | Microsoft Docs
-description: Delivery and retry in Event Grid on IoT Edge.
- Previously updated : 02/15/2022
-# Delivery and retry
-
-Event Grid provides durable delivery. It tries to deliver each message at least once for each matching subscription immediately. If a subscriber's endpoint doesn't acknowledge receipt of an event or if there is a failure, Event Grid retries delivery based on a fixed **retry schedule** and **retry policy**. By default, the Event Grid module delivers one event at a time to the subscriber. The payload is however an array with a single event. You can have the module deliver more than one event at a time by enabling the output batching feature. For details about this feature, see [output batching](delivery-output-batching.md).
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-> [!IMPORTANT]
->There is no persistence support for event data. This means redeploying or restarting the Event Grid module will cause you to lose any events that aren't yet delivered.
-
-## Retry schedule
-
-Event Grid waits up to 60 seconds for a response after delivering a message. If the subscriber's endpoint doesn't acknowledge the delivery, the message is enqueued in one of the backoff queues for subsequent retries.
-
-There are two pre-configured backoff queues that determine the schedule on which a retry will be attempted. They are:
-
-| Schedule | Description |
-| | |
-| 1 minute | Messages that end up here are attempted every minute.
-| 10 minutes | Messages that end up here are attempted every 10th minute.
-
-### How it works
-
-1. A message arrives at the Event Grid module, and an immediate delivery attempt is made.
-1. If delivery fails, the message is enqueued into the 1-minute queue and retried after a minute.
-1. If delivery continues to fail, the message is enqueued into the 10-minute queue and retried every 10 minutes.
-1. Deliveries are attempted until successful or retry policy limits are reached.
-
-## Retry policy limits
-
-There are two configurations that determine retry policy. They are:
-
-* Maximum number of attempts
-* Event time-to-live (TTL)
-
-An event will be dropped if either of the retry policy limits is reached. The retry schedule itself was described in the Retry schedule section. These limits can be configured either for all subscribers or on a per-subscription basis. The following sections describe each in further detail.
-
-## Configuring defaults for all subscribers
-
-There are two properties, `broker__defaultMaxDeliveryAttempts` and `broker__defaultEventTimeToLiveInSeconds`, that can be configured as part of the Event Grid deployment, which control retry policy defaults for all subscribers.
-
-| Property Name | Description |
-| - | |
-| `broker__defaultMaxDeliveryAttempts` | Maximum number of attempts to deliver an event. Default value: 30.
| `broker__defaultEventTimeToLiveInSeconds` | Event TTL in seconds after which an event will be dropped if not delivered. Default value: **7200** seconds.
-
-## Configuring defaults per subscriber
-
-You can also specify retry policy limits on a per-subscription basis.
-See our [API documentation](api.md) for information on how to configure defaults per subscriber. Subscription-level defaults override the module-level configurations.
-
-## Examples
-
-The following example sets up the retry policy in the Event Grid module with maxNumberOfAttempts = 3 and an event TTL of 30 minutes:
-
-```json
-{
- "Env": [
- "broker__defaultMaxDeliveryAttempts=3",
- "broker__defaultEventTimeToLiveInSeconds=1800"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
-}
-```
-
-The following example sets up a WebHook subscription with maxNumberOfAttempts = 3 and an event TTL of 30 minutes:
-
-```json
-{
- "properties": {
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your_webhook_url>",
- "eventDeliverySchema": "eventgridschema"
- }
- },
- "retryPolicy": {
- "eventExpiryInMinutes": 30,
- "maxDeliveryAttempts": 3
- }
- }
-}
-```
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/event-handlers.md
- Title: Event Handlers and destinations - Azure Event Grid IoT Edge | Microsoft Docs
-description: Event Handlers and destinations in Event Grid on Edge
- Previously updated : 02/15/2022
-# Event Handlers and destinations in Event Grid on Edge
-
-An event handler is the place where the event is sent for further action or processing. With the Event Grid on Edge module, the event handler can be on the same edge device, another device, or in the cloud. You can use any WebHook to handle events, or send events to one of the native handlers like Azure Event Grid.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-This article provides information on how to configure each.
-
-## WebHook
-
-To publish to a WebHook endpoint, set the `endpointType` to `WebHook` and provide:
-
-* endpointUrl: The WebHook endpoint URL
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-webhook-endpoint>"
- }
- }
- }
- }
- ```
-
-## Azure Event Grid
-
-To publish to an Azure Event Grid cloud endpoint, set the `endpointType` to `eventGrid` and provide:
-
-* endpointUrl: Event Grid Topic URL in the cloud
-* sasKey: Event Grid Topic's SAS key
-* topicName: Name to stamp all outgoing events to Event Grid. Topic name is useful when posting to an Event Grid Domain topic.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventGrid",
- "properties": {
- "endpointUrl": "<your-event-grid-cloud-topic-endpoint-url>?api-version=2018-01-01",
- "sasKey": "<your-event-grid-topic-saskey>",
- "topicName": null
- }
- }
- }
- }
- ```
-
-## IoT Edge Hub
-
-To publish to an Edge Hub module, set the `endpointType` to `edgeHub` and provide:
-
-* outputName: The output on which the Event Grid module will route events that match this subscription to edgeHub. For example, events that match the below subscription will be written to /messages/modules/eventgridmodule/outputs/sampleSub4.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "edgeHub",
- "properties": {
- "outputName": "sampleSub4"
- }
- }
- }
- }
- ```
-
-## Event Hubs
-
-To publish to an Event Hub, set the `endpointType` to `eventHub` and provide:
-
-* connectionString: Connection string for the specific Event Hub you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity-specific. Using a namespace connection string will not work. You can generate an entity-specific connection string by navigating to the specific Event Hub you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity-specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventHub",
- "properties": {
- "connectionString": "<your-event-hub-connection-string>"
- }
- }
- }
- }
- ```
-
-## Service Bus Queues
-
-To publish to a Service Bus Queue, set the `endpointType` to `serviceBusQueue` and provide:
-
-* connectionString: Connection string for the specific Service Bus Queue you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity-specific. Using a namespace connection string will not work. Generate an entity-specific connection string by navigating to the specific Service Bus queue you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity-specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "serviceBusQueue",
- "properties": {
- "connectionString": "<your-service-bus-queue-connection-string>"
- }
- }
- }
- }
- ```
-
-## Service Bus Topics
-
-To publish to a Service Bus Topic, set the `endpointType` to `serviceBusTopic` and provide:
-
-* connectionString: Connection string for the specific Service Bus Topic you're targeting generated via a Shared Access Policy.
-
- >[!NOTE]
- > The connection string must be entity-specific. Using a namespace connection string will not work. Generate an entity-specific connection string by navigating to the specific Service Bus topic you would like to publish to in the Azure portal and clicking **Shared access policies** to generate a new entity-specific connection string.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "serviceBusTopic",
- "properties": {
- "connectionString": "<your-service-bus-topic-connection-string>"
- }
- }
- }
- }
- ```
-
-## Storage Queues
-
-To publish to a Storage Queue, set the `endpointType` to `storageQueue` and provide:
-
-* queueName: Name of the Storage Queue you're publishing to.
-* connectionString: Connection string for the Storage Account the Storage Queue is in.
-
- >[!NOTE]
- > Unlike Event Hubs, Service Bus queues, and Service Bus topics, the connection string used for Storage queues is not entity-specific. Instead, it must be the connection string for the storage account.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "storageQueue",
- "properties": {
- "queueName": "<your-storage-queue-name>",
- "connectionString": "<your-storage-account-connection-string>"
- }
- }
- }
- }
- ```
event-grid Event Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/event-schemas.md
- Title: Event schemas - Azure Event Grid IoT Edge | Microsoft Docs
-description: Event schemas in Event Grid on IoT Edge.
- Previously updated : 02/15/2022
-# Event schemas
-
-The Event Grid module accepts and delivers events in JSON format. Event Grid currently supports three schemas:
-
-* **EventGridSchema**
-* **CustomSchema**
-* **CloudEventSchema**
-
-You can configure the schema that a publisher must conform to during topic creation. If unspecified, it defaults to **EventGridSchema**. Events that don't conform to the expected schema will be rejected.
-
-Subscribers can also configure the schema in which they want the events delivered. If unspecified, it defaults to the topic's schema.
-Currently, a subscriber's delivery schema has to match its topic's input schema.
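-
-For example, the input schema is chosen with the `inputschema` property in the topic-creation payload, as the tutorials in this area show for `eventGridSchema`. A minimal sketch; the value `customEventSchema` for a custom-schema topic is an assumption based on the schema names above:
-
-```json
-{
-    "name": "sampleCustomTopic",
-    "properties": {
-        "inputschema": "customEventSchema"
-    }
-}
-```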
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## EventGrid schema
-
-EventGrid schema consists of a set of required properties that a publishing entity must conform to. Each publisher has to populate the top-level fields.
-
-```json
-[
- {
- "topic": string,
- "subject": string,
- "id": string,
- "eventType": string,
- "eventTime": string,
- "data":{
- object-unique-to-each-publisher
- },
- "dataVersion": string,
- "metadataVersion": string
- }
-]
-```
-
-### EventGrid schema properties
-
-All events have the following top-level data:
-
-| Property | Type | Required | Description |
-| -- | - | -- |--
-| topic | string | No | Should match the topic on which it's published. Event Grid populates it with the name of the topic on which it's published if unspecified. |
-| subject | string | Yes | Publisher-defined path to the event subject. |
-| eventType | string | Yes | Event type for this event source, for example, BlobCreated. |
-| eventTime | string | Yes | The time the event is generated based on the provider's UTC time. |
-| id | string | No | Unique identifier for the event. |
-| data | object | No | Used to capture event data that's specific to the publishing entity. |
-| dataVersion | string | Yes | The schema version of the data object. The publisher defines the schema version. |
-| metadataVersion | string | No | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
-
-### Example - EventGrid schema event
-
-```json
-[
- {
- "id": "1807",
- "eventType": "recordInserted",
- "subject": "myapp/vehicles/motorcycles",
- "eventTime": "2017-08-10T21:03:07+00:00",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- },
- "dataVersion": "1.0"
- }
-]
-```
-
-## CustomEvent schema
-
-In custom schema, there are no mandatory properties that are enforced like the EventGrid schema. The publishing entity controls the event schema entirely. It provides maximum flexibility and enables scenarios where you have an event-based system already in place and would like to reuse existing events, or don't want to be tied down to a specific schema.
-
-### Custom schema properties
-
-No mandatory properties. It's up to the publishing entity to determine the payload.
-
-### Example - custom schema event
-
-```json
-[
- {
- "eventdata": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
-]
-```
-
-## CloudEvent schema
-
-In addition to the above schemas, Event Grid natively supports events in the [CloudEvents JSON schema](https://github.com/cloudevents/spec/blob/main/cloudevents/formats/json-format.md). CloudEvents is an open specification for describing event data. It simplifies interoperability by providing a common event schema for publishing and consuming events. It's part of [CNCF](https://www.cncf.io/), and the currently available version is 1.0-rc1.
-
-### CloudEvent schema properties
-
-Refer to [CloudEvents specification](https://github.com/cloudevents/spec/blob/main/cloudevents/formats/json-format.md#3-envelope) on the mandatory envelope properties.
-
-### Example - cloud event
-```json
-[{
- "id": "1807",
- "type": "recordInserted",
- "source": "myapp/vehicles/motorcycles",
- "time": "2017-08-10T21:03:07+00:00",
- "datacontenttype": "application/json",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- },
- "dataVersion": "1.0",
- "specVersion": "1.0-rc1"
-}]
-```
event-grid Forward Events Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/forward-events-cloud.md
- Title: Forward edge events to Event Grid cloud - Azure Event Grid IoT Edge | Microsoft Docs
-description: Forward edge events to Event Grid cloud
- Previously updated : 02/15/2022
-# Tutorial: Forward events to Event Grid cloud
-
-This article walks through all the steps needed to forward edge events to Event Grid in the Azure cloud. You might want to do it for the following reasons:
-
-* React to edge events in the cloud.
-* Forward events to Event Grid in the cloud and use Azure Event Hubs or Azure Storage queues to buffer events before processing them in the cloud.
-
- To complete this tutorial, you need to have an understanding of Event Grid concepts on [edge](concepts.md) and [Azure](../concepts.md). For more destination types, see [event handlers](event-handlers.md).
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## Prerequisites
-In order to complete this tutorial, you need:
-
-* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one.
-* **Azure IoT Hub and IoT Edge device** - Follow the steps in the quick start for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one.
-
-## Create Event Grid topic and subscription in cloud
-
-Create an Event Grid topic and subscription in the cloud by following [this tutorial](../custom-event-quickstart-portal.md). Note down the `topicURL`, `sasKey`, and `topicName` of the newly created topic; you'll use them later in the tutorial.
-
-For example, if you created a topic named `testegcloudtopic` in West US, the values would look something like:
-
-* **TopicUrl**: `https://testegcloudtopic.westus2-1.eventgrid.azure.net/api/events`
-* **TopicName**: `testegcloudtopic`
-* **SasKey**: Available under **AccessKey** of your topic. Use **key1**.
-
-## Create Event Grid topic at the edge
-
-1. Create topic3.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "name": "sampleTopic3",
- "properties": {
- "inputschema": "eventGridSchema"
- }
- }
- ```
-1. Run the following command to create the topic. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @topic3.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3?api-version=2019-01-01-preview
- ```
-1. Run the following command to verify that the topic was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- [
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic3",
- "name": "sampleTopic3",
- "type": "Microsoft.EventGrid/topics",
- "properties": {
- "endpoint": "https://<edge-vm-ip>:4438/topics/sampleTopic3/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema"
- }
- }
- ]
- ```
-
-## Create Event Grid subscription at the edge
--
-1. Create subscription3.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventGrid",
- "properties": {
- "endpointUrl": "<your-event-grid-cloud-topic-endpoint-url>?api-version=2018-01-01",
- "sasKey": "<your-event-grid-topic-saskey>",
- "topicName": null
- }
- }
- }
- }
- ```
-
- >[!NOTE]
- > The **endpointUrl** specifies the Event Grid topic URL in the cloud. The **sasKey** refers to the Event Grid cloud topic's key. The value in **topicName** will be used to stamp all outgoing events to Event Grid. This can be useful when posting to an Event Grid domain topic. For more information about Event Grid domain topics, see [Event domains](../event-domains.md).
-
- For example,
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "eventGrid",
- "properties": {
- "endpointUrl": "https://testegcloudtopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01",
- "sasKey": "<your-event-grid-topic-saskey>",
- "topicName": null
- }
- }
- }
- }
- ```
-
-2. Run the following command to create the subscription. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @subscription3.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3/eventSubscriptions/sampleSubscription3?api-version=2019-01-01-preview
- ```
-
-3. Run the following command to verify that the subscription was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3/eventSubscriptions/sampleSubscription3?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic3/eventSubscriptions/sampleSubscription3",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "name": "sampleSubscription3",
- "properties": {
- "Topic": "sampleTopic3",
- "destination": {
- "endpointType": "eventGrid",
- "properties": {
- "endpointUrl": "https://testegcloudtopic.westus2-1.eventgrid.azure.net/api/events?api-version=2018-01-01",
- "sasKey": "<your-event-grid-topic-saskey>",
- "topicName": null
- }
- }
- }
- }
- ```
-
-## Publish an event at the edge
-
-1. Create event3.json with the following content. See [API documentation](api.md) for details about the payload.
-
- ```json
- [
- {
- "id": "eventId-egcloud-0",
- "eventType": "recordInserted",
- "subject": "myapp/vehicles/motorcycles",
- "eventTime": "2019-07-28T21:03:07+00:00",
- "dataVersion": "1.0",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
- ]
- ```
-
-1. Run the following command:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X POST -g -d @event3.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3/events?api-version=2019-01-01-preview
- ```
-
-## Verify edge event in cloud
-
-For information on viewing events delivered by the cloud topic, see the [tutorial](../custom-event-quickstart-portal.md).
-
-## Clean up resources
-
-* Run the following command to delete the topic and all its subscriptions:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X DELETE https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic3?api-version=2019-01-01-preview
- ```
-
-* Delete the topic and subscriptions created in the cloud (Azure Event Grid) as well.
-
-## Next steps
-
-In this tutorial, you published an event on the edge and forwarded it to Event Grid in the Azure cloud. Now that you know the basic steps to forward to Event Grid in the cloud:
-
-* To troubleshoot issues with using Azure Event Grid on IoT Edge, see [Troubleshooting guide](troubleshoot.md).
-* Forward events to IoTHub by following this [tutorial](forward-events-iothub.md)
-* Forward events to Webhook in the cloud by following this [tutorial](pub-sub-events-webhook-cloud.md)
-* [Monitor topics and subscriptions on the edge](monitor-topics-subscriptions.md)
event-grid Forward Events Iothub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/forward-events-iothub.md
- Title: Forward Event Grid events to IoTHub - Azure Event Grid IoT Edge | Microsoft Docs
-description: Forward Event Grid events to IoTHub
- Previously updated : 02/15/2022
-# Tutorial: Forward events to IoTHub
-
-This article walks through all the steps needed to forward Event Grid events to IoTHub and other IoT Edge modules using routes. You might want to do it for the following reasons:
-
-* Continue to use any existing investments already in place with edgeHub's routing
-* Prefer to route all events from a device only via IoT Hub
--
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-To complete this tutorial, you need to understand the following concepts:
--- [Event Grid Concepts](concepts.md)-- [IoT Edge hub](../../iot-edge/module-composition.md) -
-## Prerequisites
-In order to complete this tutorial, you need:
-
-* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one.
-* **Azure IoT Hub and IoT Edge device** - Follow the steps in the quick start for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one.
--
-## Create topic
-
-As a publisher of an event, you need to create an Event Grid topic. The topic provides an endpoint to which publishers send events.
-
-1. Create topic4.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "name": "sampleTopic4",
- "properties": {
- "inputschema": "eventGridSchema"
- }
- }
- ```
-1. Run the following command to create the topic. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @topic4.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4?api-version=2019-01-01-preview
- ```
-
-1. Run the following command to verify that the topic was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- [
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic4",
- "name": "sampleTopic4",
- "type": "Microsoft.EventGrid/topics",
- "properties": {
- "endpoint": "https://<edge-vm-ip>:4438/topics/sampleTopic4/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema"
- }
- }
- ]
- ```
-
-## Create event subscription
-
-Subscribers can register for events published to a topic. To receive any event, they need to create an Event Grid subscription on a topic of interest.
--
-1. Create subscription4.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "edgeHub",
- "properties": {
- "outputName": "sampleSub4"
- }
- }
- }
- }
- ```
-
- >[!NOTE]
- > The `endpointType` specifies that the subscriber is `edgeHub`. The `outputName` specifies the output on which the Event Grid module will route events that match this subscription to edgeHub. For example, events that match the above subscription will be written to `/messages/modules/eventgridmodule/outputs/sampleSub4`.
-2. Run the following command to create the subscription. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @subscription4.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4/eventSubscriptions/sampleSubscription4?api-version=2019-01-01-preview
- ```
-3. Run the following command to verify that the subscription was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4/eventSubscriptions/sampleSubscription4?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic4/eventSubscriptions/sampleSubscription4",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "name": "sampleSubscription4",
- "properties": {
- "Topic": "sampleTopic4",
- "destination": {
- "endpointType": "edgeHub",
- "properties": {
- "outputName": "sampleSub4"
- }
- }
- }
- }
- ```
-
-## Set up an edge hub route
-
-Update the edge hub's routes so that the event subscription's events are forwarded to IoTHub as follows:
-
-1. Sign in to the [Azure portal](https://portal.azure.com)
-1. Navigate to the **IoT Hub**.
-1. Select **IoT Edge** from the menu
-1. Select the ID of the target device from the list of devices.
-1. Select **Set Modules**.
-1. Select **Next** to move to the routes section.
-1. In the routes section, add a new route:
-
- ```sh
- "fromEventGridToIoTHub":"FROM /messages/modules/eventgridmodule/outputs/sampleSub4 INTO $upstream"
- ```
-
- For example,
-
- ```json
- {
- "routes": {
- "fromEventGridToIoTHub": "FROM /messages/modules/eventgridmodule/outputs/sampleSub4 INTO $upstream"
- }
- }
- ```
-
- >[!NOTE]
- > The above route forwards any events that match this subscription to the IoT hub. You can use the [Edge hub routing](../../iot-edge/module-composition.md) features to further filter and route the Event Grid events to other IoT Edge modules.
-
-## Set up IoT Hub route
-
-See the [IoT Hub routing tutorial](../../iot-hub/tutorial-routing.md) to set up a route from the IoT hub so that you can view events forwarded from the Event Grid module. Use `true` for the query to keep the tutorial simple.
---
-## Publish an event
-
-1. Create event4.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- [
- {
- "id": "eventId-iothub-1",
- "eventType": "recordInserted",
- "subject": "myapp/vehicles/motorcycles",
- "eventTime": "2019-07-28T21:03:07+00:00",
- "dataVersion": "1.0",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
- ]
- ```
-
-1. Run the following command to publish the event:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X POST -g -d @event4.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4/events?api-version=2019-01-01-preview
- ```
-
-## Verify event delivery
-
-See the IoT Hub [routing tutorial](../../iot-hub/tutorial-routing.md) for the steps to view the events.
-
-## Clean up resources
-
-* Run the following command to delete the topic and all its subscriptions at the edge:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X DELETE https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic4?api-version=2019-01-01-preview
- ```
-* Delete any resources created while setting up IoTHub routing in the cloud as well.
-
-## Next steps
-
-In this tutorial, you created an Event Grid topic and an edge hub subscription, and published events. Now that you know the basic steps to forward to an edge hub, see the following articles:
-
-* To troubleshoot issues with using Azure Event Grid on IoT Edge, see [Troubleshooting guide](troubleshoot.md).
-* Use [edge hub](../../iot-edge/module-composition.md) route filters to partition events
-* Set up persistence of the Event Grid module on [Linux](persist-state-linux.md) or [Windows](persist-state-windows.md)
-* Follow [documentation](configure-client-auth.md) to configure client authentication
-* Forward events to Azure Event Grid in the cloud by following this [tutorial](forward-events-cloud.md)
-* [Monitor topics and subscriptions on the edge](monitor-topics-subscriptions.md)
event-grid Monitor Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/monitor-topics-subscriptions.md
- Title: Monitor topics and event subscriptions - Azure Event Grid IoT Edge | Microsoft Docs
-description: Monitor topics and event subscriptions
Previously updated : 05/10/2021
-# Monitor topics and event subscriptions
-
-Event Grid on Edge exposes a number of metrics for topics and event subscriptions in the [Prometheus exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/). This article describes the available metrics and how to enable them.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## Enable metrics
-
-Configure the module to emit metrics by setting the `metrics__reporterType` environment variable to `prometheus` in the container create options:
-
- ```json
- {
- "Env": [
- "metrics__reporterType=prometheus"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-
-Metrics are available at `5888/metrics` of the module for HTTP and `4438/metrics` for HTTPS. For example, `http://<modulename>:5888/metrics?api-version=2019-01-01-preview` for HTTP. A metrics module can then poll the endpoint to collect metrics, as in this [example architecture](https://github.com/veyalla/ehm).
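-
-For example, a collector can poll the module directly. A minimal sketch with curl over HTTPS; `-k` skips server certificate validation and is reasonable only for a quick test:
-
-```sh
-# Scrape Prometheus-format metrics from the module
-curl -k -X GET -g https://<your-edge-device-public-ip-here>:4438/metrics?api-version=2019-01-01-preview
-```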
-
-## Available metrics
-
-Both topics and event subscriptions emit metrics to give you insights into event delivery and module performance.
-
-### Topic metrics
-
-| Metric | Description |
-| | -- |
-| EventsReceived | Number of events published to the topic
-| UnmatchedEvents | Number of events published to the topic that do not match an Event Subscription and are dropped
-| SuccessRequests | Number of inbound publish requests received by the topic
-| SystemErrorRequests | Number of inbound publish requests that failed due to an internal system error
-| UserErrorRequests | Number of inbound publish requests that failed due to user error, such as malformed JSON
-| SuccessRequestLatencyMs | Publish request response latency in milliseconds
--
-### Event subscription metrics
-
-| Metric | Description |
-| | -- |
-| DeliverySuccessCounts | Number of events successfully delivered to the configured endpoint
-| DeliveryFailureCounts | Number of events that failed to be delivered to the configured endpoint
-| DeliverySuccessLatencyMs | Latency of events successfully delivered in milliseconds
-| DeliveryFailureLatencyMs | Latency of events delivery failures in milliseconds
-| SystemDelayForFirstAttemptMs | System delay of events before first delivery attempt in milliseconds
-| DeliveryAttemptsCount | Number of event delivery attempts - success and failure
-| ExpiredCounts | Number of events that expired and were not delivered to the configured endpoint
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/overview.md
- Title: Event-driven architectures on the edge - Azure Event Grid on IoT Edge
-description: Use Azure Event Grid as a module on IoT Edge to forward events between modules, edge devices, and the cloud.
- Previously updated : 02/15/2022
-# What is Azure Event Grid on Azure IoT Edge?
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
-
-Event Grid on IoT Edge brings the power and flexibility of Azure Event Grid to the edge. Create topics, publish events, and subscribe multiple destinations, whether they're modules on the same device, other edge devices, or services in the cloud.
-
-As in the cloud, the Event Grid on IoT Edge module handles routing, filtering, and reliable delivery of events at scale. Filter events to ensure that only relevant events are sent to different event handlers using advanced string, numerical, and boolean filters. Retry logic makes sure that the event reaches the target destination even if the destination isn't available at the time of publish. It allows you to use Event Grid on IoT Edge as a powerful store-and-forward mechanism.
-
-Event Grid on IoT Edge supports both CloudEvents v1.0 and custom event schemas. It also supports the same Pub/Sub semantics as Event Grid in the cloud for easy interoperability.
-
-This article provides an overview of Azure Event Grid on IoT Edge. For step-by-step instructions to use this module on edge, see [Publish, subscribe to events locally](pub-sub-events-webhook-local.md).
-
-![Event Grid on IoT Edge model of sources and handlers](../media/edge-overview/functional-model.png)
-
-This image shows some of the ways you can use Event Grid on IoT Edge, and isn't a comprehensive list of supported functionality.
-
-## When to use Event Grid on IoT Edge
-
-Event Grid on IoT Edge provides an easy-to-use, reliable eventing model between the edge and the cloud.
-
-Event Grid on IoT Edge is built with a symmetrical runtime surface area to the Azure cloud service, so you can use the same events and API calls wherever you need. Whether you do pub/sub in the cloud, on the edge, or between the two, Event Grid on IoT Edge can now be your one go-to solution.
-
-Use Event Grid on IoT Edge to trigger simple workflows between modules. For example, create a topic and publish "storage blob created" events from your storage module to the topic. You can then subscribe one or several functions or custom modules to that topic.
-
-Extend your functionality between edge devices. If you're publishing blob module events and want to use the computational power of multiple nearby edge devices, create cross-device subscriptions.
-
-Finally, connect to the cloud. If your blob module events need to be periodically synced to the cloud, if you want to use the greater compute available in the cloud, or if you want to send processed data up, create additional cloud service subscriptions.
-
-Event Grid on IoT Edge provides a flexible and reliable decoupled eventing architecture.
-
-## Event sources
-
-Much like in the cloud, Event Grid on IoT Edge allows direct integration between modules to build event-driven architectures. Currently, the events can be sent to Event Grid on IoT Edge from:
-
-* Azure Blob Storage on IoT Edge
-* CloudEvents sources
-* Custom modules & containers via HTTP POST
-
-## Event handlers
-
-Event Grid on IoT Edge is built to send events to anywhere you want. Currently, the following destinations are supported:
-
-* Other modules including IoT Hub, functions, and custom modules
-* Other edge devices
-* WebHooks
-* Azure Event Grid cloud service
-* Event Hubs
-* Service Bus Queues
-* Service Bus Topics
-* Storage Queues
-
-## Supported environments
-Currently, Windows 64-bit, Linux 64-bit, and ARM 32-bit environments are supported.
-
-## Concepts
-
-There are five concepts in Azure Event Grid that let you get started:
-
-* **Events** - What happened.
-* **Event sources** - Where the event took place.
-* **Topics** - The endpoint where publishers send events.
-* **Event subscriptions** - The endpoint or built-in mechanism to route events, sometimes to more than one handler. Subscriptions are also used by handlers to intelligently filter incoming events.
-* **Event handlers** - The app or service that reacts to the event.
-
-## Cost
-
-Event Grid on IoT Edge is free during public preview.
-
-## Issues
-Report any issues with using Event Grid on IoT Edge at [https://github.com/Azure/event-grid-iot-edge/issues](https://github.com/Azure/event-grid-iot-edge/issues).
-
-## Next steps
-
-* [Publish, subscribe to events locally](pub-sub-events-webhook-local.md)
-* [Publish, subscribe to events in cloud](pub-sub-events-webhook-cloud.md)
-* [Forward events to Event Grid cloud](forward-events-cloud.md)
-* [Forward events to IoT Hub](forward-events-iothub.md)
-* [React to Blob Storage events locally](react-blob-storage-events-locally.md)
event-grid Persist State Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/persist-state-linux.md
- Title: Persist state in Linux - Azure Event Grid IoT Edge | Microsoft Docs
-description: Persist metadata in Linux
----- Previously updated : 05/10/2021---
-# Persist state in Linux
-
-Topics and subscriptions created in the Event Grid module are stored in the container file system by default. Without persistence, if the module is redeployed, all the metadata created would be lost. To preserve the data across deployments and restarts, you need to persist the data outside the container file system.
-
-By default, only metadata is persisted; events are stored in memory for improved performance. Follow the [Persist events](#persist-events) section to enable event persistence as well.
-
-This article provides the steps to deploy the Event Grid module with persistence in Linux deployments.
-
-> [!NOTE]
->The Event Grid module runs as a low-privileged user with UID `2000` and name `eventgriduser`.
-
-## Persistence via volume mount
-
- [Docker volumes](https://docs.docker.com/storage/volumes/) are used to preserve data across deployments. You can let Docker automatically create a named volume as part of deploying the Event Grid module. This is the simplest option. You can specify the volume name to be created in the **Binds** section as follows:
-
-```json
- {
- "HostConfig": {
- "Binds": [
- "<your-volume-name-here>:/app/metadataDb"
- ]
- }
- }
-```
-
->[!IMPORTANT]
->Do not change the second part of the bind value. It points to a specific location within the module. For the Event Grid module on Linux, it has to be **/app/metadataDb**.
-
-For example, the following configuration creates two volumes: **egmetadataDbVol**, where metadata is persisted, and **egdataDbVol**, where events are persisted.
-
-```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "egmetadataDbVol:/app/metadataDb",
- "egdataDbVol:/app/eventsDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
-}
-```
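-
-After you deploy the module, you can optionally confirm that the volumes were created by inspecting them with the Docker CLI on the edge device. This check assumes the default Moby engine on Linux and the volume names from the preceding example:
-
-```sh
-sudo docker volume inspect egmetadataDbVol egdataDbVol
-```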
-
-Instead of mounting a volume, you can create a directory on the host system and mount that directory.
-
-## Persistence via host directory mount
-
-Instead of a Docker volume, you can mount a host folder.
-
-1. First create a user with name **eventgriduser** and ID **2000** on the host machine by running the following command:
-
- ```sh
- sudo useradd -u 2000 eventgriduser
- ```
-1. Create a directory on the host file system by running the following command.
-
- ```sh
- mkdir <your-directory-name-here>
- ```
-
- For example, running the following command will create a directory called **myhostdir**.
-
- ```sh
- mkdir /myhostdir
- ```
-1. Next, make **eventgriduser** the owner of this directory by running the following command.
-
- ```sh
- sudo chown eventgriduser:eventgriduser -hR <your-directory-name-here>
- ```
-
- For example,
-
- ```sh
- sudo chown eventgriduser:eventgriduser -hR /myhostdir
- ```
-1. Use **Binds** to mount the directory and redeploy the Event Grid module from the Azure portal.
-
- ```json
- {
- "HostConfig": {
- "Binds": [
- "<your-directory-name-here>:/app/metadataDb",
- "<your-directory-name-here>:/app/eventsDb",
- ]
- }
- }
- ```
-
- For example,
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "/myhostdir:/app/metadataDb",
- "/myhostdir2:/app/eventsDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-
- >[!IMPORTANT]
- >Do not change the second part of the bind value. It points to a specific location within the module. For the Event Grid module on Linux, it has to be **/app/metadataDb** and **/app/eventsDb**.
--
-## Persist events
-
-To enable event persistence, you must first enable metadata persistence via either a volume mount or a host directory mount, as described in the preceding sections.
-
-Important things to note about persisting events:
-
-* Persisting events is enabled on a per-event-subscription basis and is opt-in once a volume or directory has been mounted.
-* Event persistence is configured on an Event Subscription at creation time and cannot be modified once the Event Subscription is created. To toggle event persistence, you must delete and re-create the Event Subscription.
-* Persisting events is almost always slower than in-memory operations; however, the speed difference is highly dependent on the characteristics of the drive. The tradeoff between speed and reliability is inherent to all messaging systems but generally only becomes noticeable at large scale.
-
-To enable event persistence on an Event Subscription, set the `isPersisted` property of `persistencePolicy` to `true`:
-
- ```json
- {
- "properties": {
- "persistencePolicy": {
- "isPersisted": "true"
- },
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-webhook-url>"
- }
- }
- }
- }
- ```
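-
-For example, if you save the payload above as persistedsubscription.json, you can create the subscription with the same API pattern used in the tutorials. The topic name sampleTopic1 and the subscription name are placeholders, and the topic is assumed to already exist:
-
-```sh
-curl -k -H "Content-Type: application/json" -X PUT -g -d @persistedsubscription.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1/eventSubscriptions/persistedSubscription1?api-version=2019-01-01-preview
-```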
event-grid Persist State Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/persist-state-windows.md
- Title: Persist state in Windows - Azure Event Grid IoT Edge | Microsoft Docs
-description: Persist state in Windows
----- Previously updated : 02/15/2022---
-# Persist state in Windows
-
-Topics and subscriptions created in the Event Grid module are stored in the container file system by default. Without persistence, if the module is redeployed, all the metadata created would be lost. To preserve the data across deployments and restarts, you need to persist the data outside the container file system.
-
-By default, only metadata is persisted; events are stored in memory for improved performance. Follow the [Persist events](#persist-events) section to enable event persistence as well.
-
-This article provides the steps needed to deploy the Event Grid module with persistence in Windows deployments.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-> [!NOTE]
->The Event Grid module runs as a low-privileged user **ContainerUser** in Windows.
-
-## Persistence via volume mount
-
-[Docker volumes](https://docs.docker.com/storage/volumes/) are used to preserve data across deployments. To mount a volume, you need to create it by using Docker commands, grant permissions so that the container can read and write to it, and then deploy the module.
-
-1. Create a volume by running the following command:
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine volume create <your-volume-name-here>
- ```
-
- For example,
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine volume create myeventgridvol
- ```
-1. Get the host directory that the volume maps to by running the following command:
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine volume inspect <your-volume-name-here>
- ```
-
- For example,
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine volume inspect myeventgridvol
- ```
-
- Sample output:
-
- ```json
- [
- {
- "CreatedAt": "2019-07-30T21:20:59Z",
- "Driver": "local",
- "Labels": {},
- "Mountpoint": "C:\\ProgramData\\iotedge-moby\u000bolumes\\myeventgridvol\\_data",
- "Name": "myeventgridvol",
- "Options": {},
- "Scope": "local"
- }
- ]
- ```
-1. Add the **Users** group to the value pointed to by **Mountpoint** as follows:
- 1. Launch File Explorer.
- 1. Navigate to the folder pointed to by **Mountpoint**.
- 1. Right-click, and then select **Properties**.
- 1. Select **Security**.
- 1. Under *Group or user names*, select **Edit**.
- 1. Select **Add**, enter `Users`, select **Check Names**, and then select **OK**.
- 1. Under *Permissions for Users*, select **Modify**, and then select **OK**.
-
- You can also script this permission change with `icacls`, as shown after this procedure.
-1. Use **Binds** to mount this volume and redeploy the Event Grid module from the Azure portal.
-
- For example,
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "<your-volume-name-here>:C:\\app\\metadataDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-
- >[!IMPORTANT]
- >Do not change the second part of the bind value. It points to a specific location in the module. For the Event Grid module on Windows, it has to be **C:\\app\\metadataDb**.
--
- For example,
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "myeventgridvol:C:\\app\\metadataDb",
- "C:\\myhostdir2:C:\\app\\eventsDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
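-
-If you prefer to script the permission change from step 3 of the volume-mount procedure instead of using File Explorer, an `icacls` command along these lines is one possible approach (a sketch; replace the path with your own **Mountpoint** value):
-
-```sh
-icacls "C:\ProgramData\iotedge-moby\volumes\myeventgridvol\_data" /grant "Users:(OI)(CI)M" /T
-```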
-
-## Persistence via host directory mount
-
-Instead of mounting a volume, you can create a directory on the host system and mount that directory.
-
-1. Create a directory on the host filesystem by running the following command.
-
- ```sh
- mkdir <your-directory-name-here>
- ```
-
- For example,
-
- ```sh
- mkdir C:\myhostdir
- ```
-1. Use **Binds** to mount your directory and redeploy the Event Grid module from the Azure portal.
-
- ```json
- {
- "HostConfig": {
- "Binds": [
- "<your-directory-name-here>:C:\\app\\metadataDb"
- ]
- }
- }
- ```
-
- >[!IMPORTANT]
- >Do not change the second part of the bind value. It points to a specific location in the module. For the Event Grid module on Windows, it has to be **C:\\app\\metadataDb**.
-
- For example,
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "Binds": [
- "C:\\myhostdir:C:\\app\\metadataDb",
- "C:\\myhostdir2:C:\\app\\eventsDb"
- ],
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-## Persist events
-
-To enable event persistence, you must first enable metadata persistence via either a volume mount or a host directory mount, as described in the preceding sections.
-
-Important things to note about persisting events:
-
-* Persisting events is enabled on a per-event-subscription basis and is opt-in once a volume or directory has been mounted.
-* Event persistence is configured on an Event Subscription at creation time and cannot be modified once the Event Subscription is created. To toggle event persistence, you must delete and re-create the Event Subscription.
-* Persisting events is almost always slower than in-memory operations; however, the speed difference is highly dependent on the characteristics of the drive. The tradeoff between speed and reliability is inherent to all messaging systems but only becomes noticeable at large scale.
-
-To enable event persistence on an Event Subscription, set the `isPersisted` property of `persistencePolicy` to `true`:
-
- ```json
- {
- "properties": {
- "persistencePolicy": {
- "isPersisted": "true"
- },
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-webhook-url>"
- }
- }
- }
- }
- ```
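-
-After you create a persisted subscription, you can read it back to confirm that the persistence policy took effect, using the same GET pattern as in the tutorials. The topic and subscription names below are placeholders:
-
-```sh
-curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/<your-topic-name>/eventSubscriptions/<your-subscription-name>?api-version=2019-01-01-preview
-```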
event-grid Pub Sub Events Webhook Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/pub-sub-events-webhook-cloud.md
- Title: Publish, subscribe to events in cloud - Azure Event Grid IoT Edge | Microsoft Docs
-description: Publish, subscribe to events in cloud using Webhook with Event Grid on IoT Edge
----- Previously updated : 02/15/2022----
-# Tutorial: Publish, subscribe to events in cloud
-
-This article walks through all the steps needed to publish and subscribe to events using Event Grid on IoT Edge. This tutorial uses an Azure function as the event handler. For more destination types, see [event handlers](event-handlers.md).
-
-See [Event Grid Concepts](concepts.md) to understand what an Event Grid topic and subscription are before proceeding.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## Prerequisites
-In order to complete this tutorial, you need:
-
-* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one.
-* **Azure IoT Hub and IoT Edge device** - Follow the steps in the quickstart for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one.
--
-## Create an Azure function in the Azure portal
-
-Follow the steps outlined in the [tutorial](../../azure-functions/functions-get-started.md) to create an Azure function.
-
-Replace the code snippet with the following code:
-
-```csharp
-#r "Newtonsoft.Json"
-
-using System.Net;
-using Microsoft.AspNetCore.Mvc;
-using Microsoft.Extensions.Primitives;
-using Newtonsoft.Json;
-
-public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
-{
- log.LogInformation("C# HTTP trigger function processed a request.");
-
- string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
- dynamic data = JsonConvert.DeserializeObject(requestBody);
-
- log.LogInformation($"C# HTTP trigger received {data}.");
- return data != null
- ? (ActionResult)new OkResult()
- : new BadRequestObjectResult("Please pass in the request body");
-}
-```
-
-In your new function, select **Get function URL** at the top right, select default (**Function key**), and then select **Copy**. You use the function URL value later in the tutorial.
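-
-Optionally, you can sanity-check the function before wiring up Event Grid by posting a small test payload to the URL you copied. Given the code above, any non-empty JSON body should return an OK result:
-
-```sh
-curl -X POST -H "Content-Type: application/json" -d '{"id":"test-1"}' "<your-az-func-cloud-url>"
-```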
-
-> [!NOTE]
-> Refer to the [Azure Functions](../../azure-functions/functions-overview.md) documentation for more samples and tutorials on reacting to events and using Event Grid event triggers.
-
-## Create a topic
-
-As a publisher of an event, you need to create an Event Grid topic. A topic is an endpoint to which publishers send events.
-
-1. Create topic2.json with the following content. See our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "name": "sampleTopic2",
- "properties": {
- "inputschema": "eventGridSchema"
- }
- }
- ```
-1. Run the following command to create the topic. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @topic2.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2?api-version=2019-01-01-preview
- ```
-1. Run the following command to verify that the topic was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- [
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic2",
- "name": "sampleTopic2",
- "type": "Microsoft.EventGrid/topics",
- "properties": {
- "endpoint": "https://<edge-vm-ip>:4438/topics/sampleTopic2/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema"
- }
- }
- ]
- ```
-
-## Create an event subscription
-
-Subscribers can register for events published to a topic. To receive any event, the subscribers need to create an Event Grid subscription on a topic of interest.
--
-1. Create subscription2.json with the following content. Refer to our [API documentation](api.md) for details about the payload.
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-az-func-cloud-url>"
- }
- }
- }
- }
- ```
-
- >[!NOTE]
- > The **endpointType** specifies that the subscriber is a Webhook. The **endpointUrl** specifies the URL at which the subscriber is listening for events. This URL corresponds to the Azure Function sample you set up earlier.
-2. Run the following command to create the subscription. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @subscription2.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2/eventSubscriptions/sampleSubscription2?api-version=2019-01-01-preview
- ```
-3. Run the following command to verify that the subscription was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2/eventSubscriptions/sampleSubscription2?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic2/eventSubscriptions/sampleSubscription2",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "name": "sampleSubscription2",
- "properties": {
- "Topic": "sampleTopic2",
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "<your-az-func-cloud-url>"
- }
- }
- }
- }
- ```
-
-## Publish an event
-
-1. Create event2.json with the following content. Refer to our [API documentation](api.md) for details about the payload.
-
- ```json
- [
- {
- "id": "eventId-func-1",
- "eventType": "recordInserted",
- "subject": "myapp/vehicles/motorcycles",
- "eventTime": "2019-07-28T21:03:07+00:00",
- "dataVersion": "1.0",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
- ]
- ```
-1. Run the following command to publish the event:
-
- ```sh
- curl -k -H "Content-Type: application/json" -X POST -g -d @event2.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2/events?api-version=2019-01-01-preview
- ```
-
-## Verify event delivery
-
-You can view the event delivered in the Azure portal under the **Monitor** option of your function.
-
-## Clean up resources
-
-* Run the following command to delete the topic and all its subscriptions.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X DELETE https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic2?api-version=2019-01-01-preview
- ```
-
-* Delete the Azure function created in the Azure portal.
-
-## Next steps
-
-In this tutorial, you created an Event Grid topic, subscription, and published events. Now that you know the basic steps, see the following articles:
-
-* To troubleshoot issues with using Azure Event Grid on IoT Edge, see [Troubleshooting guide](troubleshoot.md).
-* Create/update subscription with [filters](advanced-filtering.md).
-* Set up persistence of the Event Grid module on [Linux](persist-state-linux.md) or [Windows](persist-state-windows.md)
-* Follow [documentation](configure-client-auth.md) to configure client authentication
-* Forward events to Azure Event Grid in the cloud by following this [tutorial](forward-events-cloud.md)
-* [Monitor topics and subscriptions on the edge](monitor-topics-subscriptions.md)
event-grid Pub Sub Events Webhook Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/pub-sub-events-webhook-local.md
- Title: Publish, subscribe to events locally - Azure Event Grid IoT Edge | Microsoft Docs
-description: Publish, subscribe to events locally using Webhook with Event Grid on IoT Edge
----- Previously updated : 02/15/2022---
-# Tutorial: Publish, subscribe to events locally
-
-This article walks you through all the steps needed to publish and subscribe to events using Event Grid on IoT Edge.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-> [!NOTE]
-> To learn about Azure Event Grid topics and subscriptions, see [Event Grid Concepts](concepts.md).
-
-## Prerequisites
-In order to complete this tutorial, you need:
-
-* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one.
-* **Azure IoT Hub and IoT Edge device** - Follow the steps in the quickstart for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one.
-
-## Deploy Event Grid IoT Edge module
-
-There are several ways to deploy modules to an IoT Edge device and all of them work for Azure Event Grid on IoT Edge. This article describes the steps to deploy Event Grid on IoT Edge from the Azure portal.
-
->[!NOTE]
-> In this tutorial, you deploy the Event Grid module without persistence, which means that any topics and subscriptions you create in this tutorial are deleted when you redeploy the module. For more information on how to set up persistence, see the following articles: [Persist state in Linux](persist-state-linux.md) or [Persist state in Windows](persist-state-windows.md). For production workloads, we recommend that you install the Event Grid module with persistence.
--
-### Select your IoT Edge device
-
-1. Sign in to the [Azure portal](https://portal.azure.com)
-1. Navigate to your IoT Hub.
-1. Select **IoT Edge** from the menu in the **Automatic Device Management** section.
-1. Select the ID of the target device from the list of devices.
-1. Select **Set Modules**. Keep the page open. You'll continue with the steps in the next section.
-
-### Configure a deployment manifest
-
-A deployment manifest is a JSON document that describes which modules to deploy, how data flows between the modules, and desired properties of the module twins. The Azure portal has a wizard that walks you through creating a deployment manifest, instead of building the JSON document manually. It has three steps: **Add modules**, **Specify routes**, and **Review deployment**.
-
-### Add modules
-
-1. In the **Deployment Modules** section, select **Add**
-1. From the types of modules in the drop-down list, select **IoT Edge Module**
-1. Provide the name, image, and container create options of the container:
-
- * **Name**: eventgridmodule
- * **Image URI**: `mcr.microsoft.com/azure-event-grid/iotedge:latest`
- * **Container Create Options**:
-
- [!INCLUDE [edge-module-version-update](../includes/edge-module-version-update.md)]
-
- ```json
- {
- "Env": [
- "inbound__clientAuth__clientCert__enabled=false"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
- 1. Select **Save**
- 1. Continue to the next section to add the Azure Event Grid Subscriber module before deploying them together.
-
- >[!IMPORTANT]
- > In this tutorial, you deploy the Event Grid module with client authentication disabled. For production workloads, we recommend that you enable client authentication. For more information on how to configure the Event Grid module securely, see [Security and authentication](security-authentication.md).
- >
- > If you are using an Azure VM as an edge device, add an inbound port rule to allow inbound traffic on port 4438. For instructions on adding the rule, see [How to open ports to a VM](../../virtual-machines/windows/nsg-quickstart-portal.md).
-
-
-## Deploy Event Grid Subscriber IoT Edge module
-
-This section shows you how to deploy another IoT Edge module, which acts as an event handler to which events can be delivered.
-
-### Add modules
-
-1. In the **Deployment Modules** section, select **Add** again.
-1. From the types of modules in the drop-down list, select **IoT Edge Module**
-1. Provide the name, image, and container create options of the container:
-
- * **Name**: subscriber
- * **Image URI**: `mcr.microsoft.com/azure-event-grid/iotedge-samplesubscriber:latest`
- * **Container Create Options**: None
-1. Select **Save**
-1. Select **Next** to continue to the routes section
-
- ### Set up routes
-
-Keep the default routes, and select **Next** to continue to the review section.
-
-### Submit the deployment request
-
-1. The review section shows you the JSON deployment manifest that was created based on your selections in the previous section. Confirm that you see both modules, **eventgridmodule** and **subscriber**, listed in the JSON.
-1. Review your deployment information, then select **Submit**. After you submit the deployment, you return to the **device** page.
-1. In the **Modules** section, verify that both the **eventgridmodule** and **subscriber** modules are listed. Also verify that the **Specified in deployment** and **Reported by device** columns are set to **Yes**.
-
- It may take a few moments for the module to be started on the device and then reported back to IoT Hub. Refresh the page to see an updated status.
-
-## Create a topic
-
-As a publisher of an event, you need to create an Event Grid topic. In Azure Event Grid, a topic is an endpoint to which publishers send events.
-
-1. Create topic.json with the following content. For details about the payload, see our [API documentation](api.md).
-
- ```json
- {
- "name": "sampleTopic1",
- "properties": {
- "inputschema": "eventGridSchema"
- }
- }
- ```
-
-1. Run the following command to create an Event Grid topic. Confirm that the HTTP status code returned is `200 OK`.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @topic.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1?api-version=2019-01-01-preview
- ```
-
-1. Run the following command to verify that the topic was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- [
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic1",
- "name": "sampleTopic1",
- "type": "Microsoft.EventGrid/topics",
- "properties": {
- "endpoint": "https://<edge-vm-ip>:4438/topics/sampleTopic1/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema"
- }
- }
- ]
- ```
-
-## Create an event subscription
-
-Subscribers can register for events published to a topic. To receive any event, you need to create an Event Grid subscription for a topic of interest.
--
-1. Create subscription.json with the following content. For details about the payload, see our [API documentation](api.md)
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "https://subscriber:4430"
- }
- }
- }
- }
- ```
-
- >[!NOTE]
- > The **endpointType** property specifies that the subscriber is a **Webhook**. The **endpointUrl** specifies the URL at which the subscriber is listening for events. This URL corresponds to the Azure Subscriber sample you deployed earlier.
-2. Run the following command to create a subscription for the topic. Confirm that the HTTP status code returned is `200 OK`.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @subscription.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1/eventSubscriptions/sampleSubscription1?api-version=2019-01-01-preview
- ```
-3. Run the following command to verify that the subscription was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1/eventSubscriptions/sampleSubscription1?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/sampleTopic1/eventSubscriptions/sampleSubscription1",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "name": "sampleSubscription1",
- "properties": {
- "Topic": "sampleTopic1",
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "https://subscriber:4430"
- }
- }
- }
- }
- ```
-
-## Publish an event
-
-1. Create event.json with the following content. For details about the payload, see our [API documentation](api.md).
-
- ```json
- [
- {
- "id": "eventId-func-0",
- "eventType": "recordInserted",
- "subject": "myapp/vehicles/motorcycles",
- "eventTime": "2019-07-28T21:03:07+00:00",
- "dataVersion": "1.0",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
- ]
- ```
-1. Run the following command to publish an event.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X POST -g -d @event.json https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1/events?api-version=2019-01-01-preview
- ```
-
-## Verify event delivery
-
-1. SSH or RDP into your IoT Edge VM.
-1. Check the subscriber logs:
-
- On Windows, run the following command:
-
- ```sh
- docker -H npipe:////./pipe/iotedge_moby_engine container logs subscriber
- ```
-
- On Linux, run the following command:
-
- ```sh
- sudo docker logs subscriber
- ```
-
- Sample output:
-
- ```sh
- Received Event:
- {
- "id": "eventId-func-0",
- "topic": "sampleTopic1",
- "subject": "myapp/vehicles/motorcycles",
- "eventType": "recordInserted",
- "eventTime": "2019-07-28T21:03:07+00:00",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "data": {
- "make": "Ducati",
- "model": "Monster"
- }
- }
- ```
-
-## Clean up resources
-
-* Run the following command to delete the topic and all its subscriptions.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X DELETE https://<your-edge-device-public-ip-here>:4438/topics/sampleTopic1?api-version=2019-01-01-preview
- ```
-* Delete the subscriber module from your IoT Edge device.
--
-## Next steps
-In this tutorial, you created an Event Grid topic, subscription, and published events. Now that you know the basic steps, see the following articles:
-- To troubleshoot issues with using Azure Event Grid on IoT Edge, see [Troubleshooting guide](troubleshoot.md).
-- Create/update subscription with [filters](advanced-filtering.md).
-- Enable persistence of Event Grid module on [Linux](persist-state-linux.md) or [Windows](persist-state-windows.md).
-- Follow [documentation](configure-client-auth.md) to configure client authentication.
-- Forward events to Azure Functions in the cloud by following this [tutorial](pub-sub-events-webhook-cloud.md).
-- [React to Blob Storage events on IoT Edge](react-blob-storage-events-locally.md)
-- [Monitor topics and subscriptions on the edge](monitor-topics-subscriptions.md)
event-grid React Blob Storage Events Locally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/react-blob-storage-events-locally.md
- Title: React to Blob Storage module events - Azure Event Grid IoT Edge | Microsoft Docs
-description: React to Blob Storage module events
----- Previously updated : 02/15/2022---
-# Tutorial: React to Blob Storage events on IoT Edge (Preview)
-This article shows you how to deploy the Azure Blob Storage on IoT Edge module, which acts as an Event Grid publisher, sending events to Event Grid on blob creation and blob deletion.
-
-For an overview of Azure Blob Storage on IoT Edge and its features, see [Azure Blob Storage on IoT Edge](../../iot-edge/how-to-store-data-blob.md).
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-> [!WARNING]
-> Azure Blob Storage on IoT Edge integration with Event Grid is in preview.
-
-In order to complete this tutorial, you'll need:
-
-* **Azure subscription** - Create a [free account](https://azure.microsoft.com/free) if you don't already have one.
-* **Azure IoT Hub and IoT Edge device** - Follow the steps in the quickstart for [Linux](../../iot-edge/quickstart-linux.md) or [Windows devices](../../iot-edge/quickstart.md) if you don't already have one.
-
-## Deploy Event Grid IoT Edge module
-
-There are several ways to deploy modules to an IoT Edge device and all of them work for Azure Event Grid on IoT Edge. This article describes the steps to deploy Event Grid on IoT Edge from the Azure portal.
-
->[!NOTE]
-> In this tutorial, you deploy the Event Grid module without persistence, which means that any topics and subscriptions you create in this tutorial are deleted when you redeploy the module. For more information on how to set up persistence, see the following articles: [Persist state in Linux](persist-state-linux.md) or [Persist state in Windows](persist-state-windows.md). For production workloads, we recommend that you install the Event Grid module with persistence.
--
-### Select your IoT Edge device
-
-1. Sign in to the [Azure portal](https://portal.azure.com)
-1. Navigate to your IoT Hub.
-1. Select **IoT Edge** from the menu in the **Automatic Device Management** section.
-1. Select the ID of the target device from the list of devices
-1. Select **Set Modules**. Keep the page open. You'll continue with the steps in the next section.
-
-### Configure a deployment manifest
-
-A deployment manifest is a JSON document that describes which modules to deploy, how data flows between the modules, and desired properties of the module twins. The Azure portal has a wizard that walks you through creating a deployment manifest, instead of building the JSON document manually. It has three steps: **Add modules**, **Specify routes**, and **Review deployment**.
-
-### Add modules
-
-1. In the **Deployment Modules** section, select **Add**
-1. From the types of modules in the drop-down list, select **IoT Edge Module**
-1. Provide the name, image, and container create options of the container:
-
- * **Name**: eventgridmodule
- * **Image URI**: `mcr.microsoft.com/azure-event-grid/iotedge:latest`
- * **Container Create Options**:
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=enabled",
- "inbound__clientAuth__clientCert__enabled=false"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- }
- ```
-
- 1. Select **Save**
- 1. Continue to the next section to add the Azure Event Grid Subscriber module before deploying them together.
-
- >[!IMPORTANT]
- > In this tutorial, you deploy the Event Grid module to allow both HTTP and HTTPS requests, with client authentication disabled. For production workloads, we recommend that you allow only HTTPS requests and enable client authentication for subscribers. For more information on how to configure the Event Grid module securely, see [Security and authentication](security-authentication.md).
-
-
-## Deploy Event Grid Subscriber IoT Edge module
-
-This section shows you how to deploy another IoT Edge module, which acts as an event handler to which events can be delivered.
-
-### Add modules
-
-1. In the **Deployment Modules** section, select **Add** again.
-1. From the types of modules in the drop-down list, select **IoT Edge Module**
-1. Provide the name, image, and container create options of the container:
-
- * **Name**: subscriber
- * **Image URI**: `mcr.microsoft.com/azure-event-grid/iotedge-samplesubscriber:latest`
- * **Container Create Options**: None
-1. Select **Save**
-1. Continue to the next section to add the Azure Blob Storage module
-
-## Deploy Azure Blob Storage module
-
-This section shows you how to deploy the Azure Blob Storage module, which acts as an Event Grid publisher, publishing blob created and blob deleted events.
-
-### Add modules
-
-1. In the **Deployment Modules** section, select **Add**
-2. From the types of modules in the drop-down list, select **IoT Edge Module**
-3. Provide the name, image, and container create options of the container:
-
- * **Name**: `azureblobstorageoniotedge`
- * **Image URI**: `mcr.microsoft.com/azure-blob-storage:latest`
- * **Container Create Options**:
-
- ```json
- {
- "Env":[
- "LOCAL_STORAGE_ACCOUNT_NAME=<your storage account name>",
- "LOCAL_STORAGE_ACCOUNT_KEY=<your storage account key>",
- "EVENTGRID_ENDPOINT=http://<event grid module name>:5888"
- ],
- "HostConfig":{
- "Binds":[
- "<storage mount>"
- ],
- "PortBindings":{
- "11002/tcp":[{"HostPort":"11002"}]
- }
- }
- }
- ```
-
- > [!IMPORTANT]
- > - The Blob Storage module can publish events using both HTTPS and HTTP.
- > - If you have enabled client-based authentication for Event Grid, make sure you update the value of EVENTGRID_ENDPOINT to use HTTPS, like this: `EVENTGRID_ENDPOINT=https://<event grid module name>:4438`.
- > - Also add another environment variable, `AllowUnknownCertificateAuthority=true`, to the preceding JSON. When talking to Event Grid over HTTPS, **AllowUnknownCertificateAuthority** allows the storage module to trust self-signed Event Grid server certificates. A complete **Env** example is shown after this procedure.
-
-4. Update the JSON that you copied with the following information:
-
- - Replace `<your storage account name>` with a name that you can remember. Account names should be 3 to 24 characters long, with lowercase letters and numbers. No spaces.
-
- - Replace `<your storage account key>` with a 64-byte base64 key. You can generate a key with tools like [GeneratePlus](https://generate.plus/en/base64?gp_base64_base[length]=64). You'll use these credentials to access the blob storage from other modules.
-
- - Replace `<event grid module name>` with the name of your Event Grid module.
- - Replace `<storage mount>` according to your container operating system.
- - For Linux containers, **my-volume:/blobroot**
- For Windows containers, **my-volume:C:/BlobRoot**
-
-5. Select **Save**
-6. Select **Next** to continue to the routes section
-
- > [!NOTE]
- > If you are using an Azure VM as the edge device, add an inbound port rule to allow inbound traffic on the host ports used in this tutorial: 4438, 5888, 8080, and 11002. For instructions on adding the rule, see [How to open ports to a VM](../../virtual-machines/windows/nsg-quickstart-portal.md).
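-
-As referenced in the earlier important note, here's what the **Env** section of the Blob Storage module's container create options might look like when the Event Grid module requires HTTPS with client authentication. This sketch reuses the placeholders from step 3:
-
-```json
-"Env":[
-  "LOCAL_STORAGE_ACCOUNT_NAME=<your storage account name>",
-  "LOCAL_STORAGE_ACCOUNT_KEY=<your storage account key>",
-  "EVENTGRID_ENDPOINT=https://<event grid module name>:4438",
-  "AllowUnknownCertificateAuthority=true"
-]
-```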
-
-### Set up routes
-
-Keep the default routes, and select **Next** to continue to the review section.
-
-### Review deployment
-
-1. The review section shows you the JSON deployment manifest that was created based on your selections in the previous section. Confirm that you see the following five modules being deployed: **$edgeAgent**, **$edgeHub**, **eventgridmodule**, **subscriber**, and **azureblobstorageoniotedge**.
-2. Review your deployment information, then select **Submit**.
-
-## Verify your deployment
-
-1. After you submit the deployment, you return to the IoT Edge page of your IoT hub.
-2. Select the **IoT Edge device** that you targeted with the deployment to open its details.
-3. In the device details, verify that the eventgridmodule, subscriber and azureblobstorageoniotedge modules are listed as both **Specified in deployment** and **Reported by device**.
-
- It may take a few moments for the module to be started on the device and then reported back to IoT Hub. Refresh the page to see an updated status.
-
-## Publish BlobCreated and BlobDeleted events
-
-1. The Blob Storage module automatically creates the topic **MicrosoftStorage**. Verify that it exists:
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- [
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/MicrosoftStorage",
- "name": "MicrosoftStorage",
- "type": "Microsoft.EventGrid/topics",
- "properties": {
- "endpoint": "https://<edge-vm-ip>:4438/topics/MicrosoftStorage/events?api-version=2019-01-01-preview",
- "inputSchema": "EventGridSchema"
- }
- }
- ]
- ```
-
- > [!IMPORTANT]
- > - For the HTTPS flow, if the client authentication is enabled via SAS key, then the SAS key specified earlier should be added as a header. Hence the curl request will be: `curl -k -H "Content-Type: application/json" -H "aeg-sas-key: <your SAS key>" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage?api-version=2019-01-01-preview`
- > - For the HTTPS flow, if the client authentication is enabled via certificate, the curl request will be: `curl -k -H "Content-Type: application/json" --cert <certificate file> --key <certificate private key file> -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage?api-version=2019-01-01-preview`
-
-2. Subscribers can register for events published to a topic. To receive any event, you'll need to create an Event Grid subscription for the **MicrosoftStorage** topic.
- 1. Create blobsubscription.json with the following content. For details about the payload, see our [API documentation](api.md)
-
- ```json
- {
- "properties": {
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "https://subscriber:4430"
- }
- }
- }
- }
- ```
-
- >[!NOTE]
- > The **endpointType** property specifies that the subscriber is a **Webhook**. The **endpointUrl** specifies the URL at which the subscriber is listening for events. This URL corresponds to the subscriber module you deployed earlier.
-
- 2. Run the following command to create a subscription for the topic. Confirm that the HTTP status code returned is `200 OK`.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X PUT -g -d @blobsubscription.json https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage/eventSubscriptions/sampleSubscription5?api-version=2019-01-01-preview
- ```
-
- > [!IMPORTANT]
- > - For the HTTPS flow, if the client authentication is enabled via SAS key, then the SAS key specified earlier should be added as a header. Hence the curl request will be: `curl -k -H "Content-Type: application/json" -H "aeg-sas-key: <your SAS key>" -X PUT -g -d @blobsubscription.json https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage/eventSubscriptions/sampleSubscription5?api-version=2019-01-01-preview`
- > - For the HTTPS flow, if the client authentication is enabled via certificate, the curl request will be:`curl -k -H "Content-Type: application/json" --cert <certificate file> --key <certificate private key file> -X PUT -g -d @blobsubscription.json https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage/eventSubscriptions/sampleSubscription5?api-version=2019-01-01-preview`
-
- 3. Run the following command to verify that the subscription was created successfully. An HTTP status code of 200 OK should be returned.
-
- ```sh
- curl -k -H "Content-Type: application/json" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage/eventSubscriptions/sampleSubscription5?api-version=2019-01-01-preview
- ```
-
- Sample output:
-
- ```json
- {
- "id": "/iotHubs/eg-iot-edge-hub/devices/eg-edge-device/modules/eventgridmodule/topics/MicrosoftStorage/eventSubscriptions/sampleSubscription5",
- "type": "Microsoft.EventGrid/eventSubscriptions",
- "name": "sampleSubscription5",
- "properties": {
- "Topic": "MicrosoftStorage",
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "https://subscriber:4430"
- }
- }
- }
- }
- ```
-
- > [!IMPORTANT]
- > - For the HTTPS flow, if the client authentication is enabled via SAS key, then the SAS key specified earlier should be added as a header. Hence the curl request will be: `curl -k -H "Content-Type: application/json" -H "aeg-sas-key: <your SAS key>" -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage/eventSubscriptions/sampleSubscription5?api-version=2019-01-01-preview`
- > - For the HTTPS flow, if the client authentication is enabled via certificate, the curl request will be: `curl -k -H "Content-Type: application/json" --cert <certificate file> --key <certificate private key file> -X GET -g https://<your-edge-device-public-ip-here>:4438/topics/MicrosoftStorage/eventSubscriptions/sampleSubscription5?api-version=2019-01-01-preview`
-
-3. Download [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) and [connect it to your local storage](../../iot-edge/how-to-store-data-blob.md#connect-to-your-local-storage-with-azure-storage-explorer)
-
-## Verify event delivery
-
-### Verify BlobCreated event delivery
-
-1. Upload files as block blobs to the local storage from Azure Storage Explorer, and the module will automatically publish create events.
-2. Check the subscriber logs for the create event. Follow the steps to [verify the event delivery](pub-sub-events-webhook-local.md#verify-event-delivery).
-
- Sample output:
-
- ```json
- Received Event:
- {
- "id": "d278f2aa-2558-41aa-816b-e6d8cc8fa140",
- "topic": "MicrosoftStorage",
- "subject": "/blobServices/default/containers/cont1/blobs/Team.jpg",
- "eventType": "Microsoft.Storage.BlobCreated",
- "eventTime": "2019-10-01T21:35:17.7219554Z",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "data": {
- "api": "PutBlob",
- "clientRequestId": "00000000-0000-0000-0000-000000000000",
- "requestId": "ef1c387b-4c3c-4ac0-8e04-ff73c859bfdc",
- "eTag": "0x8D746B740DA21FB",
- "url": "http://azureblobstorageoniotedge:11002/myaccount/cont1/Team.jpg",
- "contentType": "image/jpeg",
- "contentLength": 858129,
- "blobType": "BlockBlob"
- }
- }
- ```
-
-### Verify BlobDeleted event delivery
-
-1. Delete blobs from the local storage using Azure Storage Explorer, and the module will automatically publish delete events.
-2. Check the subscriber logs for the delete event. Follow the steps to [verify the event delivery](pub-sub-events-webhook-local.md#verify-event-delivery).
-
- Sample output:
-
- ```json
- Received Event:
- {
- "id": "ac669b6f-8b0a-41f3-a6be-812a3ce6ac6d",
- "topic": "MicrosoftStorage",
- "subject": "/blobServices/default/containers/cont1/blobs/Team.jpg",
- "eventType": "Microsoft.Storage.BlobDeleted",
- "eventTime": "2019-10-01T21:36:09.2562941Z",
- "dataVersion": "1.0",
- "metadataVersion": "1",
- "data": {
- "api": "DeleteBlob",
- "clientRequestId": "00000000-0000-0000-0000-000000000000",
- "requestId": "2996bbfb-c819-4d02-92b1-c468cc67d8c6",
- "eTag": "0x8D746B740DA21FB",
- "url": "http://azureblobstorageoniotedge:11002/myaccount/cont1/Team.jpg",
- "contentType": "image/jpeg",
- "contentLength": 858129,
- "blobType": "BlockBlob"
- }
- }
- ```
-
-Congratulations! You've completed the tutorial. The following sections provide details on the event properties.
-
-### Event properties
-
-Here's the list of supported event properties and their types and descriptions.
-
-| Property | Type | Description |
-| -- | - | -- |
-| `topic` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
-| `subject` | string | Publisher-defined path to the event subject. |
-| `eventType` | string | One of the registered event types for this event source. |
-| `eventTime` | string | The time the event is generated based on the provider's UTC time. |
-| `id` | string | Unique identifier for the event. |
-| `data` | object | Blob storage event data. |
-| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. |
-| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
-
-The data object has the following properties:
-
-| Property | Type | Description |
-| -- | - | -- |
-| api | string | The operation that triggered the event. It can be one of the following values: <ul><li>BlobCreated - allowed values are: `PutBlob` and `PutBlockList`</li><li>BlobDeleted - allowed values are `DeleteBlob`, `DeleteAfterUpload` and `AutoDelete`. <p>The `DeleteAfterUpload` event is generated when blob is automatically deleted because deleteAfterUpload desired property is set to true. </p><p>`AutoDelete` event is generated when blob is automatically deleted because deleteAfterMinutes desired property value expired.</p></li></ul>|
-| clientRequestId | string | A client-provided request ID for the storage API operation. This ID can be used to correlate to Azure Storage diagnostic logs using the "client-request-id" field in the logs, and can be provided in client requests using the "x-ms-client-request-id" header. For details, see [Log Format](/rest/api/storageservices/storage-analytics-log-format). |
-| requestId | string | Service-generated request ID for the storage API operation. Can be used to correlate to Azure Storage diagnostic logs using the "request-id-header" field in the logs and is returned from initiating API call in the 'x-ms-request-id' header. See [Log Format](/rest/api/storageservices/storage-analytics-log-format). |
-| eTag | string | The value that you can use to perform operations conditionally. |
-| contentType | string | The content type specified for the blob. |
-| contentLength | integer | The size of the blob in bytes. |
-| blobType | string | The type of blob. Valid values are either "BlockBlob" or "PageBlob". |
-| url | string | The path to the blob. <br>If the client uses a Blob REST API, then the url has this structure: `\<storage-account-name\>.blob.core.windows.net/\<container-name\>/\<file-name\>`. <br>If the client uses a Data Lake Storage REST API, then the url has this structure: `\<storage-account-name\>.dfs.core.windows.net/\<file-system-name\>/\<file-name\>`. |
--
-## Next steps
-
-See the following articles from the Blob Storage documentation:
-- [Filter Blob Storage events](../../storage/blobs/storage-blob-event-overview.md#filtering-events)
-- [Recommended practices for consuming Blob Storage events](../../storage/blobs/storage-blob-event-overview.md#practices-for-consuming-events)
-In this tutorial, you published events by creating or deleting blobs in an Azure Blob Storage. See the other tutorials to learn how to forward events to cloud (Azure Event Hubs or Azure IoT Hub):
-- [Forward events to Azure Event Grid](forward-events-cloud.md)
-- [Forward events to Azure IoT Hub](forward-events-iothub.md)
event-grid Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/release-notes.md
- Title: Release Notes - Azure Event Grid IoT Edge | Microsoft Docs
-description: Azure Event Grid on IoT Edge Release Notes
Previously updated : 02/15/2022----
-# Release Notes: Azure Event Grid on IoT Edge
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## 1.0.0-preview1
-
-Initial release of Azure Event Grid on IoT Edge. Included features:
-
-* Topic creation
-* Event Subscription creation
-* Advanced Filters
-* Output batching
-* Retry policies
-* Module to module publishing
-* Publish to WebHook as a destination
-* Publish to IoT Edge Hub as a destination
-* Publish to Azure Event Grid cloud service as a destination
-* Persisted state for metadata
-* Blob storage module integration
-
-Tags: `1.0.0-preview1`
-
-## 1.0.0-preview2
-
-Preview 2 of Azure Event Grid on IoT Edge added:
-
-* Configurable persisting events to disk
-* Topic metrics
-* Event subscription metrics
-* Publish to Event Hubs as a destination
-* Publish to Service Bus Queues as a destination
-* Publish to Service Bus Topics as a destination
-* Publish to Storage Queues as a destination
-
-Tags: `1.0.0-preview2`, `1.0`, `latest`
event-grid Security Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/security-authentication.md
- Title: Security and authentication - Azure Event Grid IoT Edge | Microsoft Docs
-description: Security and authentication in Event Grid on IoT Edge.
----- Previously updated : 02/15/2022---
-# Security and authentication
-
-Security and authentication is an advanced concept that requires familiarity with Event Grid basics first. If you're new to Event Grid on IoT Edge, start with [Event Grid concepts](concepts.md). The Event Grid module builds on the existing security infrastructure of IoT Edge. Refer to [this documentation](../../iot-edge/security.md) for details and setup.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
--
-The following sections describe in detail how these settings are secured and authenticated:
-
-* TLS configuration
-* Inbound client authentication
-* Outbound server authentication
-* Outbound client authentication
-
->[!IMPORTANT]
->Event Grid module security and authentication leverage the existing infrastructure available on IoT Edge. The assumption is that the IoT Edge subsystem is secure.
-
->[!IMPORTANT]
->Event Grid configuration is **secure by default**. The following subsections explain all the options and possible values that you can use to override aspects of authentication. Understand the impact before making any changes. For any changes to take effect, the Event Grid module needs to be redeployed.
-
-## TLS configuration (also known as server authentication)
-
-The Event Grid module hosts both HTTP and HTTPS endpoints. Every IoT Edge module is assigned a server certificate by the IoT Edge security daemon, and that server certificate is used to secure the endpoint. On expiration, the module automatically refreshes with a new certificate from the IoT Edge security daemon.
-
-By default, only HTTPS communication is allowed. You can override this behavior via the **inbound__serverAuth__tlsPolicy** configuration. The following table captures the possible values of this property.
-
-| Possible Value(s) | Description |
-| - | |
-| Strict | Default. Enables HTTPS only
-| Enabled | Enables both HTTP and HTTPS
| Disabled | Enables HTTP only
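-
-For example, to allow both HTTP and HTTPS during development, you could set the policy through the module's environment variables in the container create options, using the same mechanism shown in the tutorials:
-
-```json
-{
-  "Env": [
-    "inbound__serverAuth__tlsPolicy=enabled"
-  ]
-}
-```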
-
-## Inbound client authentication
-
-Clients are entities that do management and/or runtime operations. Clients can be other IoT Edge modules or non-IoT applications.
-
-Event Grid module supports two types of client authentication:
-
-* Shared Access Signature (SAS) key-based
-* certificate-based
-
-By default, the Event Grid module is configured to accept only certificate-based authentication. On startup, the Event Grid module retrieves the trust bundle from the IoT Edge security daemon and uses it to validate any client certificate. Client certificates that don't resolve to this chain are rejected with `UnAuthorized`.
-
-### Certificate-based client authentication
-
-Certificate-based authentication is on by default. You can choose to disable certificate-based authentication via the property
-**inbound__clientAuth__clientCert__enabled**. The following table captures possible value(s).
-
-| Possible Value(s) | Description |
-| - | |
-| true | Default. Requires all requests into the Event Grid module to present a client certificate. Additionally, you will need to configure **inbound__clientAuth__clientCert__source**.
| false | Don't force a client to present a certificate.
-
-The following table captures possible value(s) for **inbound__clientAuth__clientCert__source**
-
-| Possible Value(s) | Description |
-| - | |
-| IoT Edge | Default. Uses the IoT Edge's Trustbundle to validate all client certificates.
-
-If a client presents a self-signed certificate, the **inbound__clientAuth__clientCert__allowUnknownCA** property controls whether the Event Grid module accepts the request. The following table captures possible value(s).
-
-| Possible Value(s) | Description |
-| - | |
-| true | Default. Allows self-signed certificates to be presented successfully.
-| false | Will fail requests if self-signed certificates are presented.
-
->[!IMPORTANT]
->In production scenarios, you may want to set **inbound__clientAuth__clientCert__allowUnknownCA** to **false**.
-
-### SAS key-based client authentication
-
-In addition to certificate-based authentication, the Event Grid module can also do SAS key-based authentication. A SAS key is a secret configured in the Event Grid module that's used to validate all incoming calls. Clients need to specify the secret in the HTTP header 'aeg-sas-key'. Requests that don't match are rejected with `UnAuthorized`.
-
-The configuration to control SAS key-based authentication is
-**inbound__clientAuth__sasKeys__enabled**.
-
-| Possible Value(s) | Description |
-| - | |
-| true | Allows SAS key-based authentication. Requires **inbound__clientAuth__sasKeys__key1** or **inbound__clientAuth__sasKeys__key2**
| false | Default. SAS key-based authentication is disabled.
-
-**inbound__clientAuth__sasKeys__key1** and **inbound__clientAuth__sasKeys__key2** are keys that you configure the Event Grid module to check incoming requests against. At least one of the keys needs to be configured. The client making the request presents the key in the '**aeg-sas-key**' request header. If both keys are configured, the client can present either one.
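-
-For example, here's a sketch of the Env entries that enable SAS key-based authentication. The key values shown are placeholders, not real keys:
-
-```json
-{
-  "Env": [
-    "inbound__clientAuth__sasKeys__enabled=true",
-    "inbound__clientAuth__sasKeys__key1=<your-key-1>",
-    "inbound__clientAuth__sasKeys__key2=<your-key-2>"
-  ]
-}
-```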
-
-> [!NOTE]
->You can configure both authentication methods. In that case, the SAS key is checked first, and certificate-based authentication is performed only if the SAS key check fails. For a request to succeed, only one of the authentication methods needs to succeed.
-
-## Outbound client authentication
-
-In the outbound context, the client is the Event Grid module itself. The operation being done is delivering events to subscribers. Subscribing modules are considered the server.
-
-Every IoT Edge module is assigned an identity certificate by the IoT Edge security daemon. We use the identity certificate for outgoing calls. On expiration, the module automatically refreshes with a new certificate from the IoT Edge security daemon.
-
-The configuration to control outbound client authentication is
-**outbound__clientAuth__clientCert__enabled**.
-
-| Possible Value(s) | Description |
-| - | |
| true | Default. Requires all outgoing requests from the Event Grid module to present a certificate. You also need to configure **outbound__clientAuth__clientCert__source**.
-| false | Don't require Event Grid module to present its certificate.
-
-The configuration that controls the source for the certificate is
-**outbound__clientAuth__clientCert__source**.
-
-| Possible Value(s) | Description |
-| - | |
-| IoT Edge | Default. Uses the module's identity certificate configured by IoT Edge security daemon.
-
-## Outbound server authentication
-
-One of the destination types for an Event Grid subscriber is "Webhook". By default, only HTTPS endpoints are accepted for such subscribers.
-
-The configuration to control the webhook destination policy is **outbound__webhook__httpsOnly**.
-
-| Possible Value(s) | Description |
-| - | |
-| true | Default. Allows only subscribers with HTTPS endpoint.
-| false | Allows subscribers with either HTTP or HTTPS endpoint.
-
-By default, Event Grid module will validate the subscriber's server certificate. You can skip validation by overriding **outbound__webhook__skipServerCertValidation**. Possible values are:
-
-| Possible Value(s) | Description |
-| - | |
-| true | Don't validate subscriber's server certificate.
-| false | Default. Validate subscriber's server certificate.
-
-If the subscriber's certificate is self-signed, the **outbound__webhook__allowUnknownCA** property controls whether the Event Grid module accepts such subscribers. The following table captures the possible value(s).
-
-| Possible Value(s) | Description |
-| - | |
-| true | Default. Allows self-signed certificates to be presented successfully.
-| false | Will fail requests if self-signed certificates are presented.
-
->[!IMPORTANT]
->In production scenarios, you'll want to set **outbound__webhook__allowUnknownCA** to **false**.
-
-> [!NOTE]
->The IoT Edge environment generates self-signed certificates. For production workloads, we recommend generating certificates issued by authorized CAs and setting the **allowUnknownCA** property to **false** on both the inbound and outbound sides.
-
-## Summary
-
-An Event Grid module is **secure by default**. We recommend keeping these defaults for your production deployments.
-
-The following are the guiding principles to use while configuring:
-
-* Allow only HTTPS requests into the module.
-* Allow only certificate-based client authentication. Allow only those certificates that are issued by well-known CAs. Disallow self-signed certificates.
-* Disallow SAS key-based client authentication.
-* Always present Event Grid module's identity certificate on outgoing calls.
-* Allow only HTTPS subscribers for Webhook destination types.
-* Always validate subscriber's server certificate for Webhook destination types. Allow only certificates issued by well-known CAs. Disallow self-signed certificates.
-
-By default, Event Grid module is deployed with the following configuration:
-
- ```json
- {
- "Env": [
- "inbound__serverAuth__tlsPolicy=strict",
- "inbound__serverAuth__serverCert__source=IoTEdge",
- "inbound__clientAuth__sasKeys__enabled=false",
- "inbound__clientAuth__clientCert__enabled=true",
- "inbound__clientAuth__clientCert__source=IoTEdge",
- "inbound__clientAuth__clientCert__allowUnknownCA=true",
- "outbound__clientAuth__clientCert__enabled=true",
- "outbound__clientAuth__clientCert__source=IoTEdge",
- "outbound__webhook__httpsOnly=true",
- "outbound__webhook__skipServerCertValidation=false",
- "outbound__webhook__allowUnknownCA=true"
- ],
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
-}
-```
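-
-Following these guiding principles, a hardened production deployment might override only the two `allowUnknownCA` defaults. This is a sketch; the entries not shown keep the default values above:
-
-```json
-{
-  "Env": [
-    "inbound__clientAuth__clientCert__allowUnknownCA=false",
-    "outbound__webhook__allowUnknownCA=false"
-  ]
-}
-```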
event-grid Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/troubleshoot.md
- Title: Troubleshoot - Azure Event Grid IoT Edge | Microsoft Docs
-description: Troubleshooting in Event Grid on IoT Edge.
----- Previously updated : 02/15/2022---
-# Common issues
-
-If you experience issues using Azure Event Grid on IoT Edge in your environment, use this article as a guide for troubleshooting and resolution.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-## View Event Grid module logs
-
-To troubleshoot, you might need to access Event Grid module logs. On the VM where the module is deployed, run the following command:
-
-On Windows,
-
-```sh
-docker -H npipe:////./pipe/iotedge_moby_engine container logs eventgridmodule
-```
-
-On Linux,
-
-```sh
-sudo docker logs eventgridmodule
-```
-
-## Unable to make HTTPS requests
-
-* First, make sure the Event Grid module has **inbound:serverAuth:tlsPolicy** set to **strict** or **enabled**.
-
-* In case of module-to-module communication, make sure that you are making the call on port **4438** and the name of the module matches what is deployed.
-
- For example, if Event Grid module was deployed with name **eventgridmodule** then your URL should be **https://eventgridmodule:4438**. Make sure casing and port number are correct.
-
-* If the request is from a non-IoT application, make sure the Event Grid port is mapped onto the host machine during deployment. For example:
-
- ```json
- "HostConfig": {
- "PortBindings": {
- "4438/tcp": [
- {
- "HostPort": "4438"
- }
- ]
- }
- }
- ```
-
-## Unable to make HTTP requests
-
-* First, make sure the Event Grid module has **inbound:serverAuth:tlsPolicy** set to **enabled** or **disabled**.
-
-* In case of module-to-module communications, make sure that you are making the call on port **5888** and the name of the module matches what is deployed.
-
- For example, if Event Grid module was deployed with name **eventgridmodule** then your URL should be **http://eventgridmodule:5888**. Make sure casing and port number are correct.
-
-* If the request is from a non-IoT application, make sure the Event Grid port is mapped onto the host machine during deployment. For example:
-
- ```json
- "HostConfig": {
- "PortBindings": {
- "5888/tcp": [
- {
- "HostPort": "5888"
- }
- ]
- }
- }
- ```
-
-## Certificate chain was issued by an authority that's not trusted
-
-By default, the Event Grid module is configured to authenticate clients with certificates issued by the IoT Edge security daemon. Make sure the client presents a certificate that's rooted to this chain.
-
-The **IoTSecurity** class in [https://github.com/Azure/event-grid-iot-edge](https://github.com/Azure/event-grid-iot-edge) shows how to retrieve certificates from the IoT Edge security daemon and use them to configure outgoing calls.
-
-If it's a non-production environment, you have the option to turn off client authentication. For more information, see [Security and Authentication](security-authentication.md).
-
-## Debug events not received by subscriber
-
-Typical reasons for this are:
-
-* The event was never successfully posted. The client should have received an HTTP status code of 200 (OK) when posting an event to the Event Grid module.
-
-* Check the event subscription to verify:
- * Endpoint URL is valid
- * Any filters in the subscription are not causing the event to be "dropped".
-
-* Verify if the subscriber module is running
-
-* Log on to the VM where Event Grid module is deployed and view its logs.
-
-* Turn on per-delivery logging by setting **broker:logDeliverySuccess=true**, redeploying the Event Grid module, and retrying the request. Per-delivery logging can impact throughput and latency, so once debugging is complete, we recommend setting **broker:logDeliverySuccess=false** and redeploying the Event Grid module.
-
-* Turn on metrics by setting **metrics:reportertype=console** and redeploying the Event Grid module. Any operations after that result in metrics being logged on the console of the Event Grid module, which can be used to debug further. We recommend turning on metrics only for debugging; once complete, turn them off by setting **metrics:reportertype=none** and redeploying the Event Grid module.
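-
-As a sketch, both debug settings might be applied through the module's container create options. The double-underscore Env form below is an assumption, mapping the colon-separated setting names above to the Env convention used in [Security and authentication](security-authentication.md):
-
-```json
-{
-  "Env": [
-    "broker__logDeliverySuccess=true",
-    "metrics__reportertype=console"
-  ]
-}
-```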
-
-## Next steps
-
-Report any issues or suggestions for using Event Grid on IoT Edge at [https://github.com/Azure/event-grid-iot-edge/issues](https://github.com/Azure/event-grid-iot-edge/issues).
event-grid Twin Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/edge/twin-json.md
- Title: Module Twin - Azure Event Grid IoT Edge | Microsoft Docs
-description: Configuration via Module Twin.
--- Previously updated : 02/15/2022---
-# Module twin JSON schema (Azure Event Grid)
-
-Event Grid on IoT Edge integrates with the IoT Edge ecosystem and supports creating topics and subscriptions via the Module Twin. It also reports the current state of all the topics and event subscriptions to the reported properties on the Module Twin.
-
-> [!IMPORTANT]
-> On March 31, 2023, Event Grid on Azure IoT Edge support will be retired, so make sure to transition to IoT Edge native capabilities prior to that date. For more information, see [Transition from Event Grid on Azure IoT Edge to Azure IoT Edge](transition.md).
---
-> [!WARNING]
> Because of limitations in the IoT Edge ecosystem, all array elements in the following JSON example have been encoded as JSON strings. See the `EventSubscription.Filter.EventTypes` and `EventSubscription.Filter.AdvancedFilters` keys in the following example.
-
-## Desired properties JSON
-
-* The value of each key-value pair in the **topics** section has exactly the same JSON schema that's used for `Topic.Properties` on the API when creating topics.
-* The value of each key-value pair in the **eventSubscriptions** section has exactly the same JSON schema that's used for `EventSubscription.Properties` on the API when creating event subscriptions.
-* To delete a topic, set its value to `null` in the desired properties.
-* Deleting event subscriptions via desired properties is not supported.
-
-```json
-{
- "topics": {
- "twinTopic1": {},
- "twinTopic2": {
- "inputSchema": "customEventSchema"
- }
- },
- "eventSubscriptions": {
- "twinTopic1WebhookSub": {
- "topic": "twinTopic1",
- "retryPolicy": {
- "eventExpiryInMinutes": 120,
- "maxDeliveryAttempts": 30
- },
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "https://localhost:4438"
- }
- },
- "filter": {
- "subjectBeginsWith": "^",
- "subjectEndsWith": "$",
- "isSubjectCaseSensitive": false,
- "includedEventTypes": "[\"et1\",\"et2\"]",
- "advancedFilters": "[{\"value\":true,\"operatorType\":\"BoolEquals\",\"key\":\"data.b\"},{\"values\":[\"\\\"\",\"c\"], \"operatorType\":\"StringContains\",\"key\":\"data.s\"}]"
- }
- },
- "twinTopic2EdgeHubSub": {
- "topic": "twinTopic2",
- "deliveryPolicy": {
- "approxBatchSizeInBytes": 200000,
- "maxEventsPerBatch": 25
- },
- "destination": {
- "endpointType": "EdgeHub",
- "properties": {
- "outputName": "twinTopic2EdgeHubSub"
- }
- },
- "filter": {
- "advancedFilters": "[{\"value\":true,\"operatorType\":\"BoolEquals\",\"key\":\"dAt\\\"A.a\"},{\"values\":[\"\\\"\", \"c\"],\"operatorType\":\"StringContains\",\"key\":\"dAt\\\"A.a\"}]"
- }
- }
- }
-}
-```
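-
-For example, to delete `twinTopic2` from the preceding sketch, a later desired-properties update might set its value to `null`. Note that any event subscriptions referencing the topic must be deleted through the API, because deleting event subscriptions via desired properties isn't supported:
-
-```json
-{
-  "topics": {
-    "twinTopic2": null
-  }
-}
-```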
-
-## Reported properties JSON
-
-The reported properties section of the module twin includes the following information:
-
-* The set of topics and subscriptions that exist in the module's store
-* Any errors encountered when creating desired topics/event subscriptions
-* Any boot up errors (such as desired properties JSON parsing failed)
-
-```json
-{
- "topics": {
- "twinTopic1": {},
- "twinTopic2": {
- "inputSchema": "customEventSchema"
- }
- },
- "eventSubscriptions": {
- "twinTopic1WebhookSub": {
- "topic": "twinTopic1",
- "retryPolicy": {
- "eventExpiryInMinutes": 120,
- "maxDeliveryAttempts": 30
- },
- "destination": {
- "endpointType": "WebHook",
- "properties": {
- "endpointUrl": "https://localhost:4438"
- }
- },
- "filter": {
- "subjectBeginsWith": "^",
- "subjectEndsWith": "$",
- "isSubjectCaseSensitive": false,
- "includedEventTypes": "[\"et1\",\"et2\"]",
- "advancedFilters": "[{\"value\":true,\"operatorType\":\"BoolEquals\",\"key\":\"data.b\"},{\"values\":[\"\\\"\",\"c\"], \"operatorType\":\"StringContains\",\"key\":\"data.s\"}]"
- }
- },
- "twinTopic2EdgeHubSub": {
- "topic": "twinTopic2",
- "deliveryPolicy": {
- "approxBatchSizeInBytes": 200000,
- "maxEventsPerBatch": 25
- },
- "destination": {
- "endpointType": "EdgeHub",
- "properties": {
- "outputName": "twinTopic2EdgeHubSub"
- }
- },
- "filter": {
- "advancedFilters": "[{\"value\":true,\"operatorType\":\"BoolEquals\",\"key\":\"dAt\\\"A.a\"},{\"values\":[\"\\\"\", \"c\"],\"operatorType\":\"StringContains\",\"key\":\"dAt\\\"A.a\"}]"
- }
- }
- },
- "errors": {
- "bootupMessage": "",
- "bootupException": "",
- "topics": {},
- "eventSubscriptions": {
- "twinTopic1EventGridUserTopicSub": "HttpStatusCode='BadRequest' ErrorCode='InvalidDestination' Message='EventSubscription.Properties.Destination failed validation. Reason: EndpointUrl must target the /api/events API of Azure Event Grid in the cloud..'"
- }
- }
-}
-```
event-grid Event Hubs Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-hubs-integration.md
Title: 'Tutorial: Send Event Hubs data to data warehouse - Event Grid'
-description: Describes how to store Event Hubs captured data in Azure Synapse Analytics via Azure Functions and Event Grid triggers.
+description: Shows how to migrate Event Hubs captured data from Azure Blob Storage to Azure Synapse Analytics, specifically a dedicated SQL pool, using Azure Event Grid and Azure Functions.
Last updated 11/14/2022 ms.devlang: csharp
-# Tutorial: Stream big data into a data warehouse
-Azure [Event Grid](overview.md) is an intelligent event routing service that enables you to react to notifications or events from apps and services. For example, it can trigger an Azure function to process Event Hubs data that's captured to a Blob storage or Data Lake Storage. This [sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) shows you how to use Event Grid and Azure Functions to migrate captured Event Hubs data from blob storage to Azure Synapse Analytics, specifically a dedicated SQL pool.
+# Tutorial: Migrate Event Hubs captured data from Azure Storage to Azure Synapse Analytics using Azure Event Grid and Azure Functions
+In this tutorial, you'll migrate Event Hubs captured data from Azure Blob Storage to Azure Synapse Analytics, specifically a dedicated SQL pool, using Azure Event Grid and Azure Functions.
++
+This diagram depicts the workflow of the solution you build in this tutorial:
+
+1. Data sent to an Azure event hub is captured in Azure Blob storage.
+2. When the data capture is complete, an event is generated and sent to Azure Event Grid.
+3. Azure Event Grid forwards this event data to an Azure function app.
+4. The function app uses the blob URL in the event data to retrieve the blob from the storage.
+5. The function app migrates the blob data to Azure Synapse Analytics.
+
+In this article, you take the following steps:
+
+> [!div class="checklist"]
+> - Deploy the required infrastructure for the tutorial
+> - Publish code to a Functions App
+> - Create an Event Grid subscription
+> - Stream sample data into Event Hubs
+> - Verify captured data in Azure Synapse Analytics
+
+## Prerequisites
+To complete this tutorial, you must have:
+
+- Familiarity with Event Grid and Event Hubs, especially the Capture feature. If you aren't familiar with Azure Event Grid, see [Introduction to Azure Event Grid](overview.md). To learn about the Capture feature of Azure Event Hubs, see [Capture events through Azure Event Hubs in Azure Blob Storage or Azure Data Lake Storage](../event-hubs/event-hubs-capture-overview.md).
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Visual Studio](https://www.visualstudio.com/vs/) with workloads for: .NET desktop development, Azure development, ASP.NET and web development, Node.js development, and Python development.
+- The [EventHubsCaptureEventGridDemo sample project](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) downloaded to your computer. It includes:
+ - WindTurbineDataGenerator - A simple publisher that sends sample wind turbine data to a capture-enabled event hub.
+ - FunctionDWDumper - An Azure function that receives a notification from Azure Event Grid when an Avro file is captured to an Azure Storage blob. It receives the blob's URI path, reads its contents, and pushes this data to Azure Synapse Analytics (dedicated SQL pool).
+
+## Deploy the infrastructure
+In this step, you deploy the required infrastructure with a [Resource Manager template](https://github.com/Azure/azure-docs-json-samples/blob/master/event-grid/EventHubsDataMigration.json). When you deploy the template, the following resources are created:
+
+* Event hub with the Capture feature enabled.
+* Storage account for the captured files.
+* App service plan for hosting the function app.
+* Function app for processing the event.
+* SQL Server for hosting the data warehouse.
+* Azure Synapse Analytics (dedicated SQL pool) for storing the migrated data.
+
+### Use Azure CLI to deploy the infrastructure
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select the **Cloud Shell** button at the top.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/azure-portal.png" alt-text="Screenshot of Azure portal showing the selection of Cloud Shell button.":::
+3. You see the Cloud Shell opened at the bottom of the browser.
+ 1. If you're using the Cloud Shell for the first time:
+ 1. If you see an option to select between **Bash** and **PowerShell**, select **Bash**.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/launch-cloud-shell.png" alt-text="Screenshot of Cloud Shell with Bash selected.":::
+
+ 1. Create a storage account by selecting **Create storage**. Azure Cloud Shell requires an Azure storage account to store some files.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/create-storage-cloud-shell.png" alt-text="Screenshot showing the creation of storage for Cloud Shell.":::
+ 1. Wait until the Cloud Shell is initialized.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/cloud-shell-initialized.png" alt-text="Screenshot showing the Cloud Shell initialized.":::
+4. In the Cloud Shell, select **Bash** as shown in the above image, if it isn't already selected.
+1. Create an Azure resource group by running the following CLI command:
+ 1. Copy and paste the following command into the Cloud Shell window. Change the resource group name and location if you want.
+
+ ```azurecli
+ az group create -l eastus -n rgDataMigration
+ ```
+ 2. Press **ENTER**.
+
+ Here's an example:
+
+ ```azurecli
+ user@Azure:~$ az group create -l eastus -n rgDataMigration
+ {
+ "id": "/subscriptions/00000000-0000-0000-0000-0000000000000/resourceGroups/rgDataMigration",
+ "location": "eastus",
+ "managedBy": null,
+ "name": "rgDataMigration",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null
+ }
+ ```
+2. Deploy all the resources mentioned in the previous section (event hub, storage account, functions app, Azure Synapse Analytics) by running the following CLI command:
+ 1. Copy and paste the command into the Cloud Shell window. Alternatively, you may want to copy/paste into an editor of your choice, set values, and then copy the command to the Cloud Shell.
+
+ > [!IMPORTANT]
+ > Specify values for the following entities before running the command:
+ > - Name of the resource group you created earlier.
+ > - Name for the event hub namespace.
+ > - Name for the event hub. You can leave the value as it is (hubdatamigration).
+ > - Name for the SQL server.
+ > - Name of the SQL user and password.
+ > - Name for the database.
+ > - Name of the storage account.
+ > - Name for the function app.
++
+ ```azurecli
+ az deployment group create \
+ --resource-group rgDataMigration \
+ --template-uri https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/event-grid/EventHubsDataMigration.json \
+ --parameters eventHubNamespaceName=<event-hub-namespace> eventHubName=hubdatamigration sqlServerName=<sql-server-name> sqlServerUserName=<user-name> sqlServerPassword=<password> sqlServerDatabaseName=<database-name> storageName=<unique-storage-name> functionAppName=<app-name>
+ ```
+ 2. Press **ENTER** in the Cloud Shell window to run the command. This process may take a while since you're creating several resources. In the result of the command, ensure that there have been no failures.
+1. Close the Cloud Shell by selecting the **Cloud Shell** button in the portal or the **X** button in the top-right corner of the Cloud Shell window.
+
+### Verify that the resources are created
+
+1. In the Azure portal, select **Resource groups** on the left menu.
+2. Filter the list of resource groups by entering the name of your resource group in the search box.
+3. Select your resource group in the list.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/select-resource-group.png" alt-text="Screenshot showing the selection of your resource group.":::
+4. Confirm that you see the following resources in the resource group:
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/resources-in-resource-group.png" alt-text="Screenshot showing resources in the resource group." lightbox="media/event-hubs-functions-synapse-analytics/resources-in-resource-group.png":::
+
+### Create a table in Azure Synapse Analytics
+In this section, you create a table in the dedicated SQL pool you created earlier.
+
+1. In the list of resources in the resource group, select your **dedicated SQL pool**.
+2. On the **Dedicated SQL pool** page, in the **Common Tasks** section on the left menu, select **Query editor (preview)**.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/sql-data-warehouse-page.png" alt-text="Screenshot showing the selection of Query Editor on a Dedicated SQL pool page in the Azure portal.":::
+2. Enter the **user name** and **password** for the SQL server, and select **OK**. If you see a message about allowing your client to access the SQL server, select **Allowlist IP &lt;your IP Address&gt; on server &lt;your SQL server&gt;**, and then select **OK**.
+1. In the query window, copy and run the following SQL script:
+
+ ```sql
+ CREATE TABLE [dbo].[Fact_WindTurbineMetrics] (
+ [DeviceId] nvarchar(50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
+ [MeasureTime] datetime NULL,
+ [GeneratedPower] float NULL,
+ [WindSpeed] float NULL,
+ [TurbineSpeed] float NULL
+ )
+ WITH (CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = ROUND_ROBIN);
+ ```
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/run-sql-query.png" alt-text="Screenshot showing the query editor.":::
+5. Keep this tab or window open so that you can verify that the data is created at the end of the tutorial.
+
+## Publish the Azure Functions app
+First, get the publish profile for the Functions app from the Azure portal. Then, use the publish profile to publish the Azure Functions project or app from Visual Studio.
+
+### Get the publish profile
+
+1. On the **Resource Group** page, select the **Azure Functions app** in the list of resources.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/select-function-app.png" alt-text="Screenshot showing the selection of the function app in the list of resources for a resource group.":::
+1. On the **Function App** page for your app, select **Get publish profile** on the command bar.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/get-publish-profile.png" alt-text="Screenshot showing the selection of the **Get Publish Profile** button on the command bar of the function app page.":::
+1. Download and save the file into the **FunctionEGDWDumper** subfolder of the **EventHubsCaptureEventGridDemo** folder.
+
+### Use the publish profile to publish the Functions app
+
+1. Launch Visual Studio.
+2. Open the **EventHubsCaptureEventGridDemo.sln** solution that you downloaded from the [GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) as part of the prerequisites. You can find it in the `/samples/e2e/EventHubsCaptureEventGridDemo` folder.
+3. In Solution Explorer, right-click **FunctionEGDWDumper** project, and select **Publish**.
+4. In the following screen, select **Start** or **Add a publish profile**.
+5. In the **Publish** dialog box, select **Import Profile** for **Target**, and select **Next**.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/import-profile.png" alt-text="Screenshot showing the selection **Import Profile** on the **Publish** dialog box.":::
+1. On the **Import profile** tab, select the publish settings file that you saved earlier in the **FunctionEGDWDumper** folder, and then select **Finish**.
+1. When Visual Studio has configured the profile, select **Publish**. Confirm that the publishing succeeded.
+2. In the web browser that has the **Azure Function** page open, select **Functions** on the left menu. Confirm that the **EventGridTriggerMigrateData** function shows up in the list. If you don't see it, try publishing from Visual Studio again, and then refresh the page in the portal.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-function-creation.png" alt-text="Screenshot showing the confirmation of function creation.":::
+
+After publishing the function, you're ready to subscribe to the event.
+
+## Subscribe to the event
+
+1. In a new tab or new window of a web browser, navigate to the [Azure portal](https://portal.azure.com).
+2. In the Azure portal, select **Resource groups** on the left menu.
+3. Filter the list of resource groups by entering the name of your resource group in the search box.
+4. Select your resource group in the list.
+1. Select the **Event Hubs namespace** from the list of resources.
+1. On the **Event Hubs Namespace** page, select **Events** on the left menu, and then select **+ Event Subscription** on the toolbar.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/event-hub-add-subscription-link.png" alt-text="Screenshot of the Events page for an Event Hubs namespace with Add event subscription link selected. ":::
+1. On the **Create Event Subscription** page, follow these steps:
+ 1. Enter a name for the **event subscription**.
+ 1. Enter a name for the **system topic**. A system topic provides an endpoint for the sender to send events. For more information, see [System topics](system-topics.md)
+ 1. For **Endpoint Type**, select **Azure Function**.
+ 1. For **Endpoint**, select the link.
+ 1. On the **Select Azure Function** page, follow these steps if they aren't automatically filled.
+ 1. Select the Azure subscription that has the Azure function.
+ 1. Select the resource group for the function.
+ 1. Select the function app.
+ 1. Select the deployment slot.
+ 1. Select the function **EventGridTriggerMigrateData**.
+ 1. On the **Select Azure Function** page, select **Confirm Selection**.
+ 1. Then, back on the **Create Event Subscription** page, select **Create**.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/event-subscription-select-function.png" alt-text="Screenshot of the Create an event subscription page." lightbox="media/event-hubs-functions-synapse-analytics/event-subscription-select-function.png":::
+1. Verify that the event subscription is created. Switch to the **Event Subscriptions** tab on the **Events** page for the Event Hubs namespace.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/confirm-event-subscription.png" alt-text="Screenshot showing the Event Subscriptions tab on the Events page." lightbox="media/event-hubs-functions-synapse-analytics/confirm-event-subscription.png":::
+1. Select the App Service plan (not the App Service) in the list of resources in the resource group.
+
+## Run the app to generate data
+You've finished setting up your event hub, dedicated SQL pool (formerly SQL Data Warehouse), Azure function app, and event subscription. Before running an application that generates data for the event hub, you need to configure a few values.
+
+1. In the Azure portal, navigate to your resource group as you did earlier.
+2. Select the Event Hubs namespace.
+3. In the **Event Hubs Namespace** page, select **Shared access policies** on the left menu.
+4. Select **RootManageSharedAccessKey** in the list of policies.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/event-hub-namespace-shared-access-policies.png" alt-text="Screenshot showing the Shared access policies page for an Event Hubs namespace.":::
+1. Select the copy button next to the **Connection string-primary key** text box.
+1. Go back to your Visual Studio solution.
+1. Right-click **WindTurbineDataGenerator** project, and select **Set as Startup project**.
+1. In the WindTurbineDataGenerator project, open **program.cs**.
+1. Replace `<EVENT HUBS NAMESPACE CONNECTION STRING>` with the connection string you copied from the portal.
+1. If you've used a different name for the event hub other than `hubdatamigration`, replace `<EVENT HUB NAME>` with the name of the event hub.
+
+ ```cs
+ private const string EventHubConnectionString = "Endpoint=sb://demomigrationnamespace.servicebus.windows.net/...";
+ private const string EventHubName = "hubdatamigration";
+ ```
+6. Build the solution. Run the **WindTurbineGenerator.exe** application.
+7. After a couple of minutes, in the other browser tab where you have the query window open, query the table in your data warehouse for the migrated data.
+
+ ```sql
+ select * from [dbo].[Fact_WindTurbineMetrics]
+ ```
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/query-results.png" alt-text="Screenshot showing the query results.":::
+
+## Monitor the solution
+This section helps you with monitoring or troubleshooting the solution.
+
+### View captured data in the storage account
+1. Navigate to the resource group and select the storage account used for capturing event data.
+1. On the **Storage account** page, select **Storage Explorer (preview)** on the left menu.
+1. Expand **BLOB CONTAINERS**, and select **windturbinecapture**.
+1. Open the folder named the same as your **Event Hubs namespace** in the right pane.
+1. Open the folder named the same as your event hub (**hubdatamigration**).
+1. Drill through the folders, and you see the Avro files. Here's an example:
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/storage-captured-file.png" alt-text="Screenshot showing the captured file in the storage." lightbox="media/event-hubs-functions-synapse-analytics/storage-captured-file.png":::
+
+
+### Verify that the Event Grid trigger invoked the function
+1. Navigate to the resource group and select the function app.
+1. Select **Functions** on the left menu.
+1. Select the **EventGridTriggerMigrateData** function from the list.
+1. On the **Function** page, select **Monitor** on the left menu.
+1. Select **Configure** to configure application insights to capture invocation logs.
+1. Create a new **Application Insights** resource or use an existing resource.
+1. Navigate back to the **Monitor** page for the function.
+1. Confirm that the client application (**WindTurbineDataGenerator**) that's sending the events is still running. If not, run the app.
+1. Wait for a few minutes (5 minutes or more) and select the **Refresh** button to see function invocations.
+
+ :::image type="content" source="media/event-hubs-functions-synapse-analytics/function-invocations.png" alt-text="Screenshot showing the Function invocations.":::
+1. Select an invocation to see details.
+
+ Event Grid distributes event data to the subscribers. The following example shows event data generated when data streaming through an event hub is captured in a blob. In particular, notice the `fileUrl` property in the `data` object points to the blob in the storage. The function app uses this URL to retrieve the blob file with captured data.
+
+ ```json
+ {
+ "topic": "/subscriptions/<AZURE SUBSCRIPTION ID>/resourcegroups/rgDataMigration/providers/Microsoft.EventHub/namespaces/spehubns1207",
+ "subject": "hubdatamigration",
+ "eventType": "Microsoft.EventHub.CaptureFileCreated",
+ "id": "4538f1a5-02d8-4b40-9f20-36301ac976ba",
+ "data": {
+ "fileUrl": "https://spehubstorage1207.blob.core.windows.net/windturbinecapture/spehubns1207/hubdatamigration/0/2020/12/07/21/49/12.avro",
+ "fileType": "AzureBlockBlob",
+ "partitionId": "0",
+ "sizeInBytes": 473444,
+ "eventCount": 2800,
+ "firstSequenceNumber": 55500,
+ "lastSequenceNumber": 58299,
+ "firstEnqueueTime": "2020-12-07T21:49:12.556Z",
+ "lastEnqueueTime": "2020-12-07T21:50:11.534Z"
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2020-12-07T21:50:12.7065524Z"
+ }
+ ```
+
+### Verify that the data is stored in the dedicated SQL pool
+In the browser tab where you have the query window open, query the table in your dedicated SQL pool for the migrated data.
++
+![Screenshot showing the final query results.](media/event-hubs-functions-synapse-analytics/query-results.png)
+ ## Next steps * For more information about setting up and running the sample, see [Event Hubs Capture and Event Grid sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo).
event-grid Transition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/transition.md
+
+ Title: Transition from Event Grid on IoT Edge to Azure IoT Edge native capabilities.
+description: This article explains the transition from Event Grid on Azure IoT Edge to Azure IoT Edge hub module in Azure IoT Edge runtime.
+ Last updated : 03/29/2023+++
+# Transition from Event Grid on IoT Edge to Azure IoT Edge native capabilities
+
+On March 31, 2023, Azure Event Grid on Azure IoT Edge was retired. Update your application to use Azure IoT Edge native capabilities instead.
+
+## Why did we retire it?
+
+There's one major reason for retiring Event Grid on IoT Edge: Event Grid has been evolving in the cloud-native space to provide more robust capabilities, not only in Azure but also in on-premises scenarios with [Kubernetes with Azure Arc](./kubernetes/overview.md). The following table compares the capabilities of Event Grid on IoT Edge with those of the IoT Edge hub:
+
+| Event Grid on IoT Edge | IoT Edge hub |
+| - | -- |
+| - Publish and subscribe to events locally/in the cloud<br/>- Forward events to Event Grid<br/>- Forward events to Azure IoT Hub<br/>- React to Azure Blob Storage events locally | - Connect to IoT Hub<br/>- Route messages between modules or devices locally<br/>- Get offline support<br/>- Filter messages |
+
+## How to transition to IoT Edge features
+
+To use the IoT Edge features, follow these steps:
+
+1. Identify your scenario based on the feature table in the next section.
+2. Follow the documentation to change your architecture and make code changes based on the scenario you want to transition.
+3. Validate your updated architecture by sending and receiving messages/events.
+
+## Event Grid on IoT Edge vs. IoT Edge
+
+The following table highlights the key differences during this transition.
+
+| Event Grid on IoT Edge | IoT Edge |
+| | -- |
+| Publish, subscribe, and forward events locally or to the cloud | Use the message routing feature in the IoT Edge hub to facilitate local and cloud communication. It enables device-to-module, module-to-module, and device-to-device communications by brokering messages to keep devices and modules independent from each other. </br> </br> If you're subscribing to an IoT Edge hub, it's possible to create an event to publish to Event Grid, if needed. For details, see [Azure IoT Hub and Event Grid on IoT Edge](../iot-hub/iot-hub-event-grid.md). |
+| Forward events to IoT Hub | Use the IoT Edge hub to optimize connections when sending messages to the cloud with offline support. For details, see [IoT Edge hub cloud communication](../iot-edge/iot-edge-runtime.md#using-routing). |
+| React to Blob Storage events on IoT Edge (preview) | You can use Azure function apps to react to Blob Storage events on the cloud when a blob is created or updated. For more information, see [Azure Blob Storage trigger for Azure Functions](../azure-functions/functions-bindings-storage-blob-trigger.md) and [Tutorial: Deploy Azure Functions as modules - Azure IoT Edge](../iot-edge/tutorial-deploy-function.md). Blob triggers in an IoT Edge Blob Storage module aren't supported. |
event-grid Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/whats-new.md
This release corresponds to REST API version 2021-06-01-preview, which includes
- [Cloud event V10 schema](cloud-event-schema.md) - [Service Bus topic as destination](handler-service-bus.md) - [Azure function as destination](handler-functions.md)
- - [Webhook batching](./edge/delivery-output-batching.md)
- [Secure webhook (Azure Active Directory support)](secure-webhook-delivery.md) - [Ip filtering](configure-firewall.md) - [Private Link Service support](configure-private-endpoints.md)
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Chennai2** | Airtel | 2 | South India | Supported | Airtel | | **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | Supported | CoreSite, DE-CIX |
-| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | Interxion |
+| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | GlobalConnect, Interxion |
| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Zayo| | **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite, Megaport, PacketFabric, Zayo |
-| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect |
+| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect, Vodafone |
| **Doha2** | [Ooredoo](https://www.ooredoo.qa/portal/OoredooQatar/b2b-data-centre) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect | | **Dubai** | [PCCS](http://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | Supported | Etisalat UAE | | **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom |
The following table shows connectivity locations and the service providers for e
| **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | Supported | | | **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Equinix | | **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | Supported | CenturyLink Cloud Connect, Megaport, Zayo |
-| **Santiago** | [EdgeConnex SCL](https://www.edgeconnex.com/locations/south-america/santiago/) | 3 | n/a | Supported | |
+| **Santiago** | [EdgeConnex SCL](https://www.edgeconnex.com/locations/south-america/santiago/) | 3 | n/a | Supported | PitChile |
| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, InterCloud, Level 3 Communications, Neutrona Networks, Orange, Tata Communications, Telefonica, UOLDIVEO | | **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers, Tivit | | **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Telus, Zayo |
The following table shows connectivity locations and the service providers for e
| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Telus, Verizon, Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | | | **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada, Cologix, Megaport, Telus, Zayo |
+| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | n/a | Supported | Equinix |
| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/), [Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US, East US 2 | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Lightpath, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo | | **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US, East US 2 | n/a | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom, Zayo |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **du datamena** |Supported |Supported | Dubai2 | | **[eir evo](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin| | **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** |Supported |Supported | Hong Kong2, Singapore, Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, Hong Kong2, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Perth, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Tokyo2, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, Hong Kong2, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Perth, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Tokyo2, Toronto, Warsaw, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported | Dubai | | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London | | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported | Taipei |
The following table shows locations by service provider. If you want to view ava
| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** |Supported |Supported | Montreal, Quebec City, Toronto2 | | **[GBI](https://www.gbiinc.com/microsoft-azure/)** |Supported |Supported | Dubai2, Frankfurt | | **[GÉANT](https://www.geant.org/Networks)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, Marseille |
-| **[GlobalConnect](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported |Supported | Oslo, Stavanger |
+| **[GlobalConnect](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported |Supported | Copenhagen, Oslo, Stavanger |
| **[GlobalConnect DK](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported |Supported | Amsterdam | | **GTT** |Supported |Supported | Amsterdam, London2, Washington DC | | **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai, Mumbai |
The following table shows locations by service provider. If you want to view ava
| **[Orixcom](https://www.orixcom.com/cloud-solutions/)** | Supported | Supported | Dubai2 | | **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Las Vegas, London, Los Angeles2, Miami, New York, Silicon Valley, Toronto, Washington DC | | **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago, Hong Kong, Hong Kong2, London, Singapore, Singapore2, Tokyo2 |
+| **PitChile** | Supported | Supported | Santiago |
| **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland | | **[Reliance Jio](https://www.jio.com/business/jio-cloud-connect)** | Supported | Supported | Mumbai | | **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan |
The following table shows locations by service provider. If you want to view ava
| **[Viasat](http://www.directcloud.viasatbusiness.com/)** | Supported | Supported | Washington DC2 | | **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland, Sydney | | **Vodacom** |Supported |Supported | Cape Town, Johannesburg|
-| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported | Amsterdam2, London, Milan, Singapore |
+| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** |Supported |Supported | Amsterdam2, Doha, London, Milan, Singapore |
| **[Vi (Vodafone Idea)](https://www.myvi.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Chennai, Mumbai2 | | **XL Axiata** | Supported | Supported | Jakarta | | **[Zayo](https://www.zayo.com/services/packet/cloudlink/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Dublin, Hong Kong, London, London2, Los Angeles, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Toronto, Vancouver, Washington DC, Washington DC2, Zurich|
governance Remediation Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/remediation-structure.md
+
+ Title: Details of the policy remediation task structure
+description: Describes the policy remediation task definition used by Azure Policy to bring resources into compliance.
Last updated : 11/03/2022++++
+# Azure Policy remediation task structure
+
+The Azure Policy remediation task feature is used to bring resources into compliance established from a definition and assignment. Resources that are non-compliant to a [modify](./effects.md#modify) or [deployIfNotExists](./effects.md#deployifnotexists) definition assignment can be brought into compliance using a remediation task. A remediation task deploys the deployIfNotExists template or the modify operations to the selected non-compliant resources using the identity specified in the assignment. See [policy assignment structure](./assignment-structure.md#identity) to understand how the identity is defined, and the [remediate non-compliant resources tutorial](../how-to/remediate-resources.md#configure-the-managed-identity) to configure the identity.
+
+> [!NOTE]
> Remediation tasks remediate existing resources that aren't compliant. Newly created or updated resources that are applicable to a deployIfNotExists or modify definition assignment are automatically remediated.
+
+You use JavaScript Object Notation (JSON) to create a policy remediation task. The policy remediation task contains elements for:
+
+- [display name](#display-name-and-description)
+- [description](#display-name-and-description)
+- [policy assignment](#policy-assignment-id)
+- [policy definitions within an initiative](#policy-definition-id)
+- [resource count and parallel deployments](#resource-count-and-parallel-deployments)
+- [failure threshold](#failure-threshold)
+- [remediation filters](#remediation-filters)
+- [resource discovery mode](#resource-discovery-mode)
+- [provisioning state and deployment summary](#provisioning-state-and-deployment-summary)
++
+For example, the following JSON shows a policy remediation task for a policy definition named `requiredTags` that's part of
+an initiative assignment named `resourceShouldBeCompliantInit`, with several settings configured.
+
+```json
+{
+ "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.PolicyInsights/remediations/remediateNotCompliant",
+ "apiVersion": "2021-10-01",
+ "name": "remediateNotCompliant",
+ "type": "Microsoft.PolicyInsights/remediations",
+ "properties": {
+ "policyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit",
+ "policyDefinitionReferenceIds": "requiredTags",
+ "resourceCount": 42,
+ "parallelDeployments": 6,
+ "failureThreshold": {
+ "percentage": 0.1
+ }
+ }
+}
+```
+For steps on how to trigger a remediation task, see the [how to remediate non-compliant resources guide](../how-to/remediate-resources.md).
+
+> [!NOTE]
+> These settings cannot be changed once the remediation task has started.
++
+## Display name and description
+
+You use **displayName** and **description** to identify the policy remediation task and provide context for
+its use. **displayName** has a maximum length of _128_ characters and
+**description** a maximum length of _512_ characters.
+
+## Policy assignment ID
+
+This field must be the full path name of either a policy assignment or an initiative assignment.
+`policyAssignmentId` is a string and not an array. This property defines which assignment's non-compliant resources, whether a parent
+resource hierarchy or an individual resource, are to be remediated.
+
+## Policy definition ID
+
+If the `policyAssignmentId` is for an initiative assignment, the **policyDefinitionReferenceId** property must be used to specify which policy definition in the initiative the subject resources are to be remediated for. Because a remediation task can remediate resources within the scope of only one definition,
+this property is a _string_. The value must match the value in the initiative definition in the
+`policyDefinitions.policyDefinitionReferenceId` field.
+
+## Resource count and parallel deployments
+
+Use **resource count** to determine how many non-compliant resources to remediate in a given remediation task. The default value is 500, with the maximum being 50,000. **Parallel deployments** determines how many of those resources to remediate at the same time. The allowed range is between 1 and 30, with the default value being 10.
+
+> [!NOTE]
> Parallel deployments are the number of deployments within a single remediation task, with a maximum of 30. Up to 100 remediation tasks can run simultaneously in a tenant.
+
+## Failure threshold
+
+An optional property used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. The **failure threshold** is a percentage; in the API it's expressed as a number from 0 to 1, so the `0.1` in the preceding example means 10%. By default, the failure threshold is 100%, meaning that the remediation task continues to remediate other resources even if some resources fail to remediate.
+
+## Remediation filters
+
+An optional property that refines which resources are applicable to the remediation task. The only allowed filter is resource location. Unless specified, resources from any region can be remediated.
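+
+For example, a remediation task restricted to specific regions might include a location filter like the following sketch, which assumes the `filters.locations` property of the remediation resource:
+
+```json
+"properties": {
+  "policyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit",
+  "filters": {
+    "locations": [ "eastus", "westeurope" ]
+  }
+}
+```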
+
+## Resource discovery mode
+
+This property decides how to discover resources that are eligible for remediation. For a resource to be eligible, it must be non-compliant. By default, this property is set to `ExistingNonCompliant`. It can also be set to `ReEvaluateCompliance`, which triggers a new compliance scan for that assignment and remediates any resources that are found non-compliant.
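+
+For instance, to force a fresh compliance evaluation before remediating, the task could set the mode explicitly, as in this sketch:
+
+```json
+"properties": {
+  "policyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit",
+  "resourceDiscoveryMode": "ReEvaluateCompliance"
+}
+```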
+
+## Provisioning state and deployment summary
+
+Once a remediation task is created, the **provisioning state** and **deployment summary** properties are populated. **Provisioning state** indicates the status of the remediation task. Allowed values are `Running`, `Canceled`, `Cancelling`, `Failed`, `Complete`, or `Succeeded`. **Deployment summary** is a property indicating the total number of deployments, along with the number of successful and failed deployments.
+
+A sample of a remediation task that completed successfully:
+
+```json
+{
+ "id": "/subscriptions/{subId}/resourceGroups/ExemptRG/providers/Microsoft.PolicyInsights/remediations/remediateNotCompliant",
+ "Type": "Microsoft.PolicyInsights/remediations",
+ "Name": "remediateNotCompliant",
+ "PolicyAssignmentId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyAssignments/resourceShouldBeCompliantInit",
+ "policyDefinitionReferenceIds": "requiredTags",
+ "resourceCount": 42,
+ "parallelDeployments": 6,
+ "failureThreshold": {
+ "percentage": 0.1
+ },
+ "ProvisioningState": "Succeeded",
+ "DeploymentSummary": {
+ "TotalDeployments": 42,
+ "SuccessfulDeployments": 42,
+ "FailedDeployments": 0
+ },
+}
+```
+
+## Next steps
+
+- Understand how to [determine causes of non-compliance](../how-to/determine-non-compliance.md).
+- Learn how to [get compliance data](../how-to/get-compliance-data.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Understand how to [react to Azure Policy state change events](./event-overview.md).
+- Learn about the [policy definition structure](./definition-structure.md).
+- Learn about the [policy assignment structure](./assignment-structure.md).
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
This step is only applicable when using [Option 1](#option-1-create-a-remediatio
1. If the remediation task is initiated from an initiative assignment, select the policy to remediate from the drop-down. One **deployIfNotExists** or **modify** policy can be remediated through a single Remediation task at a time.
-1. Optionally modify remediation settings on the **New remediation task** page:
-
- - **Failure Threshold percentage** - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 to 100. By default, the failure threshold is 100%.
- - **Resource Count** - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number is 50,000 resources.
- - **Parallel Deployments** - Determines how many resources to remediate at the same time. The allowed values are 1 to 30 resources at a time. The default value is 10.
-
- > [!NOTE]
- > These settings cannot be changed once the remediation task has started.
+1. Optionally modify remediation settings on the page. For information on what each setting controls, see [remediation task structure](../concepts/remediation-structure.md).
1. On the same page, filter the resources to remediate by using the **Scope** ellipses to pick child resources from where the policy is assigned (including down to the
hdinsight Apache Esp Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-esp-kafka-ssl-encryption-authentication.md
description: Set up TLS encryption for communication between Kafka clients and K
Previously updated : 02/17/2023 Last updated : 04/03/2023 # Set up TLS encryption and authentication for ESP Apache Kafka cluster in Azure HDInsight
Run these steps on the client machine.
### Kafka 2.1 or above > [!Note]
-> Below commands will work if you are either using Kafka user or a custom user which have access to do CRUD operation.
+> The following commands work if you're using either the `kafka` user or a custom user that has access to perform CRUD operations.
:::image type="content" source="./media/apache-esp-kafka-ssl-encryption-authentication/access-to-crud-operation.png" alt-text="Screenshot showing how to provide access CRUD operations." border="true"::: Using Command Line Tool
-1. Create a topic if it doesn't exist already.
+1. Make sure you check the local Kerberos ticket for the custom user that you want to use to submit commands.
+
+1. `klist`
+
+    If a ticket is present, you're good to proceed. Otherwise, generate a Kerberos principal and keytab by using the following commands.
+
+1. `ktutil`
+
+ ```
+ ktutil: addent -password -p espkafkauser@TEST.COM -k 1 -e RC4-HMAC
+ Password for espkafkauser@TEST.COM:
+    ktutil: wkt espkafkauser.keytab
+    ktutil: q
+    kinit -kt espkafkauser.keytab espkafkauser@TEST.COM
+ ```
+
+1. Run `klist` again to check the cached Kerberos ticket.
+1. Create a topic if it doesn't exist already.
   ```bash sudo su kafka -c "/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper <ZOOKEEPER_NODE>:2181 --create --topic topic1 --partitions 2 --replication-factor 2" ```
- To use a keytab, create a JAAS file with the following content. Be sure to point the keyTab property to your keytab file and reference the principal used inside the keytab. Following is a sample JAAS file created and placed in the location in VM: **/home/hdiuser/kafka_client_jaas_keytab.conf**
+    To use a keytab, create a JAAS file with the following content. Be sure to point the `keyTab` property to your keytab file and reference the principal used inside the keytab. Following is a sample JAAS file created and placed in this location on the VM: **/home/sshuser/kafka_client_jaas_keytab.conf**
``` KafkaClient { com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true
- keyTab="/home/hdiuser/espkafkauser.keytab"
+ keyTab="/home/sshuser/espkafkauser.keytab"
principal="espkafkauser@TEST.COM"; }; ```
Using Command Line Tool
1. Open another ssh connection to the client machine and start the console consumer, providing the path to `client-ssl-auth.properties` as a configuration file for the consumer. ```bash
- export KAFKA_OPTS="-Djava.security.auth.login.config=/home/hdiuser/kafka_client_jaas_keytab.conf"
+ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/sshuser/kafka_client_jaas_keytab.conf"
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server <FQDN_WORKER_NODE>:9093 --topic topic1 --consumer.config ~/ssl/client-ssl-auth.properties --from-beginning ```
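
   If you also want to publish test messages over the same authenticated TLS channel, you can start a console producer in the same way. The following is a sketch that reuses the JAAS file and `client-ssl-auth.properties` from the previous steps:

   ```bash
   # Reuse the keytab-based JAAS configuration for the producer session.
   export KAFKA_OPTS="-Djava.security.auth.login.config=/home/sshuser/kafka_client_jaas_keytab.conf"
   /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <FQDN_WORKER_NODE>:9093 --topic topic1 --producer.config ~/ssl/client-ssl-auth.properties
   ```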
-
+
   If you want to use a Java client to perform CRUD operations, use the following GitHub repository: https://github.com/Azure-Samples/hdinsight-kafka-java-get-started/tree/main/DomainJoined-Producer-Consumer-With-TLS
hdinsight Connect Kafka Cluster With Vm In Different Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/connect-kafka-cluster-with-vm-in-different-vnet.md
description: Learn how to connect Apache Kafka cluster with VM in different VNet
Previously updated : 02/16/2023 Last updated : 03/31/2023 # How to connect Kafka cluster with VM in different VNet
This Document lists steps that must be followed to set up connectivity between V
1. Create two different VNets where HDInsight Kafka cluster and VM will be hosted respectively. For more information, see [Create a virtual network using the Azure portal](https://learn.microsoft.com/azure/virtual-network/quick-create-portal) > [!Note]
- > These two VNets must be peered, so that IP addresses of their subnets must not overlap with each other. For more information, see [Create a virtual network using the Azure portal](https://learn.microsoft.com/azure/virtual-network/tutorial-connect-virtual-networks-portal)
+ > These two VNets must be peered, and the IP address spaces of their subnets must not overlap with each other. For more information, see [Connect virtual networks with virtual network peering using the Azure portal](https://learn.microsoft.com/azure/virtual-network/tutorial-connect-virtual-networks-portal)
1. Make sure that the peering status shows as connected.
This Document lists steps that must be followed to set up connectivity between V
**Consumer output:**
- :::image type="content" source="./media/connect-kafka-cluster-with-different-vnet/kafka-consumer-output.png" alt-text="Screenshot showing Kafka producer output." border="true":::
+ :::image type="content" source="./media/connect-kafka-cluster-with-different-vnet/kafka-consumer-output.png" alt-text="Screenshot showing Kafka consumer output." border="true":::
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/convert-data.md
$convert-data takes a [Parameter](http://hl7.org/fhir/parameters.html) resource
"parameter": [ { "name": "inputData",
- "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII\nEVN|A01|20200508131015|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^PRSNL^^^ORGDR|\nPID|1|3735064194^^^SIMULATOR MRN^MRN|3735064194^^^SIMULATOR MRN^MRN~2021051528^^^NHSNBR^NHSNMBR||Kinmonth^Joanna^Chelsea^^Ms^^CURRENT||19870624000000|F|||89 Transaction House^Handmaiden Street^Wembley^^FV75 4GJ^GBR^HOME||020 3614 5541^HOME|||||||||C^White - Other^^^||||||||\nPD1|||FAMILY PRACTICE^^12345|\nPV1|1|I|OtherWard^MainRoom^Bed 183^Simulated Hospital^^BED^Main Building^4|28b|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^PRSNL^^^ORGDR|||CAR|||||||||16094728916771313876^^^^visitid||||||||||||||||||||||ARRIVED|||20200508131015||"
+ "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII\nEVN|A01|20200508131015|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^D^^^ORGDR|\nPID|1|3735064194^^^SIMULATOR MRN^MRN|3735064194^^^SIMULATOR MRN^MRN~2021051528^^^NHSNBR^NHSNMBR||Kinmonth^Joanna^Chelsea^^Ms^^D||19870624000000|F|||89 Transaction House^Handmaiden Street^Wembley^^FV75 4GJ^GBR^HOME||020 3614 5541^PRN|||||||||C^White - Other^^^||||||||\nPD1|||FAMILY PRACTICE^^12345|\nPV1|1|I|OtherWard^MainRoom^Bed 183^Simulated Hospital^^BED^Main Building^4|28b|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^D^^^ORGDR|||CAR|||||||||16094728916771313876^^^^visitid||||||||||||||||||||||ARRIVED|||20200508131015||"
}, { "name": "inputDataType",
healthcare-apis Convert Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/convert-data.md
A `$convert-data` API call packages the health data for conversion inside a JSON
"parameter": [ { "name": "inputData",
- "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII\nEVN|A01|20200508131015|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^PRSNL^^^ORGDR|\nPID|1|3735064194^^^SIMULATOR MRN^MRN|3735064194^^^SIMULATOR MRN^MRN~2021051528^^^NHSNBR^NHSNMBR||Kinmonth^Joanna^Chelsea^^Ms^^CURRENT||19870624000000|F|||89 Transaction House^Handmaiden Street^Wembley^^FV75 4GJ^GBR^HOME||020 3614 5541^HOME|||||||||C^White - Other^^^||||||||\nPD1|||FAMILY PRACTICE^^12345|\nPV1|1|I|OtherWard^MainRoom^Bed 183^Simulated Hospital^^BED^Main Building^4|28b|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^PRSNL^^^ORGDR|||CAR|||||||||16094728916771313876^^^^visitid||||||||||||||||||||||ARRIVED|||20200508131015||"
+ "valueString": "MSH|^~\\&|SIMHOSP|SFAC|RAPP|RFAC|20200508131015||ADT^A01|517|T|2.3|||AL||44|ASCII\nEVN|A01|20200508131015|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^D^^^ORGDR|\nPID|1|3735064194^^^SIMULATOR MRN^MRN|3735064194^^^SIMULATOR MRN^MRN~2021051528^^^NHSNBR^NHSNMBR||Kinmonth^Joanna^Chelsea^^Ms^^D||19870624000000|F|||89 Transaction House^Handmaiden Street^Wembley^^FV75 4GJ^GBR^HOME||020 3614 5541^PRN|||||||||C^White - Other^^^||||||||\nPD1|||FAMILY PRACTICE^^12345|\nPV1|1|I|OtherWard^MainRoom^Bed 183^Simulated Hospital^^BED^Main Building^4|28b|||C005^Whittingham^Sylvia^^^Dr^^^DRNBR^D^^^ORGDR|||CAR|||||||||16094728916771313876^^^^visitid||||||||||||||||||||||ARRIVED|||20200508131015||"
}, { "name": "inputDataType",
industry Install Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/install-azure-farmbeats.md
This article describes how to install Azure FarmBeats in your Azure subscription.
-Azure FarmBeats is a business-to-business offering available in Azure Marketplace. It enables aggregation of agriculture data sets across providers and generation of actionable insights. Azure FarmBeats does this by enabling you to build artificial intelligence (AI) or machine learning (ML) models based on fused data sets. The two main components of Azure FarmBeats are:
+Azure FarmBeats is a business-to-business offering available in Azure Marketplace. It enables aggregation of agriculture data sets across providers and generation of actionable insights. Azure FarmBeats does so by enabling you to build artificial intelligence (AI) or machine learning (ML) models based on fused data sets. The two main components of Azure FarmBeats are:
+
+> [!NOTE]
+> Azure FarmBeats is on the path to retirement. We've built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
- **Data hub**: An API layer that enables aggregation, normalization, and contextualization of various agriculture data sets across different providers.
You'll need to complete the following steps before you start the actual installa
You'll need the following permissions in the Azure tenant to install Azure FarmBeats: -- Tenant - AAD app creator
+- Tenant - Azure AD app creator
- Subscription - Owner - Resource Group in which FarmBeats is being installed - Owner
-The first two permissions are needed for [creating the AAD application](#create-an-aad-application) step. If needed, you can get someone with the appropriate permissions to create the AAD application.
+The first two permissions are needed for the [create an Azure AD application](#create-an-aad-application) step. If needed, you can ask someone with the appropriate permissions to create the Azure AD application.
The person running the FarmBeats install from Azure Marketplace needs to be an owner of the Resource Group in which FarmBeats is being installed. For subscription owners, this happens automatically when the Resource Group is created. For others, pre-create the Resource Group and ask the Subscription owner to make you an owner of it.
Make a note of the **Azure Subscription ID** and the **Azure Region**.
### Create an AAD application
-Azure FarmBeats require Azure Active Directory application creation and registration. To successfully run the AAD creation script, the following permissions are needed:
+Azure FarmBeats requires Azure Active Directory application creation and registration. To successfully run the Azure AD creation script, the following permissions are needed:
-- Tenant - AAD app creator
+- Tenant - Azure AD app creator
- Subscription - Owner Run the following steps in a Cloud Shell instance using the PowerShell environment. First-time users will be prompted to select a subscription and create a storage account. Complete the setup as instructed.
-1. Download the AAD app generator script
+1. Download the Azure AD app generator script
```azurepowershell-interactive wget -q https://aka.ms/FarmBeatsAADScript -O ./create_aad_script.ps1
Run the following steps in a Cloud Shell instance using the PowerShell environme
cd ```
-3. Run the AAD script
+3. Run the Azure AD script
```azurepowershell-interactive ./create_aad_script.ps1
Run the following steps in a Cloud Shell instance using the PowerShell environme
4. The script asks for the following three inputs:
- - **FarmBeats Website Name**: This is the unique URL prefix for your FarmBeats web application. In case the prefix is already taken, the script will error out. Once installed, your FarmBeats deployment will be accessible from https://\<FarmBeats-website-name>.azurewebsites.net and the swagger APIs will be at https://\<FarmBeats-website-name>-api.azurewebsites.net
+ - **FarmBeats Website Name** is the unique URL prefix for your FarmBeats web application. If the prefix is already taken, the script errors out. Once installed, your FarmBeats deployment is accessible from https://\<FarmBeats-website-name>.azurewebsites.net and the swagger APIs are at https://\<FarmBeats-website-name>-api.azurewebsites.net
 - **Azure login ID**: Provide the Azure login ID for the user who you want to add as an admin of FarmBeats. This user can then grant other users access to the FarmBeats web application. The login ID is generally of the form john.doe@domain.com. Azure UPN is also supported. - **Subscription ID**: This is the ID of the subscription in which you want to install Azure FarmBeats
-5. The AAD script takes around 2 minutes to run and outputs values on screen as well as to a json file in the same directory. If you had someone else run the script, ask them to share this output with you.
+5. The Azure AD script takes around 2 minutes to run and outputs values on screen as well as to a json file in the same directory. If you had someone else run the script, ask them to share this output with you.
### Create Sentinel account
You're now ready to install FarmBeats. Follow the steps below to start the insta
![Basics Tab](./media/install-azure-farmbeats/create-azure-farmbeats-basics.png)
-6. Copy the individual entries from the output of [AAD script](#create-an-aad-application) to the inputs in the AAD application section.
+6. Copy the individual entries from the output of [Azure AD script](#create-an-aad-application) to the inputs in the Azure AD application section.
7. Enter the [Sentinel account](#create-sentinel-account) user name and password in the Sentinel Account section. Select **Next** to move to the **Review + Create** tab.
industry Overview Azure Farmbeats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/overview-azure-farmbeats.md
Azure FarmBeats is a business-to-business offering available in Azure Marketplace. It enables aggregation of agriculture data sets across providers. Azure FarmBeats enables you to build artificial intelligence (AI) or machine learning (ML) models based on fused data sets. By using Azure FarmBeats, agriculture businesses can focus on core value-adds instead of the undifferentiated heavy lifting of data engineering. > [!NOTE]
-> We have built a new PaaS version of Azure FarmBeats as a fully managed service and currently in private preview. For more information on trying out the new Azure FarmBeats, write to us at FarmBeatsSupport@microsoft.com.
+> Azure FarmBeats is on the path to retirement. We've built a new agriculture-focused service named Azure Data Manager for Agriculture, and it's now available as a preview service. For more information, see the public documentation [**here**](../../data-manager-for-agri/overview-azure-data-manager-for-agriculture.md) or write to us at madma@microsoft.com.
![Project Farm Beats](./media/architecture-for-farmbeats/farmbeats-architecture-1.png)
With the preview of Azure FarmBeats you can:
- Build or augment your digital agriculture solution by providing farm health advisories. > [!NOTE]
-> Azure FarmBeats is currently in public preview. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Azure FarmBeats is provided without a service level agreement. Use the [Azure FarmBeats Support Forum](/answers/topics/azure-farmbeats.html) for support.
+> Azure FarmBeats is currently in public preview. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Azure FarmBeats is provided without a service level agreement.
-## Datahub
+## Data hub
-The Azure FarmBeats Datahub is an API layer, which enables aggregation, normalization, and contextualization of various agriculture datasets across providers. You can use Azure FarmBeats to get:
+The Azure FarmBeats Data hub is an API layer that enables aggregation, normalization, and contextualization of various agriculture datasets across providers. You can use Azure FarmBeats to get:
- **Sensor data** from two sensor providers [Davis Instruments](https://www.davisinstruments.com/products/enviromonitor-gateway-us-lte), [Teralytic](https://teralytic.com/), [Pessl Instruments](https://metos.at/) - **Satellite imagery** from European Space Agency's [Sentinel-2](https://sentinel.esa.int/web/sentinel/home) satellite mission - **Drone imagery** from three drone imagery providers [senseFly](https://www.sensefly.com/) , [SlantRange](https://slantrange.com/) , [DJI](https://dji.com/)
-Datahub is designed as an extensible API platform. We are working with many more providers to integrate with Azure FarmBeats, so you have more choice while building your solution.
+Data hub is designed as an extensible API platform. We're working with many more providers to integrate with Azure FarmBeats, so you have more choice while building your solution.
## Accelerator
-The Azure FarmBeats Accelerator is a sample web application, that is built on top of Datahub. The Accelerator jump-starts your user interface and model development. The Azure FarmBeats accelerator uses Azure FarmBeats' APIs. It visualizes ingested sensor data as charts and model outputs as maps. For example, you can use the accelerator to create a farm quickly and get a vegetation index map or a sensor placement map for that farm easily.
+The Azure FarmBeats Accelerator is a sample web application built on top of Data hub. The Accelerator jump-starts your user interface and model development. The Azure FarmBeats Accelerator uses Azure FarmBeats APIs. It visualizes ingested sensor data as charts and model outputs as maps. For example, you can use the Accelerator to quickly create a farm and easily get a vegetation index map or a sensor placement map for that farm.
## Azure role-based access control (Azure RBAC)
An administrator can add multiple partners as data providers to Azure FarmBeats.
## Resources
-Azure FarmBeats is offered at no additional charge and you pay only for the Azure resources you use. You can use the below resources to know more about the offering:
+Azure FarmBeats is offered at no extra charge, and you pay only for the Azure resources you use. Use the following resources to learn more about the offering:
- Stay informed about the latest Azure FarmBeats news by visiting our [Azure FarmBeats blog](https://aka.ms/farmbeatsblog). - Seek help by posting a question on our [Azure FarmBeats support forum](/answers/topics/azure-farmbeats.html).
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
If the device gets any of the following errors when it connects, it should use a
To learn more about device error codes, see [Troubleshooting device connections](troubleshoot-connection.md).
+To learn more about implementing automatic reconnections, see [Manage device reconnections to create resilient applications](../../iot-develop/concepts-manage-device-reconnections.md).
+ ### Test failover capabilities The Azure CLI lets you test the failover capabilities of your device code. The CLI command works by temporarily switching a device registration to a different internal IoT hub. To verify the device failover worked, check that the device still sends telemetry and responds to commands.
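
As a hedged sketch, a manual failover test might look like the following; it assumes the `az iot central device manual-failover` command and placeholder app and device IDs:

```azurecli
# Temporarily move the device registration to a different internal IoT hub,
# then watch whether the device reconnects and keeps sending telemetry.
az iot central device manual-failover --app-id {yourAppId} --device-id {yourDeviceId} --ttl-minutes 10
```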
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
- Title: Export data from Azure IoT Central (legacy) | Microsoft Docs
-description: How to export data from your Azure IoT Central application to Azure Event Hubs, Azure Service Bus, and Azure Blob storage
--- Previously updated : 06/20/2022---
-# Export IoT data to cloud destinations using data export (legacy)
-
-The legacy data export (classic) feature is now deprecated and you should plan to migrate to the new [data export feature](howto-export-to-blob-storage.md). The legacy data export lacks important capabilities such as the availability of different data types, filtering, and message transformation. See the following table for a comparison of legacy data export with new data export:
-
-| Capability | Legacy data export (classic) | New data export |
-| :- | :- | :-- |
-| Available data types | Telemetry, devices, device templates | Telemetry, property changes, device connectivity changes, device lifecycle changes, device template lifecycle changes |
-| Filtering | None | Depends on the data type exported. For telemetry, filtering by telemetry, message properties, property values |
-| Enrichments | None | Enrich with a custom string or a property value on the device |
-| Transforms| None | Transform the export message to your desired shape |
-| Destinations | Azure Event Hubs, Azure Service Bus queues and topics, Azure Blob Storage | Same as for legacy data export plus Azure Data Explorer and webhooks|
-| Notable limits | Five exports per app, one destination per export | 10 export-destination connections per app |
-
-## Migration considerations
-
-To migrate a legacy data export (classic) to new data export, you should:
-
-1. Use a test IoT Central application and create a new data export with the same data type and destination. You can optionally use the enrichments and data transformation functionality to make your export message shape similar to the message shape from your legacy data export.
-1. When you've tested your new data export and are ready to go to production, ensure any workflows or dependencies on your active legacy data exports are safely removed.
-1. Create your new data exports in your production environments and verify that the export messages are meeting your requirements. You can then add any workflows or dependencies to your new data export.
-1. After you've successfully migrated all your legacy data exports to new data exports, you can delete the legacy data exports.
-
-### Data type migration considerations
-
-The default data format varies for data types between legacy data export and new data export. For more information, see [data formats for new data export](howto-export-data.md#data-formats) and [data formats for legacy data export](#export-contents-and-format). When you migrate to the new data export, you should remove any dependencies on data format of your legacy data export. However, if you have strong dependencies or workflows tied to your legacy data exports then the following considerations can help address any migration challenges.
-
-Telemetry: If you choose to match the legacy data export format for your telemetry in your new data export, you can use the transform functionality and build a transformation query similar to the following example:
-
-```jq
-.telemetry | map({ key: .name, value: .value }) | from_entries
-```
-
-Devices: If you're currently using legacy data exports with the devices data type then you can use both the property changes and device lifecycle events data types in new export to export the same data. You can achieve a comparable data structure using the following transformation query on both data types:
-
-```jq
-approved: .device.approved,
-provisioned: .device.provisioned,
-simulated: .device.simulated,
-cloudProperties: .device.cloudProperties | map({ key: .name, value: .value }) | from_entries,
-displayName: .device.name,
-id: .device.id,
-instanceOf: .device.templateId,
-properties: .device.properties.reported | map({ key: .name, value: .value }) | from_entries
-```
-
-Device templates: If you're currently using legacy data exports with the device templates data type, then you can obtain the same data using the [Device Templates - Get API call](/rest/api/iotcentral/2022-07-31dataplane/device-templates/get).
-
-### Destination migration considerations
-
-In the new data export, you can create a destination and reuse it across different data exports. When you migrate from legacy data exports, you should create destinations in the new data exports that store information on your existing legacy data export destinations.
-
-> [!Note]
-> The new data export doesn't support exporting non-valid JSON messages.
-
-## Export IoT data to cloud destinations (legacy)
-
-> [!Note]
-> This article describes the legacy data export features in IoT Central
->
-> - Legacy data exports (classic) are scheduled to be retired. Migrate any legacy data exports to new exports
->
-> - For information about the latest data export features, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
--
-This article describes how to use the data export feature in Azure IoT Central. This feature lets you export your data continuously to **Azure Event Hubs**, **Azure Service Bus**, or **Azure Blob storage** instances. Data export uses the JSON format and can include telemetry, device information, and device template information. Use the exported data for:
--- Warm-path insights and analytics. This option includes triggering custom rules in Azure Stream Analytics, triggering custom workflows in Azure Logic Apps, or passing it through Azure Functions to be transformed.-- Cold-path analytics such as training models in Azure Machine Learning or long-term trend analysis in Microsoft Power BI.-
-> [!Note]
-> When you turn on data export, you get only the data from that moment onward. Currently, data can't be retrieved for a time when data export was off. To retain more historical data, turn on data export early.
-
-## Prerequisites
-
-You must be an administrator in your IoT Central application, or have Data export permissions.
-
-## Set up export destination
-
-Your export destination must exist before you configure your data export.
-
-### Create Event Hubs namespace
-
-If you don't have an existing Event Hubs namespace to export to, follow these steps:
-
-1. Create a [new Event Hubs namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.EventHub). You can learn more in [Azure Event Hubs docs](../../event-hubs/event-hubs-create.md).
-
-2. Choose a subscription. You can export data to other subscriptions that aren't in the same subscription as your IoT Central application. You connect using a connection string in this case.
-
-3. Create an event hub in your Event Hubs namespace. Go to your namespace, and select **+ Event Hub** at the top to create an event hub instance.
-
-### Create Service Bus namespace
-
-If you don't have an existing Service Bus namespace to export to, follow these steps:
-
-1. Create a [new Service Bus namespace in the Azure portal](https://portal.azure.com/#create/Microsoft.ServiceBus.1.0.5). You can learn more in [Azure Service Bus docs](../../service-bus-messaging/service-bus-create-namespace-portal.md).
-2. Choose a subscription. You can export data to other subscriptions that aren't in the same subscription as your IoT Central application. You connect using a connection string in this case.
-
-3. To create a queue or topic to export to, go to your Service Bus namespace, and select **+ Queue** or **+ Topic**.
-
-When you choose Service Bus as an export destination, the queues and topics must not have Sessions or Duplicate Detection enabled. If either of those options are enabled, some messages won't arrive in your queue or topic.
-
-### Create storage account
-
-If you don't have an existing Azure storage account to export to, follow these steps:
-
-1. Create a [new storage account in the Azure portal](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You can learn more about creating new [Azure Blob storage accounts](../../storage/blobs/storage-quickstart-blobs-portal.md) or [Azure Data Lake Storage v2 storage accounts](../../storage/common/storage-account-create.md). Data export can only write data to storage accounts that support block blobs. The following list shows the known compatible storage account types:
-
- |Performance Tier|Account Type|
- |-|-|
- |Standard|General Purpose V2|
- |Standard|General Purpose V1|
- |Standard|Blob storage|
- |Premium|Block Blob storage|
-
-2. Create a container in your storage account. Go to your storage account. Under **Blob Service**, select **Browse Blobs**. Select **+ Container** at the top to create a new container.
-
-## Set up data export
-
-Now that you have a destination to export data to, follow these steps to set up data export.
-
-1. Sign in to your IoT Central application.
-
-2. In the left pane, select **Data export**.
-
- > [!Tip]
- > If you don't see **Data export** in the left pane, then you don't have permissions to configure data export in your app. Talk to an administrator to set up data export.
-
-3. Select the **+ New** button. Choose one of **Azure Blob Storage**, **Azure Event Hubs**, **Azure Service Bus Queue**, or **Azure Service Bus Topic** as the destination of your export. The maximum number of exports per application is five.
-
-4. Enter a name for the export. In the drop-down list box, select your **namespace**, or **Enter a connection string**.
-
- > [!Tip]
- > You only see storage accounts, Event Hubs namespaces, and Service Bus namespaces in the same subscription as your IoT Central application. If you want to export to a destination outside of this subscription, choose **Enter a connection string** and see step 6.
-
- ![Create new Event Hub](media/howto-export-data-legacy/export-event-hub.png)
-
-5. Choose an event hub, queue, topic, or container from the drop-down list box.
-
-6. (Optional) If you chose **Enter a connection string**, a new box appears for you to paste your connection string. To get the connection string for your:
-
- - Event Hubs or Service Bus, go to the namespace in the Azure portal:
- - To use a connection string for the entire namespace:
- 1. Under **Settings**, select **Shared Access Policies**
- 2. Create a new key or choose an existing key that has **Send** permissions.
- 3. Copy either the primary or secondary connection string
- - To use connection string for a specific event hub instance or Service Bus queue or topic, go to **Entities > Event Hubs** or **Entities > Queues** or **Entities > Topics**. Choose a specific instance, and follow the same steps above to get a connection string.
- - Storage account, go to the storage account in the Azure portal:
- - Only connection strings for the entire storage account are supported. Connection strings scoped to a single container aren't supported.
- 1. Under **Settings**, select **Access keys**
- 2. Copy either the key1 connection string or the key2 connection string
-
- Paste in the connection string. Type in the instance or case-sensitive **container name**.
-
-7. Under **Data to export**, choose the types of data to export by setting the type to **On**.
-
-8. To turn on data export, make sure the **Enabled** toggle is **On**. Select **Save**.
-
-9. After a few minutes, your data appears in your chosen destination.
-
-## Export contents and format
-
-Exported telemetry data contains the entirety of the message your devices sent to IoT Central, not just the telemetry values themselves. Exported devices data contains changes to properties and metadata of all devices, and exported device templates contains changes to all device templates.
-
-For Event Hubs and Service Bus, data is exported in near-realtime. The data is in the `body` property and is in JSON format. See below for examples.
-
-For Blob storage, data is exported once per minute, with each file containing the batch of changes since the last exported file. Exported data is placed in three folders in JSON format. The default paths in your storage account are:
--- Telemetry: _{container}/{app-id}/telemetry/{YYYY}/{MM}/{dd}/{hh}/{mm}/{filename}_-- Devices: _{container}/{app-id}/devices/{YYYY}/{MM}/{dd}/{hh}/{mm}/{filename}_-- Device templates: _{container}/{app-id}/deviceTemplates/{YYYY}/{MM}/{dd}/{hh}/{mm}/{filename}_-
-To browse the exported files in the Azure portal, navigate to the file and select the **Edit blob** tab.
-
-## Telemetry
-
-For Event Hubs and Service Bus, IoT Central exports a new message quickly after it receives the message from a device. Each exported message contains the full message the device sent in the body property in JSON format.
-
-For Blob storage, messages are batched and exported once per minute. The exported files use the same format as the message files exported by [IoT Hub message routing](../../iot-hub/tutorial-routing.md) to blob storage.
-
-> [!NOTE]
-> For Blob storage, ensure that your devices are sending messages that have `contentType: application/JSON` and `contentEncoding:utf-8` (or `utf-16`, `utf-32`). See the [IoT Hub documentation](../../iot-hub/iot-hub-devguide-routing-query-syntax.md#query-based-on-message-body) for an example.
-
-The device that sent the telemetry is represented by the device ID (see the following sections). To get the names of the devices, export device data and correlate each message by using the **connectionDeviceId** that matches the **deviceId** of the device message.
-
-The following example shows a message received from an event hub or Service Bus queue or topic:
-
-```json
-{
- "temp":81.129693132351775,
- "humid":59.488071477541247,
- "EventProcessedUtcTime":"2020-04-07T09:41:15.2877981Z",
- "PartitionId":0,
- "EventEnqueuedUtcTime":"2020-04-07T09:38:32.7380000Z"
-}
-```
-
-This message doesn't include the device ID of the sending device.
-
-To retrieve the device ID from the message data in an Azure Stream Analytics query, use the [GetMetadataPropertyValue](/stream-analytics-query/getmetadatapropertyvalue) function. For an example, see the query in [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](./howto-create-custom-rules.md).
-
-To retrieve the device ID in an Azure Databricks or Apache Spark workspace, use [systemProperties](https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/structured-streaming-eventhubs-integration.md). For an example, see the Databricks workspace in [Extend Azure IoT Central with custom analytics using Azure Databricks](./howto-create-custom-analytics.md).
-
-The following example shows a record exported to blob storage:
-
-```json
-{
- "EnqueuedTimeUtc":"2019-09-26T17:46:09.8870000Z",
- "Properties":{
-
- },
- "SystemProperties":{
- "connectionDeviceId":"<deviceid>",
- "connectionAuthMethod":"{\"scope\":\"device\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}",
- "connectionDeviceGenerationId":"637051167384630591",
- "contentType":"application/json",
- "contentEncoding":"utf-8",
- "enqueuedTime":"2019-09-26T17:46:09.8870000Z"
- },
- "Body":{
- "temp":49.91322758395974,
- "humid":49.61214852573155,
- "pm25":25.87332214661367
- }
-}
-```
-
-## Devices
-
-Each message or record in a snapshot represents one or more changes to a device and its device and cloud properties since the last exported message. The message includes the:
--- `id` of the device in IoT Central-- `displayName` of the device-- Device template ID in `instanceOf`-- `simulated` flag, true if the device is a simulated device-- `provisioned` flag, true if the device has been provisioned-- `approved` flag, true if the device has been approved to send data-- Property values-- `properties` including device and cloud properties values-
-Deleted devices aren't exported. Currently, there are no indicators in exported messages for deleted devices.
-
-For Event Hubs and Service Bus, IoT Central sends messages containing device data to your event hub or Service Bus queue or topic in near real time.
-
-For Blob storage, a new snapshot containing all the changes since the last one written is exported once per minute.
-
-The following example message shows information about devices and properties data in an event hub or Service Bus queue or topic:
-
-```json
-{
- "body":{
- "id": "<device Id>",
- "etag": "<etag>",
- "displayName": "Sensor 1",
- "instanceOf": "<device template Id>",
- "simulated": false,
- "provisioned": true,
- "approved": true,
- "properties": {
- "sensorComponent": {
- "setTemp": "30",
- "fwVersion": "2.0.1",
- "status": { "first": "first", "second": "second" },
- "$metadata": {
- "setTemp": {
- "desiredValue": "30",
- "desiredVersion": 3,
- "desiredTimestamp": "2020-02-01T17:15:08.9284049Z",
- "ackVersion": 3
- },
- "fwVersion": { "ackVersion": 3 },
- "status": {
- "desiredValue": {
- "first": "first",
- "second": "second"
- },
- "desiredVersion": 2,
- "desiredTimestamp": "2020-02-01T17:15:08.9284049Z",
- "ackVersion": 2
- }
- },
-
- }
- },
- "installDate": { "installDate": "2020-02-01" }
-},
- "annotations":{
- "iotcentral-message-source":"devices",
- "x-opt-partition-key":"<partitionKey>",
- "x-opt-sequence-number":39740,
- "x-opt-offset":"<offset>",
- "x-opt-enqueued-time":1539274959654
- },
- "partitionKey":"<partitionKey>",
- "sequenceNumber":39740,
- "enqueuedTimeUtc":"2020-02-01T18:14:49.3820326Z",
- "offset":"<offset>"
-}
-```
-
-This snapshot is an example message that shows devices and properties data in Blob storage. Exported files contain a single line per record.
-
-```json
-{
- "id": "<device Id>",
- "etag": "<etag>",
- "displayName": "Sensor 1",
- "instanceOf": "<device template Id>",
- "simulated": false,
- "provisioned": true,
- "approved": true,
- "properties": {
- "sensorComponent": {
- "setTemp": "30",
- "fwVersion": "2.0.1",
- "status": { "first": "first", "second": "second" },
- "$metadata": {
- "setTemp": {
- "desiredValue": "30",
- "desiredVersion": 3,
- "desiredTimestamp": "2020-02-01T17:15:08.9284049Z",
- "ackVersion": 3
- },
- "fwVersion": { "ackVersion": 3 },
- "status": {
- "desiredValue": {
- "first": "first",
- "second": "second"
- },
- "desiredVersion": 2,
- "desiredTimestamp": "2020-02-01T17:15:08.9284049Z",
- "ackVersion": 2
- }
- },
-
- }
- },
- "installDate": { "installDate": "2020-02-01" }
-}
-```
-
-## Device templates
-
-Each message or snapshot record represents one or more changes to a published device template since the last exported message. Information sent in each message or record includes:
--- `id` of the device template that matches the `instanceOf` of the devices stream above-- `displayName` of the device template-- The device `capabilityModel` including its `interfaces`, and the telemetry, properties, and commands definitions-- `cloudProperties` definitions-- Overrides and initial values, inline with the `capabilityModel`-
-Deleted device templates aren't exported. Currently, there are no indicators in exported messages for deleted device templates.
-
-For Event Hubs and Service Bus, IoT Central sends messages containing device template data to your event hub or Service Bus queue or topic in near real time.
-
-For Blob storage, a new snapshot containing all the changes since the last one written is exported once per minute.
-
-This example shows a message about device templates data in event hub or Service Bus queue or topic:
-
-```json
-{
- "body":{
- "id": "<device template id>",
- "etag": "<etag>",
- "types": ["DeviceModel"],
- "displayName": "Sensor template",
- "capabilityModel": {
- "@id": "<capability model id>",
- "@type": ["CapabilityModel"],
- "contents": [],
- "implements": [
- {
- "@id": "<component Id>",
- "@type": ["InterfaceInstance"],
- "name": "sensorComponent",
- "schema": {
- "@id": "<interface Id>",
- "@type": ["Interface"],
- "displayName": "Sensor interface",
- "contents": [
- {
- "@id": "<id>",
- "@type": ["Telemetry"],
- "displayName": "Humidity",
- "name": "humidity",
- "schema": "double"
- },
- {
- "@id": "<id>",
- "@type": ["Telemetry", "SemanticType/Event"],
- "displayName": "Error event",
- "name": "error",
- "schema": "integer"
- },
- {
- "@id": "<id>",
- "@type": ["Property"],
- "displayName": "Set temperature",
- "name": "setTemp",
- "writable": true,
- "schema": "integer",
- "unit": "Units/Temperature/fahrenheit",
- "initialValue": "30"
- },
- {
- "@id": "<id>",
- "@type": ["Property"],
- "displayName": "Firmware version read only",
- "name": "fwversion",
- "schema": "string"
- },
- {
- "@id": "<id>",
- "@type": ["Property"],
- "displayName": "Display status",
- "name": "status",
- "writable": true,
- "schema": {
- "@id": "urn:testInterface:status:obj:ka8iw8wka:1",
- "@type": ["Object"]
- }
- },
- {
- "@id": "<id>",
- "@type": ["Command"],
- "request": {
- "@id": "<id>",
- "@type": ["SchemaField"],
- "displayName": "Configuration",
- "name": "config",
- "schema": "string"
- },
- "response": {
- "@id": "<id>",
- "@type": ["SchemaField"],
- "displayName": "Response",
- "name": "response",
- "schema": "string"
- },
- "displayName": "Configure sensor",
- "name": "sensorConfig"
- }
- ]
- }
- }
- ],
- "displayName": "Sensor capability model"
- },
- "solutionModel": {
- "@id": "<id>",
- "@type": ["SolutionModel"],
- "cloudProperties": [
- {
- "@id": "<id>",
- "@type": ["CloudProperty"],
- "displayName": "Install date",
- "name": "installDate",
- "schema": "dateTime",
- "valueDetail": {
- "@id": "<id>",
- "@type": ["ValueDetail/DateTimeValueDetail"]
- }
- }
- ]
- }
- },
- "annotations":{
- "iotcentral-message-source":"deviceTemplates",
- "x-opt-partition-key":"<partitionKey>",
- "x-opt-sequence-number":25315,
- "x-opt-offset":"<offset>",
- "x-opt-enqueued-time":1539274985085
- },
- "partitionKey":"<partitionKey>",
- "sequenceNumber":25315,
- "enqueuedTimeUtc":"2019-10-02T16:23:05.085Z",
- "offset":"<offset>"
- }
-}
-```
-
-This example snapshot shows a message that contains device and properties data in Blob storage. Exported files contain a single line per record.
-
-```json
-{
- "id": "<device template id>",
- "etag": "<etag>",
- "types": ["DeviceModel"],
- "displayName": "Sensor template",
- "capabilityModel": {
- "@id": "<capability model id>",
- "@type": ["CapabilityModel"],
- "contents": [],
- "implements": [
- {
- "@id": "<component Id>",
- "@type": ["InterfaceInstance"],
- "name": "Sensor component",
- "schema": {
- "@id": "<interface Id>",
- "@type": ["Interface"],
- "displayName": "Sensor interface",
- "contents": [
- {
- "@id": "<id>",
- "@type": ["Telemetry"],
- "displayName": "Humidity",
- "name": "humidity",
- "schema": "double"
- },
- {
- "@id": "<id>",
- "@type": ["Telemetry", "SemanticType/Event"],
- "displayName": "Error event",
- "name": "error",
- "schema": "integer"
- },
- {
- "@id": "<id>",
- "@type": ["Property"],
- "displayName": "Set temperature",
- "name": "setTemp",
- "writable": true,
- "schema": "integer",
- "unit": "Units/Temperature/fahrenheit",
- "initialValue": "30"
- },
- {
- "@id": "<id>",
- "@type": ["Property"],
- "displayName": "Firmware version read only",
- "name": "fwversion",
- "schema": "string"
- },
- {
- "@id": "<id>",
- "@type": ["Property"],
- "displayName": "Display status",
- "name": "status",
- "writable": true,
- "schema": {
- "@id": "urn:testInterface:status:obj:ka8iw8wka:1",
- "@type": ["Object"]
- }
- },
- {
- "@id": "<id>",
- "@type": ["Command"],
- "request": {
- "@id": "<id>",
- "@type": ["SchemaField"],
- "displayName": "Configuration",
- "name": "config",
- "schema": "string"
- },
- "response": {
- "@id": "<id>",
- "@type": ["SchemaField"],
- "displayName": "Response",
- "name": "response",
- "schema": "string"
- },
- "displayName": "Configure sensor",
- "name": "sensorconfig"
- }
- ]
- }
- }
- ],
- "displayName": "Sensor capability model"
- },
- "solutionModel": {
- "@id": "<id>",
- "@type": ["SolutionModel"],
- "cloudProperties": [
- {
- "@id": "<id>",
- "@type": ["CloudProperty"],
- "displayName": "Install date",
- "name": "installDate",
- "schema": "dateTime",
- "valueDetail": {
- "@id": "<id>",
- "@type": ["ValueDetail/DateTimeValueDetail"]
- }
- }
- ]
- }
- }
-```
-
-## Data format change notice
-
-> [!Note]
-> The telemetry stream data format is unaffected by this change. Only the devices and device templates streams of data are affected.
-
-If you have an existing data export in your preview application with the *Devices* and *Device templates* streams turned on, update your export by **30 June 2020**. This requirement applies to exports to Azure Blob storage, Azure Event Hubs, and Azure Service Bus.
-
-Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/2022-07-31dataplane/devices/get), [device property](/rest/api/iotcentral/2022-07-31dataplane/devices/get-properties), and [device template](/rest/api/iotcentral/2022-07-31dataplane/device-templates/get) objects in the IoT Central public API.
-
-For **Devices**, notable differences between the old data format and the new data format include:
-- `@id` for device is removed, `deviceId` is renamed to `id` -- `provisioned` flag is added to describe the provisioning status of the device-- `approved` flag is added to describe the approval state of the device-- `properties` including device and cloud properties, matches entities in the public API-
-For **Device templates**, notable differences between the old data format and the new data format include:
--- `@id` for device template is renamed to `id`-- `@type` for the device template is renamed to `types`, and is now an array-
-### Devices (format deprecated as of 3 February 2020)
-
-```json
-{
- "@id":"<id-value>",
- "@type":"Device",
- "displayName":"Airbox",
- "data":{
- "$cloudProperties":{
- "Color":"blue"
- },
- "EnvironmentalSensor":{
- "thsensormodel":{
- "reported":{
- "value":"Neque quia et voluptatem veritatis assumenda consequuntur quod.",
- "$lastUpdatedTimestamp":"2019-09-30T20:35:43.8478978Z"
- }
- },
- "pm25sensormodel":{
- "reported":{
- "value":"Aut alias odio.",
- "$lastUpdatedTimestamp":"2019-09-30T20:35:43.8478978Z"
- }
- }
- },
- "urn_azureiot_DeviceManagement_DeviceInformation":{
- "totalStorage":{
- "reported":{
- "value":27900.9730905171,
- "$lastUpdatedTimestamp":"2019-09-30T20:35:43.8478978Z"
- }
- },
- "totalMemory":{
- "reported":{
- "value":4667.82916715811,
- "$lastUpdatedTimestamp":"2019-09-30T20:35:43.8478978Z"
- }
- }
- }
- },
- "instanceOf":"<template-id>",
- "deviceId":"<device-id>",
- "simulated":true
-}
-```
-
-### Device templates (format deprecated as of 3 February 2020)
-
-```json
-{
- "@id":"<template-id>",
- "@type":"DeviceModelDefinition",
- "displayName":"Airbox",
- "capabilityModel":{
- "@id":"<id>",
- "@type":"CapabilityModel",
- "implements":[
- {
- "@id":"<id>",
- "@type":"InterfaceInstance",
- "name":"EnvironmentalSensor",
- "schema":{
- "@id":"<id>",
- "@type":"Interface",
- "comment":"Requires temperature and humidity sensors.",
- "description":"Provides functionality to report temperature, humidity. Provides telemetry, commands and read-write properties",
- "displayName":"Environmental Sensor",
- "contents":[
- {
- "@id":"<id>",
- "@type":"Telemetry",
- "description":"Current temperature on the device",
- "displayName":"Temperature",
- "name":"temp",
- "schema":"double",
- "unit":"Units/Temperature/celsius",
- "valueDetail":{
- "@id":"<id>",
- "@type":"ValueDetail/NumberValueDetail",
- "minValue":{
- "@value":"50"
- }
- },
- "visualizationDetail":{
- "@id":"<id>",
- "@type":"VisualizationDetail"
- }
- },
- {
- "@id":"<id>",
- "@type":"Telemetry",
- "description":"Current humidity on the device",
- "displayName":"Humidity",
- "name":"humid",
- "schema":"integer"
- },
- {
- "@id":"<id>",
- "@type":"Telemetry",
- "description":"Current PM2.5 on the device",
- "displayName":"PM2.5",
- "name":"pm25",
- "schema":"integer"
- },
- {
- "@id":"<id>",
- "@type":"Property",
- "description":"T&H Sensor Model Name",
- "displayName":"T&H Sensor Model",
- "name":"thsensormodel",
- "schema":"string"
- },
- {
- "@id":"<id>",
- "@type":"Property",
- "description":"PM2.5 Sensor Model Name",
- "displayName":"PM2.5 Sensor Model",
- "name":"pm25sensormodel",
- "schema":"string"
- }
- ]
- }
- },
- {
- "@id":"<id>",
- "@type":"InterfaceInstance",
- "name":"urn_azureiot_DeviceManagement_DeviceInformation",
- "schema":{
- "@id":"<id>",
- "@type":"Interface",
- "displayName":"Device information",
- "contents":[
- {
- "@id":"<id>",
- "@type":"Property",
- "comment":"Total available storage on the device in kilobytes. Ex. 20480000 kilobytes.",
- "displayName":"Total storage",
- "name":"totalStorage",
- "displayUnit":"kilobytes",
- "schema":"long"
- },
- {
- "@id":"<id>",
- "@type":"Property",
- "comment":"Total available memory on the device in kilobytes. Ex. 256000 kilobytes.",
- "displayName":"Total memory",
- "name":"totalMemory",
- "displayUnit":"kilobytes",
- "schema":"long"
- }
- ]
- }
- }
- ],
- "displayName":"AAEONAirbox52"
- },
- "solutionModel":{
- "@id":"<id>",
- "@type":"SolutionModel",
- "cloudProperties":[
- {
- "@id":"<id>",
- "@type":"CloudProperty",
- "displayName":"Color",
- "name":"Color",
- "schema":"string",
- "valueDetail":{
- "@id":"<id>",
- "@type":"ValueDetail/StringValueDetail"
- },
- "visualizationDetail":{
- "@id":"<id>",
- "@type":"VisualizationDetail"
- }
- }
- ]
- }
-}
-```
-
-## Next steps
-
-Now that you know how to export your data to Azure Event Hubs, Azure Service Bus, and Azure Blob storage, continue to the next step:
-
-> [!div class="nextstepaction"]
-> [How to run custom analytics with Databricks](./howto-create-custom-analytics.md)
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
[!INCLUDE [iot-edge-version-1.4](includes/iot-edge-version-1.4.md)]
-This article provides instructions for establishing a trusted connection between downstream devices and IoT Edge transparent gateways. In a transparent gateway scenario, one or more devices can pass their messages through a single gateway device that maintains the connection to IoT Hub.
+Here, you find instructions for establishing a trusted connection between downstream devices and IoT Edge transparent [gateways](iot-edge-as-gateway.md). In a transparent gateway scenario, one or more devices can pass their messages through a single gateway device that maintains the connection to IoT Hub. In this article, the terms *gateway* and *IoT Edge gateway* refer to an IoT Edge device configured as a transparent gateway.
-There are three general steps to set up a successful transparent gateway connection. This article covers the third step:
+>[!NOTE]
+>A downstream device emits data directly to the Internet or to gateway devices (IoT Edge-enabled or not). A child device can be a downstream device or a gateway device in a nested topology.
+
+There are three general steps to set up a successful transparent gateway connection. This article explains the third step.
1. Configure the gateway device as a server so that downstream devices can connect to it securely. Set up the gateway to receive messages from downstream devices and route them to the proper destination. For those steps, see [Configure an IoT Edge device to act as a transparent gateway](how-to-create-transparent-gateway.md).
There are three general steps to set up a successful transparent gateway connect
1. **Connect the downstream device to the gateway device and start sending messages.**
-This article discusses basic concepts for downstream device connections and guides you in setting up your downstream devices by:
+This article helps you understand downstream device connection components, such as:
-* Explaining transport layer security (TLS) and certificate fundamentals.
-* Explaining how TLS libraries work across different operating systems and how each operating system deals with certificates.
-* Walking through Azure IoT samples in several languages to help get you started.
+* Transport layer security (TLS) and certificate fundamentals.
+* TLS libraries across different operating systems, and how each operating system handles certificates differently.
-In this article, the terms *gateway* and *IoT Edge gateway* refer to an IoT Edge device configured as a transparent gateway.
-
->[!NOTE]
->A downstream device emits data directly to the Internet or to gateway devices (IoT Edge-enabled or not). A child device can be a downstream device or a gateway device in a nested topology.
+You then walk through Azure IoT samples in your preferred language to get your device sending messages to the gateway.
## Prerequisites
-* Have the root CA certificate file that was used to generate the device CA certificate in [Configure an IoT Edge device to act as a transparent gateway](how-to-create-transparent-gateway.md) available on your downstream device. Your downstream device uses this certificate to validate the identity of the gateway device. If you used the demo certificates, the root CA certificate is called **azure-iot-test-only.root.ca.cert.pem**.
-* Have the modified connection string that points to the gateway device, as explained in [Authenticate a downstream device to Azure IoT Hub](how-to-authenticate-downstream-device.md).
+Acquire the following to prepare your downstream device:
-## Prepare a downstream device
+* A downstream device.
-A downstream device can be any application or platform that has an identity created with the Azure IoT Hub cloud service. In many cases, these applications use the [Azure IoT device SDK](../iot-hub/iot-hub-devguide-sdks.md). A downstream device could even be an application running on the IoT Edge gateway device itself.
+ This device can be any application or platform that has an identity created with the Azure IoT Hub cloud service. In many cases, these applications use the [Azure IoT device SDK](../iot-hub/iot-hub-devguide-sdks.md). A downstream device can also be an application running on the IoT Edge gateway device itself.
-This article provides the steps for connecting an IoT device as a downstream device. If you have an IoT Edge device as a downstream device, see [Connect a downstream IoT Edge device to an Azure IoT Edge gateway](how-to-connect-downstream-iot-edge-device.md).
+ Later, this article provides the steps for connecting an *IoT* device as a downstream device. If you prefer to use an *IoT Edge* device as a downstream device, see [Connect Azure IoT Edge devices together to create a hierarchy (nested edge)](how-to-connect-downstream-iot-edge-device.md).
->[!NOTE]
->IoT devices registered with IoT Hub can use [module twins](../iot-hub/iot-hub-devguide-module-twins.md) to isolate different processes, hardware, or functions on a single device. IoT Edge gateways support downstream module connections using symmetric key authentication but not X.509 certificate authentication.
+* A root CA certificate file.
-To connect a downstream device to an IoT Edge gateway, you need two things:
+  This file was used to generate the device CA certificate in [Configure an IoT Edge device to act as a transparent gateway](how-to-create-transparent-gateway.md). Make this file available on your downstream device.
-* A device or application that's configured with an IoT Hub device connection string appended with information to connect it to the gateway.
+ Your downstream device uses this certificate to validate the identity of the gateway device. This trusted certificate validates the transport layer security (TLS) connections to the gateway device. See usage details in the [Provide the root CA certificate](#provide-the-root-ca-certificate) section.
- This step was completed in the previous article, [Authenticate a downstream device to Azure IoT Hub](how-to-authenticate-downstream-device.md#retrieve-and-modify-connection-string).
+* A modified connection string that points to the gateway device.
-* The device or application has to trust the gateway's **root CA certificate** to validate the transport layer security (TLS) connections to the gateway device.
+ How to modify your connection string is explained in [Authenticate a downstream device to Azure IoT Hub](how-to-authenticate-downstream-device.md).
- This step is explained in detail in the rest of this article. This step can be performed one of two ways: by installing the CA certificate in the operating system's certificate store, or (for certain languages) by referencing the certificate within applications using the Azure IoT SDKs.
+>[!NOTE]
+>IoT devices registered with IoT Hub can use [module twins](../iot-hub/iot-hub-devguide-module-twins.md) to isolate different processes, hardware, or functions on a single device. IoT Edge gateways support downstream module connections that use symmetric key authentication but not X.509 certificate authentication.
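
For illustration, a modified connection string generally takes the following shape. This is only a sketch with hypothetical host names and key values; the authoritative format is described in the linked authentication article.

```python
# Hypothetical example of a downstream device connection string.
# The GatewayHostName value must match the hostname the gateway presents
# in its server certificate.
CONNECTION_STRING = (
    "HostName=my-hub.azure-devices.net;"
    "DeviceId=my-downstream-device;"
    "SharedAccessKey=<device shared access key>;"
    "GatewayHostName=mygateway.contoso.com"
)
```
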
-## TLS and certificate fundamentals
+## Understand TLS and certificate fundamentals
The challenge of securely connecting downstream devices to IoT Edge is just like any other secure client/server communication that occurs over the internet. A client and a server securely communicate over the internet using [Transport layer security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security). TLS is built using standard [Public key infrastructure (PKI)](https://en.wikipedia.org/wiki/Public_key_infrastructure) constructs called certificates. TLS is a fairly involved specification and addresses a wide range of topics related to securing two endpoints. This section summarizes the concepts relevant for you to securely connect devices to an IoT Edge gateway.
-When a client connects to a server, the server presents a chain of certificates, called the *server certificate chain*. A certificate chain typically comprises a root certificate authority (CA) certificate, one or more intermediate CA certificates, and finally the server's certificate itself. A client establishes trust with a server by cryptographically verifying the entire server certificate chain. This client validation of the server certificate chain is called *server chain validation*. The client challenges the server to prove possession of the private key associated with the server certificate in a process called *proof of possession*. The combination of server chain validation and proof of possession is called *server authentication*. To validate a server certificate chain, a client needs a copy of the root CA certificate that was used to create (or issue) the server's certificate. Normally when connecting to websites, a browser comes pre-configured with commonly used CA certificates so the client has a seamless process.
+When a client connects to a server, the server presents a chain of certificates, called the *server certificate chain*. A certificate chain typically comprises a root certificate authority (CA) certificate, one or more intermediate CA certificates, and finally the server's certificate itself. A client establishes trust with a server by cryptographically verifying the entire server certificate chain. This client validation of the server certificate chain is called *server chain validation*. The client challenges the server to prove possession of the private key associated with the server certificate in a process called *proof of possession*. The combination of server chain validation and proof of possession is called *server authentication*. To validate a server certificate chain, a client needs a copy of the root CA certificate that was used to create (or issue) the server's certificate. Normally when connecting to websites, a browser comes preconfigured with commonly used CA certificates so the client has a seamless process.
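
To make server chain validation concrete, here's a minimal Python sketch (the hostname and file name are hypothetical) in which the TLS handshake succeeds only if the server presents a certificate chain that validates against the supplied root CA:

```python
import socket
import ssl

GATEWAY_HOSTNAME = "mygateway.contoso.com"  # hypothetical gateway hostname
ROOT_CA_FILE = "azure-iot-test-only.root.ca.cert.pem"

# Trust only this root CA; server chain validation fails otherwise.
context = ssl.create_default_context(cafile=ROOT_CA_FILE)

with socket.create_connection((GATEWAY_HOSTNAME, 8883), timeout=10) as sock:
    # wrap_socket performs the TLS handshake, including chain validation
    # and the server's proof of possession of its private key.
    with context.wrap_socket(sock, server_hostname=GATEWAY_HOSTNAME) as tls:
        print("Server authenticated with", tls.version())
```
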
When a device connects to Azure IoT Hub, the device is the client and the IoT Hub cloud service is the server. The IoT Hub cloud service is backed by a root CA certificate called **Baltimore CyberTrust Root**, which is publicly available and widely used. Since the IoT Hub CA certificate is already installed on most devices, many TLS implementations (OpenSSL, Schannel, LibreSSL) automatically use it during server certificate validation. However, a device that successfully connects to IoT Hub may have issues trying to connect to an IoT Edge gateway.
-When a device connects to an IoT Edge gateway, the downstream device is the client and the gateway device is the server. Azure IoT Edge allows you to build gateway certificate chains however they see fit. You may choose to use a public CA certificate, like Baltimore, or use a self-signed (or in-house) root CA certificate. Public CA certificates often have a cost associated with them, so are typically used in production scenarios. Self-signed CA certificates are preferred for development and testing. If you're using the demo certificates, those are self-signed root CA certificates.
+When a device connects to an IoT Edge gateway, the downstream device is the client and the gateway device is the server. Azure IoT Edge allows you to build gateway certificate chains however you see fit. You may choose to use a public CA certificate, like Baltimore, or use a self-signed (or in-house) root CA certificate. Public CA certificates often have a cost associated with them, so are typically used in production scenarios. Self-signed CA certificates are preferred for development and testing. The demo certificates are self-signed root CA certificates.
When you use a self-signed root CA certificate for an IoT Edge gateway, it needs to be installed on or provided to all the downstream devices attempting to connect to the gateway. To learn more about IoT Edge certificates and some production implications, see [IoT Edge certificate usage details](iot-edge-certs.md).

## Provide the root CA certificate
-To verify the gateway device's certificates, the downstream device needs its own copy of the root CA certificate. If you used the scripts provided in the IoT Edge git repository to create test certificates, then the root CA certificate is called **azure-iot-test-only.root.ca.cert.pem**. If you haven't already as part of the other downstream device preparation steps, move this certificate file to any directory on your downstream device. You can use a service like [Azure Key Vault](../key-vault/index.yml) or a function like [Secure copy protocol](https://www.ssh.com/ssh/scp/) to move the certificate file.
+To verify the gateway device's certificates, the downstream device needs its own copy of the root CA certificate. If you used the scripts provided in the IoT Edge git repository to create test certificates, then the root CA certificate is called **azure-iot-test-only.root.ca.cert.pem**.
+
+If you haven't already, move this certificate file to any directory on your downstream device. Applications can then use the certificate in one of two ways: install it in the operating system's certificate store, or (for certain languages) reference it within applications that use the Azure IoT SDKs.
+
+You can use a service like [Azure Key Vault](../key-vault/index.yml) or a function like [Secure copy protocol](https://www.ssh.com/ssh/scp/) to move the certificate file.
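
For example, if you stored the PEM contents as a Key Vault secret, a short Python sketch along these lines could pull it down on the device. The vault URL and secret name are hypothetical, and the `azure-identity` and `azure-keyvault-secrets` packages are assumed to be installed:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Hypothetical vault URL and secret name; replace with your own values.
client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",
    credential=DefaultAzureCredential(),
)
pem_contents = client.get_secret("iot-root-ca").value

# Write the root CA where applications on this device can find it.
with open("azure-iot-test-only.root.ca.cert.pem", "w") as cert_file:
    cert_file.write(pem_contents)
```
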
## Install certificates in the OS
-Once the root CA certificate is on the downstream device, you need to make sure the applications that are connecting to the gateway can access the certificate.
+Once the root CA certificate is on the downstream device, make sure the applications that are connecting to the gateway can access the certificate.
Installing the root CA certificate in the operating system's certificate store generally allows most applications to use the root CA certificate. There are some exceptions, like NodeJS applications that don't use the OS certificate store but rather use the Node runtime's internal certificate store. If you can't install the certificate at the operating system level, skip ahead to [Use certificates with Azure IoT SDKs](#use-certificates-with-azure-iot-sdks).
-### Ubuntu
+Install the root CA certificate on either Ubuntu or Windows.
+
+# [Ubuntu](#tab/ubuntu)
The following commands are an example of how to install a CA certificate on an Ubuntu host. This example assumes that you're using the **azure-iot-test-only.root.ca.cert.pem** certificate from the prerequisites articles, and that you've copied the certificate into a location on the downstream device.

```bash
-sudo cp <path>/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+sudo cp <file path>/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+```
+```bash
sudo update-ca-certificates
```

You should see a message that says, "Updating certificates in /etc/ssl/certs... 1 added, 0 removed; done."
-### Windows
+# [Windows](#tab/windows)
The following steps are an example of how to install a CA certificate on a Windows host. This example assumes that you're using the **azure-iot-test-only.root.ca.cert.pem** certificate from the prerequisites articles, and that you've copied the certificate into a location on the downstream device.
import-certificate <file path>\azure-iot-test-only.root.ca.cert.pem -certstorel
You can also install certificates using the **certlm** utility:

1. In the Start menu, search for and select **Manage computer certificates**. A utility called **certlm** opens.
-2. Navigate to **Certificates - Local Computer** > **Trusted Root Certification Authorities**.
-3. Right-click **Certificates** and select **All Tasks** > **Import**. The certificate import wizard should launch.
-4. Follow the steps as directed and import certificate file `<path>/azure-iot-test-only.root.ca.cert.pem`. When completed, you should see a "Successfully imported" message.
+1. Navigate to **Certificates - Local Computer** > **Trusted Root Certification Authorities**.
+1. Right-click **Certificates** and select **All Tasks** > **Import**. The certificate import wizard should launch.
+1. Follow the steps as directed and import certificate file `<file path>/azure-iot-test-only.root.ca.cert.pem`. When completed, you should see a "Successfully imported" message.
You can also install certificates programmatically using .NET APIs, as shown in the .NET sample later in this article.
-Typically applications use the Windows provided TLS stack called [Schannel](/windows/desktop/com/schannel) to securely connect over TLS. Schannel *requires* that any certificates be installed in the Windows certificate store before attempting to establish a TLS connection.
+Typically applications use the Windows provided TLS stack called [Schannel](/windows/desktop/com/schannel) to securely connect over TLS. Schannel *requires* certificates to be installed in the Windows certificate store before attempting to establish a TLS connection.
+---
+
## Use certificates with Azure IoT SDKs
-This section describes how the Azure IoT SDKs connect to an IoT Edge device using simple sample applications. The goal of all the samples is to connect the device client and send telemetry messages to the gateway, then close the connection and exit.
+The following samples show how the [Azure IoT SDKs](../iot-develop/about-iot-sdks.md) connect a device client to an IoT Edge gateway. Each sample's goal is to connect the device client and send telemetry messages to the gateway, then close the connection and exit.
-Have two things ready before using the application-level samples:
+Before using the application-level samples, obtain the following items:
-* Your downstream device's IoT Hub connection string modified to point to the gateway device, and any certificates required to authenticate your downstream device to IoT Hub. For more information, see [Authenticate a downstream device to Azure IoT Hub](how-to-authenticate-downstream-device.md).
+* Your downstream device's IoT Hub connection string, modified to point to the gateway device.
+
+* Any certificates required to authenticate your downstream device to IoT Hub. For more information, see [Authenticate a downstream device to Azure IoT Hub](how-to-authenticate-downstream-device.md).
* The full path to the root CA certificate that you copied and saved somewhere on your downstream device.
- For example, `<path>/azure-iot-test-only.root.ca.cert.pem`.
+ For example: `<file path>/azure-iot-test-only.root.ca.cert.pem`.
+
+Now you're ready to use certificates with a sample in the language of your choice:
-### NodeJS
+# [NodeJS](#tab/nodejs)
This section provides a sample application to connect an Azure IoT NodeJS device client to an IoT Edge gateway. For NodeJS applications, you must install the root CA certificate at the application level as shown here. NodeJS applications don't use the system's certificate store.

1. Get the sample for **edge_downstream_device.js** from the [Azure IoT device SDK for Node.js samples repo](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples).
-2. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file.
-3. In the edge_downstream_device.js file, update the **connectionString** and **edge_ca_cert_path** variables.
-4. Refer to the SDK documentation for instructions on how to run the sample on your device.
+1. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file.
+1. In the edge_downstream_device.js file, update the **connectionString** and **edge_ca_cert_path** variables.
+1. Refer to the SDK documentation for instructions on how to run the sample on your device.
To understand the sample that you're running, the following code snippet is how the client SDK reads the certificate file and uses it to establish a secure TLS connection:
var options = {
};
```
-### .NET
+# [.NET](#tab/dotnet)
This section introduces a sample application to connect an Azure IoT .NET device client to an IoT Edge gateway. However, .NET applications are automatically able to use any installed certificates in the system's certificate store on both Linux and Windows hosts.

1. Get the sample for **EdgeDownstreamDevice** from the [IoT Edge .NET samples folder](https://github.com/Azure/iotedge/tree/master/samples/dotnet/EdgeDownstreamDevice).
-2. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file.
-3. In the **Properties / launchSettings.json** file, update the **DEVICE_CONNECTION_STRING** and **CA_CERTIFICATE_PATH** variables. If you want to use the certificate installed in the trusted certificate store on the host system, leave this variable blank.
-4. Refer to the SDK documentation for instructions on how to run the sample on your device.
+1. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file.
+1. In the **Properties / launchSettings.json** file, update the **DEVICE_CONNECTION_STRING** and **CA_CERTIFICATE_PATH** variables. If you want to use the certificate installed in the trusted certificate store on the host system, leave this variable blank.
+1. Refer to the SDK documentation for instructions on how to run the sample on your device.
-To programmatically install a trusted certificate in the certificate store via a .NET application, refer to the **InstallCACert()** function in the **EdgeDownstreamDevice / Program.cs** file. This operation is idempotent, so can be run multiple times with the same values with no additional effect.
+To programmatically install a trusted certificate in the certificate store via a .NET application, refer to the **InstallCACert()** function in the **EdgeDownstreamDevice / Program.cs** file. This operation is idempotent, so can be run multiple times with the same values with no extra effect.
-### C
+# [C](#tab/c)
This section introduces a sample application to connect an Azure IoT C device client to an IoT Edge gateway. The C SDK can operate with many TLS libraries, including OpenSSL, WolfSSL, and Schannel. For more information, see the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).

1. Get the **iotedge_downstream_device_sample** application from the [Azure IoT device SDK for C samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples).
-2. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file.
-3. In the iotedge_downstream_device_sample.c file, update the **connectionString** and **edge_ca_cert_path** variables.
-4. Refer to the SDK documentation for instructions on how to run the sample on your device.
+1. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file.
+1. In the iotedge_downstream_device_sample.c file, update the **connectionString** and **edge_ca_cert_path** variables.
+1. Refer to the SDK documentation for instructions on how to run the sample on your device.
The Azure IoT device SDK for C provides an option to register a CA certificate when setting up the client. This operation doesn't install the certificate anywhere, but rather uses a string format of the certificate in memory. The saved certificate is provided to the underlying TLS stack when establishing a connection.
The Azure IoT device SDK for C provides an option to register a CA certificate w
On Windows hosts, if you're not using OpenSSL or another TLS library, the SDK defaults to using Schannel. For Schannel to work, the IoT Edge root CA certificate should be installed in the Windows certificate store, not set using the `IoTHubDeviceClient_SetOption` operation.
-### Java
+# [Java](#tab/java)
This section introduces a sample application to connect an Azure IoT Java device client to an IoT Edge gateway.

1. Get the sample for **Send-event** from the [Azure IoT device SDK for Java samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/iothub/device/iot-device-samples).
-2. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file.
-3. Refer to the SDK documentation for instructions on how to run the sample on your device.
+1. Make sure that you have all the prerequisites to run the sample by reviewing the **readme.md** file.
+1. Refer to the SDK documentation for instructions on how to run the sample on your device.
-### Python
+# [Python](#tab/python)
This section introduces a sample application to connect an Azure IoT Python device client to an IoT Edge gateway.

1. Get the sample for **send_message_downstream** from the [Azure IoT device SDK for Python samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-edge-scenarios).
-2. Set the `IOTHUB_DEVICE_CONNECTION_STRING` and `IOTEDGE_ROOT_CA_CERT_PATH` environment variables as specified in the Python script comments.
-3. Refer to the SDK documentation for any additional instructions on how to run the sample on your device.
+1. Set the `IOTHUB_DEVICE_CONNECTION_STRING` and `IOTEDGE_ROOT_CA_CERT_PATH` environment variables as specified in the Python script comments.
+1. Refer to the SDK documentation for more instructions on how to run the sample on your device.
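
For orientation, the core flow of the Python sample looks roughly like this sketch (using the `azure-iot-device` package; the actual sample in the repo may differ in detail):

```python
import os

from azure.iot.device import IoTHubDeviceClient, Message

# The connection string includes GatewayHostName=<gateway>, and the
# certificate path points at the root CA you copied to this device.
conn_str = os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]
ca_path = os.environ["IOTEDGE_ROOT_CA_CERT_PATH"]

with open(ca_path) as ca_file:
    root_ca = ca_file.read()

# server_verification_cert lets the client validate the gateway's
# certificate chain against your root CA.
client = IoTHubDeviceClient.create_from_connection_string(
    conn_str, server_verification_cert=root_ca
)
client.connect()
client.send_message(Message("telemetry from downstream device"))
client.shutdown()
```
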
+---
+
## Test the gateway connection
Use this sample command on the downstream device to test that it can connect to
openssl s_client -connect mygateway.contoso.com:8883 -CAfile <CERTDIR>/certs/azure-iot-test-only.root.ca.cert.pem -showcerts
```
-This command tests connections over MQTTS (port 8883). If you're using a different protocol, adjust the command as necessary for AMQPS (5671) or HTTPS (443)
+This command tests the connection over MQTTS (port 8883). If you're using a different protocol, adjust the command as necessary for AMQPS (5671) or HTTPS (443).
-The output of this command may be long, including information about all the certificates in the chain. If your connection is successful, you'll see a line like `Verification: OK` or `Verify return code: 0 (ok)`.
+The output of this command may be long, including information about all the certificates in the chain. If your connection is successful, you see a line like `Verification: OK` or `Verify return code: 0 (ok)`.
:::image type="content" source="./media/how-to-connect-downstream-device/verification-ok.png" alt-text="Screenshot of how to verify a gateway connection.":::

## Troubleshoot the gateway connection
-If your downstream device has intermittent connection to its gateway device, try the following steps for resolution.
+If your downstream device's connection to its gateway device is unstable, consider these questions to find a resolution.
-1. Is the gateway hostname in the connection string the same as the hostname value in the IoT Edge config file on the gateway device?
-2. Is the gateway hostname resolvable to an IP Address? You can resolve intermittent connections either by using DNS or by adding a host file entry on the downstream device.
-3. Are communication ports open in your firewall? Communication based on the protocol used (MQTTS:8883/AMQPS:5671/HTTPS:433) must be possible between downstream device and the transparent IoT Edge.
+* Is the gateway hostname in the connection string the same as the hostname value in the IoT Edge config file on the gateway device?
+* Is the gateway hostname resolvable to an IP Address? You can resolve intermittent connections either by using DNS or by adding a host file entry on the downstream device.
+* Are communication ports open in your firewall? Communication based on the protocol used (MQTTS:8883/AMQPS:5671/HTTPS:443) must be possible between the downstream device and the transparent IoT Edge gateway.
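
To check the last two questions from the downstream device, a small Python sketch like the following (with a hypothetical hostname) tests name resolution and port reachability:

```python
import socket

GATEWAY_HOSTNAME = "mygateway.contoso.com"  # hypothetical; use your gateway hostname

# Can the gateway hostname be resolved to an IP address?
print(GATEWAY_HOSTNAME, "resolves to", socket.gethostbyname(GATEWAY_HOSTNAME))

# Are the protocol ports reachable through the firewall?
for port in (8883, 5671, 443):  # MQTTS, AMQPS, HTTPS
    try:
        with socket.create_connection((GATEWAY_HOSTNAME, port), timeout=5):
            print(f"Port {port}: reachable")
    except OSError as err:
        print(f"Port {port}: blocked or closed ({err})")
```
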
## Next steps
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
zone_pivot_groups: iotedge-dev
This article shows you how to use Visual Studio Code to develop and debug IoT Edge modules in multiple languages and multiple architectures. On your development computer, you can use Visual Studio Code to attach and debug your module in a local or remote module container.
-You can choose either the **Azure IoT Edge Dev Tool** CLI or the **Azure IoT Edge tools for Visual Studio Code** extension as your IoT Edge development tool. Use the tool selector button at the beginning to choose your tool option for this article.
+You can choose either the **Azure IoT Edge Dev Tool** command-line tool (CLI) or the **Azure IoT Edge tools for Visual Studio Code** extension as your IoT Edge development tool. Use the tool selector button at the beginning to choose your tool option for this article.
Visual Studio Code supports writing IoT Edge modules in the following programming languages:
Visual Studio Code supports writing IoT Edge modules in the following programmin
Azure IoT Edge supports the following device architectures:
-* X64
-* ARM32
+* AMD64
+* ARM32v7
* ARM64

For more information about supported operating systems, languages, and architectures, see [Language and architecture support](module-development.md#language-and-architecture-support).
To build and deploy your module image, you need Docker to build the module image
::: zone pivot="iotedge-dev-cli" -- Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) with the following command to enable you to debug, run, and test your IoT Edge solution. [Python (3.6/3.7)](https://www.python.org/downloads/) and [Pip3](https://pip.pypa.io/en/stable/installation/) are required.
+- Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) with the following command to enable you to debug, run, and test your IoT Edge solution. [Python (3.6 or 3.7)](https://www.python.org/downloads/) and [Pip3](https://pip.pypa.io/en/stable/installation/) are required.
```bash
pip3 install iotedgedev
```
For example:
::: zone-end
+
+### Set IoT Edge runtime version
+
+The latest stable IoT Edge system module version is 1.4. Set your system modules to version 1.4.
+
+1. In Visual Studio Code, open the *deployment.debug.template.json* deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) is a JSON document that describes the modules to be configured on the targeted IoT Edge device.
+1. Change the runtime version for the system runtime module images *edgeAgent* and *edgeHub*. For example, if you want to use the IoT Edge runtime version 1.4, change the following lines in the deployment manifest file:
+
+ ```json
+ ...
+ "systemModules": {
+ "edgeAgent": {
+ ...
+ "image": "mcr.microsoft.com/azureiotedge-agent:1.4",
+ ...
+ "edgeHub": {
+ ...
+ "image": "mcr.microsoft.com/azureiotedge-hub:1.4",
+ ...
+ ```
+
+
## Add more modules

To add more modules to your solution, change to the *modules* directory and add them there.
You can check your container status from your device or virtual machine by runni
::: zone pivot="iotedge-dev-cli"
+#### Sign in to Docker
+
+Provide your container registry credentials to Docker so that it can push your container image to storage in the registry.
+
+1. Sign in to Docker with the Azure Container Registry (ACR) credentials that you saved after creating the registry.
+
+ ```bash
+ docker login -u <ACR username> -p <ACR password> <ACR login server>
+ ```
+
+ You may receive a security warning recommending the use of `--password-stdin`. While that is a recommended best practice for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
+
+1. Sign in to the Azure Container Registry. You may need to [Install Azure CLI](/cli/azure/install-azure-cli) to use the `az` command. This command asks for your user name and password found in your container registry in **Settings** > **Access keys**.
+
+ ```azurecli
+ az acr login -n <ACR registry name>
+ ```
+>[!TIP]
+>If you get logged out at any point in this tutorial, repeat the Docker and Azure Container Registry sign in steps to continue.
+
#### Build module Docker image

Use the module's Dockerfile to [build](https://docs.docker.com/engine/reference/commandline/build/) the module Docker image.
docker push myacr.azurecr.io/filtermodule:0.0.1-amd64
#### Deploy the module to the IoT Edge device
-Use the [IoT Edge Azure CLI set-modules](/cli/azure/iot/edge#az-iot-edge-set-modules) command to deploy the modules to the Azure IoT Hub. For example, to deploy the modules defined in the *deployment.debug.amd64.json* file to IoT Hub *my-iot-hub* for the IoT Edge device *my-device*, use the following command:
+Use the [IoT Edge Azure CLI set-modules](/cli/azure/iot/edge#az-iot-edge-set-modules) command to deploy the modules to the Azure IoT Hub. For example, to deploy the modules defined in the *deployment.debug.template.json* file to IoT Hub *my-iot-hub* for the IoT Edge device *my-device*, use the following command:
```azurecli
az iot edge set-modules --hub-name my-iot-hub --device-id my-device --content ./deployment.debug.template.json --login "HostName=my-iot-hub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<SharedAccessKey>"
```

> [!TIP]
-> You can find your IoT Hub connection string in the Azure portal in your IoT Hub > **Security settings** > **Shared access policies** > **iothubowner**.
+> You can find your IoT Hub shared access key in the Azure portal in your IoT Hub > **Security settings** > **Shared access policies** > **iothubowner**.
>

::: zone-end
iot-edge Iot Edge Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-runtime.md
By default, the IoT Edge hub only accepts connections secured with Transport Lay
If a client connects on port 8883 (MQTTS) or 5671 (AMQPS) to the IoT Edge hub, a TLS channel must be built. During the TLS handshake, the IoT Edge hub sends its certificate chain that the client needs to validate. In order to validate the certificate chain, the root certificate of the IoT Edge hub must be installed as a trusted certificate on the client. If the root certificate isn't trusted, the client library will be rejected by the IoT Edge hub with a certificate verification error.
-The steps to follow to install this root certificate of the broker on device clients are described in the [transparent gateway](how-to-create-transparent-gateway.md) and in the [prepare a downstream device](how-to-connect-downstream-device.md#prepare-a-downstream-device) documentation. Modules can use the same root certificate as the IoT Edge hub by using the IoT Edge daemon API.
+The steps to follow to install this root certificate of the broker on device clients are described in the [transparent gateway](how-to-create-transparent-gateway.md) and in the [prepare a downstream device](how-to-connect-downstream-device.md#prerequisites) documentation. Modules can use the same root certificate as the IoT Edge hub by using the IoT Edge daemon API.
#### Authentication
-The IoT Edge Hub only accepts connections from devices or modules that have an IoT Hub identity, for example that have been registered in IoT Hub and have one of the three client authentication methods supported by IoT hub to provide prove their identity: [Symmetric keys authentication](how-to-authenticate-downstream-device.md#symmetric-key-authentication), [X.509 self-signed authentication](how-to-authenticate-downstream-device.md#x509-self-signed-authentication), [X.509 CA signed authentication](how-to-authenticate-downstream-device.md#x509-ca-signed-authentication). These IoT Hub identities can be verified locally by the IoT Edge hub so connections can still be made while offline.
+The IoT Edge Hub only accepts connections from devices or modules that have an IoT Hub identity. For example, they must be registered in IoT Hub and prove their identity by using one of the three client authentication methods supported by IoT Hub: [Symmetric keys authentication](how-to-authenticate-downstream-device.md#symmetric-key-authentication), [X.509 self-signed authentication](how-to-authenticate-downstream-device.md#x509-self-signed-authentication), or [X.509 CA signed authentication](how-to-authenticate-downstream-device.md#x509-ca-signed-authentication). These IoT Hub identities can be verified locally by the IoT Edge hub, so connections can still be made while offline.
IoT Edge modules currently only support symmetric key authentication.
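
For context, a module deployed on the device typically authenticates with material the IoT Edge runtime injects into its environment. With the Python `azure-iot-device` SDK, that looks roughly like this sketch:

```python
from azure.iot.device import IoTHubModuleClient

# Inside a deployed module, the IoT Edge runtime provides the environment
# variables and trust bundle needed for symmetric key authentication.
client = IoTHubModuleClient.create_from_edge_environment()
client.connect()
print("Module authenticated to the IoT Edge hub")
client.shutdown()
```
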
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-c-module.md
-
-# Mandatory fields. See more on aka.ms/skyeye/meta.
Title: Tutorial - develop C module for Linux - Azure IoT Edge | Microsoft Docs
-description: This tutorial shows you how to create an IoT Edge module with C code and deploy it to a Linux device running IoT Edge
----- Previously updated : 07/30/2020------
-# Tutorial: Develop a C IoT Edge module using Linux containers
--
-Use Visual Studio Code to develop C code and deploy it to a device running Azure IoT Edge.
-
-You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Use Visual Studio Code to create an IoT Edge module in C
-> * Use Visual Studio Code and Docker to create a docker image and publish it to a container registry
-> * Deploy the module to your IoT Edge device
-> * View generated data
-
-The IoT Edge module that you create in this tutorial filters the temperature data generated by your device. It only sends messages upstream if the temperature is above a specified threshold. This type of analysis at the edge is useful for reducing the amount of data communicated to and stored in the cloud.
--
-## Prerequisites
-
-This tutorial demonstrates how to develop a module in **C** using **Visual Studio Code**, and how to deploy it to an IoT Edge device.
-
-Use the following table to understand your options for developing and deploying C modules using Linux containers:
-
-| C | Visual Studio Code | Visual Studio |
-| - | - | - |
-| **Linux AMD64** | ![Use Visual Studio Code for C modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux AMD64](./media/tutorial-c-module/green-check.png) |
-| **Linux ARM32** | ![Use Visual Studio Code for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) |
-| **Linux ARM64** | ![Use Visual Studio Code for C modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM64](./media/tutorial-c-module/green-check.png) |
-
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
-
-* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
-* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
-* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.
-
-To develop an IoT Edge module in C, install the following prerequisites on your development machine:
-
-* [C/C++ extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools) for Visual Studio Code.
-
-Installing the Azure IoT C SDK isn't required for this tutorial, but can provide helpful functionality like intellisense and reading program definitions. For installation information, see [Azure IoT C SDKs and Libraries](https://github.com/Azure/azure-iot-sdk-c).
-
-## Create a module project
-
-The following steps create an IoT Edge module project for C by using Visual Studio Code and the Azure IoT Edge extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
-
-### Create a new project
-
-Create a C solution template that you can customize with your own code.
-
-1. Select **View** > **Command Palette** to open the Visual Studio Code command palette.
-
-2. In the command palette, type and run the command **Azure: Sign in** and follow the instructions to sign in your Azure account. If you've already signed in, you can skip this step.
-
-3. In the command palette, type and run the command **Azure IoT Edge: New IoT Edge solution**. Follow the prompts in the command palette to create your solution.
-
- | Field | Value |
- | -- | -- |
- | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
- | Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. |
- | Select module template | Choose **C Module**. |
- | Provide a module name | Name your module **CModule**. |
- | Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal. <br><br> The final image repository looks like \<registry name\>.azurecr.io/cmodule. |
-
- ![Provide Docker image repository](./media/tutorial-c-module/repository.png)
-
-### Add your registry credentials
-
-The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device.
-
-The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-
-1. In the Visual Studio Code explorer, open the .env file.
-2. Update the fields with the **username** and **password** values that you copied from your Azure container registry.
-3. Save this file.
-
->[!NOTE]
->This tutorial uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
-
-### Select your target architecture
-
-Currently, Visual Studio Code can develop C modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.
-
-1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
-
-2. In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so we'll keep the default **amd64**.
-
-### Update the module with custom code
-
-The default module code receives messages on an input queue and passes them along through an output queue. Let's add more code so the module processes messages at the edge before forwarding them to IoT Hub. Update the module so that it analyzes the temperature data in each message, and only sends the message to IoT Hub if the temperature exceeds a certain threshold.
-
-1. The data from the sensor in this scenario comes in JSON format. To filter messages in JSON format, import a JSON library for C. This tutorial uses Parson.
-
- 1. Download the [Parson GitHub repository](https://github.com/kgabis/parson). Copy the **parson.c** and **parson.h** files into the **CModule** folder.
-
- 2. Open **modules** > **CModule** > **CMakeLists.txt**. At the top of the file, import the Parson files as a library called **my_parson**.
-
- ```txt
- add_library(my_parson
- parson.c
- parson.h
- )
- ```
-
- 3. Add `my_parson` to the list of libraries in the **target_link_libraries** function of CMakeLists.txt.
-
- 4. Save the **CMakeLists.txt** file.
-
- 5. Open **modules** > **CModule** > **main.c**. At the bottom of the list of include statements, add a new one to include `parson.h` for JSON support:
-
- ```c
- #include "parson.h"
- ```
-
-1. In the **main.c** file, add a global variable called `temperatureThreshold` after the include section. This variable sets the value that the measured temperature must exceed in order for the data to be sent to IoT Hub.
-
- ```c
- static double temperatureThreshold = 25;
- ```
-
-1. Find the `CreateMessageInstance` function in main.c. Replace the inner if-else statement with the following code that adds a few lines of functionality:
-
- ```c
- if ((messageInstance->messageHandle = IoTHubMessage_Clone(message)) == NULL)
- {
- free(messageInstance);
- messageInstance = NULL;
- }
- else
- {
- messageInstance->messageTrackingId = messagesReceivedByInput1Queue;
- MAP_HANDLE propMap = IoTHubMessage_Properties(messageInstance->messageHandle);
- if (Map_AddOrUpdate(propMap, "MessageType", "Alert") != MAP_OK)
- {
- printf("ERROR: Map_AddOrUpdate Failed!\r\n");
- }
- }
- ```
-
- The new lines of code in the else statement add a new property to the message, which labels the message as an alert. This code labels all messages as alerts, because we'll add functionality that only sends messages to IoT Hub if they report high temperatures.
-
-1. Replace the entire `InputQueue1Callback` function with the following code. This function implements the actual messaging filter. When a message is received, it checks whether the reported temperature exceeds the threshold. If yes, then it forwards the message through its output queue. If not, then it ignores the message.
-
- ```c
- static unsigned char *bytearray_to_str(const unsigned char *buffer, size_t len)
- {
- unsigned char *ret = (unsigned char *)malloc(len + 1);
- memcpy(ret, buffer, len);
- ret[len] = '\0';
- return ret;
- }
-
- static IOTHUBMESSAGE_DISPOSITION_RESULT InputQueue1Callback(IOTHUB_MESSAGE_HANDLE message, void* userContextCallback)
- {
- IOTHUBMESSAGE_DISPOSITION_RESULT result;
- IOTHUB_CLIENT_RESULT clientResult;
- IOTHUB_MODULE_CLIENT_LL_HANDLE iotHubModuleClientHandle = (IOTHUB_MODULE_CLIENT_LL_HANDLE)userContextCallback;
-
- unsigned const char* messageBody;
- size_t contentSize;
-
- if (IoTHubMessage_GetByteArray(message, &messageBody, &contentSize) == IOTHUB_MESSAGE_OK)
- {
- messageBody = bytearray_to_str(messageBody, contentSize);
- } else
- {
- messageBody = "<null>";
- }
-
- printf("Received Message [%zu]\r\n Data: [%s]\r\n",
- messagesReceivedByInput1Queue, messageBody);
-
- // Check if the message reports temperatures higher than the threshold
- JSON_Value *root_value = json_parse_string(messageBody);
- JSON_Object *root_object = json_value_get_object(root_value);
- double temperature;
- if (json_object_dotget_value(root_object, "machine.temperature") != NULL && (temperature = json_object_dotget_number(root_object, "machine.temperature")) > temperatureThreshold)
- {
- printf("Machine temperature %f exceeds threshold %f\r\n", temperature, temperatureThreshold);
-        // This message should be sent to the next stop in the pipeline, namely "output1". What happens at "output1" is determined
- // by the configuration of the Edge routing table setup.
- MESSAGE_INSTANCE *messageInstance = CreateMessageInstance(message);
- if (NULL == messageInstance)
- {
- result = IOTHUBMESSAGE_ABANDONED;
- }
- else
- {
- printf("Sending message (%zu) to the next stage in pipeline\n", messagesReceivedByInput1Queue);
-
- clientResult = IoTHubModuleClient_LL_SendEventToOutputAsync(iotHubModuleClientHandle, messageInstance->messageHandle, "output1", SendConfirmationCallback, (void *)messageInstance);
- if (clientResult != IOTHUB_CLIENT_OK)
- {
- IoTHubMessage_Destroy(messageInstance->messageHandle);
- free(messageInstance);
- printf("IoTHubModuleClient_LL_SendEventToOutputAsync failed on sending msg#=%zu, err=%d\n", messagesReceivedByInput1Queue, clientResult);
- result = IOTHUBMESSAGE_ABANDONED;
- }
- else
- {
- result = IOTHUBMESSAGE_ACCEPTED;
- }
- }
- }
- else
- {
- printf("Not sending message (%zu) to the next stage in pipeline.\r\n", messagesReceivedByInput1Queue);
- result = IOTHUBMESSAGE_ACCEPTED;
- }
-
- messagesReceivedByInput1Queue++;
- return result;
- }
- ```
-
-1. Add a `moduleTwinCallback` function. This method receives updates on the desired properties from the module twin, and updates the **temperatureThreshold** variable to match. All modules have their own module twin, which lets you configure the code running inside a module directly from the cloud.
-
- ```c
- static void moduleTwinCallback(DEVICE_TWIN_UPDATE_STATE update_state, const unsigned char* payLoad, size_t size, void* userContextCallback)
- {
- printf("\r\nTwin callback called with (state=%s, size=%zu):\r\n%s\r\n",
- MU_ENUM_TO_STRING(DEVICE_TWIN_UPDATE_STATE, update_state), size, payLoad);
- JSON_Value *root_value = json_parse_string(payLoad);
- JSON_Object *root_object = json_value_get_object(root_value);
- if (json_object_dotget_value(root_object, "desired.TemperatureThreshold") != NULL) {
- temperatureThreshold = json_object_dotget_number(root_object, "desired.TemperatureThreshold");
- }
- if (json_object_get_value(root_object, "TemperatureThreshold") != NULL) {
- temperatureThreshold = json_object_get_number(root_object, "TemperatureThreshold");
- }
- }
- ```
-
-1. Find the `SetupCallbacksForModule` function. Replace the function with the following code that adds an **else if** statement to check if the module twin has been updated.
-
- ```c
- static int SetupCallbacksForModule(IOTHUB_MODULE_CLIENT_LL_HANDLE iotHubModuleClientHandle)
- {
- int ret;
-
- if (IoTHubModuleClient_LL_SetInputMessageCallback(iotHubModuleClientHandle, "input1", InputQueue1Callback, (void*)iotHubModuleClientHandle) != IOTHUB_CLIENT_OK)
- {
- printf("ERROR: IoTHubModuleClient_LL_SetInputMessageCallback(\"input1\")..........FAILED!\r\n");
- ret = MU_FAILURE;
- }
- else if (IoTHubModuleClient_LL_SetModuleTwinCallback(iotHubModuleClientHandle, moduleTwinCallback, (void*)iotHubModuleClientHandle) != IOTHUB_CLIENT_OK)
- {
- printf("ERROR: IoTHubModuleClient_LL_SetModuleTwinCallback(default)..........FAILED!\r\n");
- ret = MU_FAILURE;
- }
- else
- {
- ret = 0;
- }
-
- return ret;
- }
- ```
-
-1. Save the main.c file.
-
-1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
-
-1. Add the CModule module twin to the deployment manifest. Insert the following JSON content at the bottom of the `moduleContent` section, after the `$edgeHub` module twin:
-
- ```json
- "CModule": {
- "properties.desired":{
- "TemperatureThreshold":25
- }
- }
- ```
-
- ![Add CModule twin to deployment template](./media/tutorial-c-module/module-twin.png)
-
-1. Save the **deployment.template.json** file.
-
-## Build and push your module
-
-In the previous section, you created an IoT Edge solution and added code to the CModule that will filter out messages where the reported machine temperature is within the acceptable limits. Now you need to build the solution as a container image and push it to your container registry.
-
-1. Open the Visual Studio Code terminal by selecting **View** > **Terminal**.
-
-2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
-
- ```bash
- docker login -u <ACR username> -p <ACR password> <ACR login server>
- ```
-
- You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-
-3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
-
- The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
-
- This process may take several minutes the first time, but is faster the next time that you run the commands.
-
-## Deploy modules to device
-
-Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
-
-Make sure that your IoT Edge device is up and running.
-
-1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
-
-2. Right-click the name of your IoT Edge device, then select **Create Deployment for Single Device**.
-
-3. Select the **deployment.amd64.json** file in the **config** folder and then click **Select Edge Deployment Manifest**. Don't use the deployment.template.json file, as that file is only a template.
-
-4. Under your device, expand **Modules** to see a list of deployed and running modules. Click the refresh button. You should see the new **CModule** running along with the **SimulatedTemperatureSensor** module and the **$edgeAgent** and **$edgeHub**.
-
- It may take a few minutes for the modules to start. The IoT Edge runtime needs to receive its new deployment manifest, pull down the module images from the container runtime, then start each new module.
-
-## View generated data
-
-Once you apply the deployment manifest to your IoT Edge device, the IoT Edge runtime on the device collects the new deployment information and starts executing on it. Any modules running on the device that aren't included in the deployment manifest are stopped. Any modules missing from the device are started.
-
-1. In the Visual Studio Code explorer, right-click the name of your IoT Edge device and select **Start Monitoring Built-in Event Endpoint**.
-
-2. View the messages arriving at your IoT Hub. It may take a while for the messages to arrive, because the IoT Edge device has to receive its new deployment and start all the modules. Then, the changes we made to the CModule code wait until the machine temperature reaches 25 degrees before sending messages. It also adds the message type **Alert** to any messages that reach that temperature threshold.
-
- ![View messages arriving at IoT Hub](./media/tutorial-c-module/view-d2c-message.png)
-
-## Edit the module twin
-
-We used the CModule module twin in the deployment manifest to set the temperature threshold at 25 degrees. You can use the module twin to change the functionality without having to update the module code.
-
-1. In Visual Studio Code, expand the details under your IoT Edge device to see the running modules.
-
-2. Right-click **CModule** and select **Edit module twin**.
-
-3. Find **TemperatureThreshold** in the desired properties. Change its value to a new temperature 5 degrees to 10 degrees higher than the latest reported temperature.
-
-4. Save the module twin file.
-
-5. Right-click anywhere in the module twin editing pane and select **Update module twin**.
-
-6. Monitor the incoming device-to-cloud messages. You should see the messages stop until the new temperature threshold is reached.
-
-## Clean up resources
-
-If you continue to the next recommended article, you can keep your resources and configurations and reuse them. You can also keep using the same IoT Edge device as a test device.
-
-Otherwise, you can delete the local configurations and the Azure resources that you used in this article to avoid charges.
--
-## Next steps
-
-In this tutorial, you created an IoT Edge module that contains code to filter raw data generated by your IoT Edge device.
-
-You can continue on to the next tutorials to learn how Azure IoT Edge can help you deploy Azure cloud services to process and analyze data at the edge.
-
-> [!div class="nextstepaction"]
-> [Functions](tutorial-deploy-function.md)
-> [Stream Analytics](tutorial-deploy-stream-analytics.md)
-> [Machine Learning](tutorial-deploy-machine-learning.md)
-> [Custom Vision Service](tutorial-deploy-custom-vision.md)
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-csharp-module.md
- Title: Tutorial - Develop C# module for Linux using Azure IoT Edge
-description: This tutorial shows you how to create an IoT Edge module with C# code and deploy it to a Linux IoT Edge device.
----- Previously updated : 07/30/2021------
-# Tutorial: Develop a C# IoT Edge module using Linux containers
--
-Use Visual Studio Code to develop C# code and deploy it to a device running Azure IoT Edge.
-
-You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the simulated IoT Edge device that you created in the quickstarts. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Use Visual Studio Code to create an IoT Edge module that's based on the .NET Core SDK.
-> * Use Visual Studio Code and Docker to create a Docker image and publish it to your registry.
-> * Deploy the module to your IoT Edge device.
-> * View generated data.
-
-The IoT Edge module that you create in this tutorial filters the temperature data that's generated by your device. It only sends messages upstream if the temperature is above a specified threshold. This type of analysis at the edge is useful for reducing the amount of data that's communicated to and stored in the cloud.
--
-## Prerequisites
-
-This tutorial demonstrates how to develop a module in **C#** using **Visual Studio Code** and deploy it to an IoT Edge device. If you're developing modules using Windows containers, go to [Develop a C# IoT Edge module using Windows containers](tutorial-csharp-module-windows.md) instead.
-
-Use the following table to understand your options for developing and deploying C# modules using Linux containers:
-
-| C# | Visual Studio Code | Visual Studio |
-| -- | -- | -- |
-| **Linux AMD64** | ![C# modules for LinuxAMD64 in Visual Studio Code](./media/tutorial-c-module/green-check.png) | ![C# modules for LinuxAMD64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
-| **Linux ARM32** | ![C# modules for LinuxARM32 in Visual Studio Code](./media/tutorial-c-module/green-check.png) | ![C# modules for LinuxARM32 in Visual Studio](./media/tutorial-c-module/green-check.png) |
-| **Linux ARM64** | ![C# modules for LinuxARM64 in Visual Studio Code](./media/tutorial-c-module/green-check.png) | ![C# modules for LinuxARM64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
-
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment, [Develop an IoT Edge module using Linux containers](tutorial-develop-for-linux.md). After completing that tutorial, you should already have the following prerequisites:
-
-* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
-* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
-* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.
-
-To complete these tutorials, prepare the following additional prerequisites on your development machine:
-
-* [C# for Visual Studio Code (powered by OmniSharp) extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp).
-* [.NET Core SDK](https://dotnet.microsoft.com/download).
-
-## Create a module project
-
-The following steps create an IoT Edge module project for C# by using Visual Studio Code and the Azure IoT Edge extension. Once you have a project template created, add new code so that the module filters out messages based on their reported properties.
-
-### Create a new project
-
-Create a C# solution template that you can customize with your own code.
-
-1. In Visual Studio Code, select **View** > **Command Palette** to open the Visual Studio Code command palette.
-
-2. In the command palette, enter and run the command **Azure: Sign in** and follow the instructions to sign in to your Azure account. If you're already signed in, you can skip this step.
-
-3. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. Follow the prompts in the command palette to create your solution.
-
- | Field | Value |
- | -- | -- |
- | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
- | Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. |
- | Select module template | Choose **C# Module**. |
- | Provide a module name | Name your module **CSharpModule**. |
- | Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal. <br><br>The final image repository looks like \<registry name\>.azurecr.io/csharpmodule. |
-
- ![Provide Docker image repository](./media/tutorial-csharp-module/repository.png)
-
-### Add your registry credentials
-
-The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device. Use the credentials from the **Access keys** section of your Azure container registry.
-
-The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-
-1. In the Visual Studio Code explorer, open the **.env** file.
-2. Update the fields with the **username** and **password** values from your Azure container registry.
-3. Save this file.
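
For example, with a hypothetical registry named **myacr**, the completed .env file looks similar to the following sketch (the variable names generated in your file may differ):

```env
CONTAINER_REGISTRY_USERNAME="myacr"
CONTAINER_REGISTRY_PASSWORD="<registry_password>"
```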
-
->[!NOTE]
->This tutorial uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
-
-### Select your target architecture
-
-Currently, Visual Studio Code can develop C# modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.
-
-1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
-
-2. In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so we'll keep the default **amd64**.
-
-### Update the module with custom code
-
-1. In the Visual Studio Code explorer, open **modules** > **CSharpModule** > **ModuleBackgroundService.cs**.
-
-1. At the top of the **CSharpModule** namespace, add three **using** statements for types that are used later:
-
- ```csharp
- using System.Collections.Generic; // For KeyValuePair<>
- using Microsoft.Azure.Devices.Shared; // For TwinCollection
- using Newtonsoft.Json; // For JsonConvert
- ```
-
-1. Add the **temperatureThreshold** variable to the **ModuleBackgroundService** class. This variable sets the value that the measured temperature must exceed for the data to be sent to the IoT hub.
-
- ```csharp
- static int temperatureThreshold { get; set; } = 25;
- ```
-
-1. Add the **MessageBody**, **Machine**, and **Ambient** classes to the **Program** class. These classes define the expected schema for the body of incoming messages.
-
- ```csharp
- class MessageBody
- {
- public Machine machine {get;set;}
- public Ambient ambient {get; set;}
- public string timeCreated {get; set;}
- }
- class Machine
- {
- public double temperature {get; set;}
- public double pressure {get; set;}
- }
- class Ambient
- {
- public double temperature {get; set;}
- public int humidity {get; set;}
- }
- ```
-
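   For reference, the simulated temperature sensor produces message bodies that match this schema. An illustrative example (all values are made up):

   ```json
   {
     "machine": {
       "temperature": 21.4,
       "pressure": 1.1
     },
     "ambient": {
       "temperature": 20.9,
       "humidity": 26
     },
     "timeCreated": "2023-03-31T12:00:00Z"
   }
   ```
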
-1. Find the **Init** function. This function creates and configures a **ModuleClient** object, which allows the module to connect to the local Azure IoT Edge runtime to send and receive messages. After creating the **ModuleClient**, the code reads the **temperatureThreshold** value from the module twin's desired properties. The code registers a callback to receive messages from an IoT Edge hub via an endpoint called **input1**.
-
- Replace the **SetInputMessageHandlerAsync** method with a new one that updates the name of the endpoint and the method that is called when input arrives. Also, add a **SetDesiredPropertyUpdateCallbackAsync** method for updates to the desired properties. To make this change, replace the last line of the **Init** method with the following code:
-
- ```csharp
- // Register a callback for messages that are received by the module.
- // await ioTHubModuleClient.SetInputMessageHandlerAsync("input1", PipeMessage, iotHubModuleClient);
-
- // Read the TemperatureThreshold value from the module twin's desired properties
- var moduleTwin = await ioTHubModuleClient.GetTwinAsync();
- await OnDesiredPropertiesUpdate(moduleTwin.Properties.Desired, ioTHubModuleClient);
-
- // Attach a callback for updates to the module twin's desired properties.
- await ioTHubModuleClient.SetDesiredPropertyUpdateCallbackAsync(OnDesiredPropertiesUpdate, null);
-
- // Register a callback for messages that are received by the module. Messages received on the inputFromSensor endpoint are sent to the FilterMessages method.
- await ioTHubModuleClient.SetInputMessageHandlerAsync("inputFromSensor", FilterMessages, ioTHubModuleClient);
- ```
-
-1. Add the **OnDesiredPropertiesUpdate** method to the **ModuleBackgroundService** class. This method receives updates on the desired properties from the module twin, and updates the **temperatureThreshold** variable to match. All modules have their own module twin, which lets you configure the code that's running inside a module directly from the cloud.
-
- ```csharp
- static Task OnDesiredPropertiesUpdate(TwinCollection desiredProperties, object userContext)
- {
- try
- {
- Console.WriteLine("Desired property change:");
- Console.WriteLine(JsonConvert.SerializeObject(desiredProperties));
-
- if (desiredProperties["TemperatureThreshold"]!=null)
- temperatureThreshold = desiredProperties["TemperatureThreshold"];
-
- }
- catch (AggregateException ex)
- {
- foreach (Exception exception in ex.InnerExceptions)
- {
- Console.WriteLine();
- Console.WriteLine("Error when receiving desired property: {0}", exception);
- }
- }
- catch (Exception ex)
- {
- Console.WriteLine();
- Console.WriteLine("Error when receiving desired property: {0}", ex.Message);
- }
- return Task.CompletedTask;
- }
- ```
-
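   When you change the threshold from the cloud, this callback receives a twin patch similar to the following sketch (the `$version` value is illustrative):

   ```json
   {
     "TemperatureThreshold": 30,
     "$version": 2
   }
   ```
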
-1. Replace the **PipeMessage** method with the **FilterMessages** method. This method is called whenever the module receives a message from the IoT Edge hub. It filters out messages that report temperatures below the temperature threshold set via the module twin. It also adds the **MessageType** property to the message with the value set to **Alert**.
-
- ```csharp
- static async Task<MessageResponse> FilterMessages(Message message, object userContext)
- {
- var counterValue = Interlocked.Increment(ref counter);
- try
- {
- ModuleClient moduleClient = (ModuleClient)userContext;
- var messageBytes = message.GetBytes();
- var messageString = Encoding.UTF8.GetString(messageBytes);
- Console.WriteLine($"Received message {counterValue}: [{messageString}]");
-
- // Get the message body.
- var messageBody = JsonConvert.DeserializeObject<MessageBody>(messageString);
-
- if (messageBody != null && messageBody.machine.temperature > temperatureThreshold)
- {
- Console.WriteLine($"Machine temperature {messageBody.machine.temperature} " +
- $"exceeds threshold {temperatureThreshold}");
- using (var filteredMessage = new Message(messageBytes))
- {
- foreach (KeyValuePair<string, string> prop in message.Properties)
- {
- filteredMessage.Properties.Add(prop.Key, prop.Value);
- }
-
- filteredMessage.Properties.Add("MessageType", "Alert");
- await moduleClient.SendEventAsync("output1", filteredMessage);
- }
- }
-
- // Indicate that the message treatment is completed.
- return MessageResponse.Completed;
- }
- catch (AggregateException ex)
- {
- foreach (Exception exception in ex.InnerExceptions)
- {
- Console.WriteLine();
- Console.WriteLine("Error in sample: {0}", exception);
- }
- // Indicate that the message treatment is not completed.
- var moduleClient = (ModuleClient)userContext;
- return MessageResponse.Abandoned;
- }
- catch (Exception ex)
- {
- Console.WriteLine();
- Console.WriteLine("Error in sample: {0}", ex.Message);
- // Indicate that the message treatment is not completed.
- ModuleClient moduleClient = (ModuleClient)userContext;
- return MessageResponse.Abandoned;
- }
- }
- ```
-
-1. Save the ModuleBackgroundService.cs file.
-
-1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
-
-1. Since we changed the name of the endpoint that the module listens on, we also need to update the routes in the deployment manifest so that the edgeHub sends messages to the new endpoint.
-
- Find the **routes** section in the **$edgeHub** module twin. Update the **sensorToCSharpModule** route to replace `input1` with `inputFromSensor`:
-
- ```json
- "sensorToCSharpModule": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/CSharpModule/inputs/inputFromSensor\")"
- ```
-
-1. Add the **CSharpModule** module twin to the deployment manifest. Insert the following JSON content at the bottom of the **modulesContent** section, after the **$edgeHub** module twin:
-
- ```json
- "CSharpModule": {
- "properties.desired":{
- "TemperatureThreshold":25
- }
- }
- ```
-
- ![Add module twin to deployment template](./media/tutorial-csharp-module/module-twin.png)
-
-1. Save the deployment.template.json file.
-
-## Build and push your module
-
-In the previous section, you created an IoT Edge solution and added code to the CSharpModule. The new code filters out messages where the reported machine temperature is within the acceptable limits. Now you need to build the solution as a container image and push it to your container registry.
-
-1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
-
-1. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
-
- ```bash
- docker login -u <ACR username> -p <ACR password> <ACR login server>
- ```
-
- You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
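   For reference, the `--password-stdin` pattern looks similar to this sketch, which assumes the password is saved in a hypothetical local file named acr_password.txt:

   ```bash
   # Read the registry password from a file so that it doesn't appear in the shell history
   cat acr_password.txt | docker login -u <ACR username> --password-stdin <ACR login server>
   ```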
-
-1. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
-
- The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
-
- This process may take several minutes the first time, but is faster the next time that you run the commands.
-
-## Deploy and run the solution
-
-Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
-
-Make sure that your IoT Edge device is up and running.
-
-1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
-
-2. Right-click the name of your IoT Edge device, then select **Create Deployment for Single Device**.
-
-3. Select the **deployment.amd64.json** file in the **config** folder and then click **Select Edge Deployment Manifest**. Do not use the deployment.template.json file.
-
-4. Under your device, expand **Modules** to see a list of deployed and running modules. Click the refresh button. You should see the new **CSharpModule** running along with the **SimulatedTemperatureSensor** module and the **$edgeAgent** and **$edgeHub**.
-
- It may take a few minutes for the modules to start. The IoT Edge runtime needs to receive its new deployment manifest, pull down the module images from the container runtime, then start each new module.
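   If the modules still don't appear after a few minutes, you can also check their status directly on the IoT Edge device, assuming you have shell access to it:

   ```bash
   # List the deployed modules and their runtime status on the device
   iotedge list
   ```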
-
-## View generated data
-
-Once you apply the deployment manifest to your IoT Edge device, the IoT Edge runtime on the device collects the new deployment information and starts executing on it. Any modules running on the device that aren't included in the deployment manifest are stopped. Any modules missing from the device are started.
-
-1. In the Visual Studio Code explorer, right-click the name of your IoT Edge device and select **Start Monitoring Built-in Event Endpoint**.
-
-2. View the messages arriving at your IoT Hub. It may take a while for the messages to arrive, because the IoT Edge device has to receive its new deployment and start all the modules. Then, the updated CSharpModule code waits until the machine temperature reaches 25 degrees before sending messages, and adds the message type **Alert** to any messages that reach that temperature threshold.
-
- ![View messages arriving at IoT Hub](./media/tutorial-csharp-module/view-d2c-message.png)
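
   The filtered messages carry the **MessageType** application property set to **Alert** and have bodies similar to this illustrative example:

   ```json
   {
     "machine": {
       "temperature": 100.5,
       "pressure": 10.2
     },
     "ambient": {
       "temperature": 20.8,
       "humidity": 25
     },
     "timeCreated": "2023-03-31T12:05:00Z"
   }
   ```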
-
-## Edit the module twin
-
-We used the CSharpModule module twin in the deployment manifest to set the temperature threshold at 25 degrees. You can use the module twin to change the functionality without having to update the module code.
-
-1. In Visual Studio Code, expand the details under your IoT Edge device to see the running modules.
-
-2. Right-click **CSharpModule** and select **Edit module twin**.
-
-3. Find **TemperatureThreshold** in the desired properties. Change its value to a new temperature 5 degrees to 10 degrees higher than the latest reported temperature.
-
-4. Save the module twin file.
-
-5. Right-click anywhere in the module twin editing pane and select **Update module twin**.
-
-6. Monitor the incoming device-to-cloud messages. You should see the messages stop until the new temperature threshold is reached.
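
After this procedure, the desired properties section of the CSharpModule twin looks similar to the following sketch (the threshold value is illustrative):

```json
"properties": {
    "desired": {
        "TemperatureThreshold": 40
    }
}
```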
-
-## Clean up resources
-
-If you plan to continue to the next recommended article, you can keep the resources and configurations that you created and reuse them. You can also keep using the same IoT Edge device as a test device.
-
-Otherwise, you can delete the local configurations and the Azure resources that you used in this article to avoid charges.
--
-## Next steps
-
-In this tutorial, you created an IoT Edge module that contains code to filter raw data generated by your IoT Edge device.
-
-You can continue on to the next tutorials to learn how Azure IoT Edge can help you deploy Azure cloud services to process and analyze data at the edge.
-
-> [!div class="nextstepaction"]
-> [Functions](tutorial-deploy-function.md)
-> [Stream Analytics](tutorial-deploy-stream-analytics.md)
-> [Machine Learning](tutorial-deploy-machine-learning.md)
-> [Custom Vision Service](tutorial-deploy-custom-vision.md)
iot-edge Tutorial Develop For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux-on-windows.md
Otherwise, you can delete the local configurations and the Azure resources that
## Next steps
-In this tutorial, you set up Visual Studio on your development machine and deployed and debugged your first IoT Edge module from it. Now that you know the basic concepts, try adding functionality to a module so that it can analyze the data passing through it. Choose your preferred language:
+In this tutorial, you set up Visual Studio on your development machine and deployed and debugged your first IoT Edge module from it. Now that you know the basic concepts, try adding functionality to a module so that it can analyze the data passing through it.
> [!div class="nextstepaction"]
-> [C](tutorial-c-module.md)
-> [C#](tutorial-csharp-module.md)
-> [Java](tutorial-java-module.md)
-> [Node.js](tutorial-node-module.md)
-> [Python](tutorial-python-module.md)
+> [Tutorial: Develop IoT Edge modules with Linux containers](tutorial-develop-for-linux.md)
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
Title: 'Tutorial - Develop module for Linux devices using Azure IoT Edge'
+ Title: Develop module for Linux devices using Azure IoT Edge tutorial
description: This tutorial walks through setting up your development machine and cloud resources to develop IoT Edge modules using Linux containers for Linux devices Previously updated : 07/18/2022 Last updated : 03/31/2023
+zone_pivot_groups: iotedge-dev
# Tutorial: Develop IoT Edge modules with Linux containers [!INCLUDE [iot-edge-version-all-supported](includes/iot-edge-version-all-supported.md)]
-Use [Visual Studio Code](https://code.visualstudio.com/) to develop and deploy code to devices running IoT Edge.
+This tutorial walks through developing and deploying your own code to an IoT Edge device. You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. In the [Deploy code to a Linux device](quickstart-linux.md) quickstart, you created an IoT Edge device and deployed a module from the Azure Marketplace.
-In the [Deploy code to a Linux device](quickstart-linux.md) quickstart, you created an IoT Edge device and deployed a module from the Azure Marketplace. This tutorial walks through developing and deploying your own code to an IoT Edge device. This article is a useful prerequisite for the other tutorials, which go into more detail about specific programming languages or Azure services.
-
-This tutorial uses the example of deploying a **C# module to a Linux device**, the most common developer scenario for IoT Edge solutions. Even if you plan on using a different language or deploying an Azure service, this tutorial is still useful to learn about the development tools and concepts.
+You can choose either the **Azure IoT Edge Dev Tool** command-line tool (CLI) or the **Azure IoT Edge tools for Visual Studio Code** extension as your IoT Edge development tool. Use the tool selector button at the beginning to choose your tool option for this article.
In this tutorial, you learn how to: > [!div class="checklist"] > > * Set up your development machine.
-> * Use the IoT Edge tools for Visual Studio Code to create a new project.
+> * Use the IoT Edge tools to create a new project.
> * Build your project as a [Docker container](/dotnet/architecture/microservices/container-docker-introduction) and store it in an Azure container registry. > * Deploy your code to an IoT Edge device.
+The IoT Edge module that you create in this tutorial filters the temperature data that's generated by your device. It only sends messages upstream if the temperature is above a specified threshold. This type of analysis at the edge is useful for reducing the amount of data that's communicated to and stored in the cloud.
+ ## Prerequisites A development machine: * Use your own computer or a virtual machine.
-* Your development machine must support [nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization) for running a container engine, which you'll install in the next section.
+* Your development machine must support [nested virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization) for running a container engine.
* Most operating systems that can run a container engine can be used to develop IoT Edge modules for Linux devices. This tutorial uses a Windows computer, but points out known differences on macOS or Linux. * Install [Git](https://git-scm.com/), to pull module template packages later in this tutorial.
-* [C# for Visual Studio Code (powered by OmniSharp) extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp).
-* [.NET Core SDK](https://dotnet.microsoft.com/download).
+* Install [Visual Studio Code](https://code.visualstudio.com/)
+* Install the [Azure CLI](/cli/azure/install-azure-cli).
An Azure IoT Edge device:
-* We recommend not to run IoT Edge on your development machine, but instead use a separate device. This distinction between development machine and IoT Edge device simulates a true deployment scenario and helps keep the different concepts straight.
-* If you don't have a second device available, use the quickstart article [Deploy code to a Linux Device](quickstart-linux.md) to create an IoT Edge device in Azure.
+* You should run IoT Edge on a separate device. This distinction between development machine and IoT Edge device simulates a true deployment scenario and helps keep the different concepts separate.
+Use the quickstart article [Deploy code to a Linux Device](quickstart-linux.md) to create an IoT Edge device in Azure.
Cloud resources:
This tutorial walks through the development of an IoT Edge module. An *IoT Edge module*, or sometimes just *module* for short, is a container with executable code. You can deploy one or more modules to an IoT Edge device. Modules perform specific tasks like ingesting data from sensors, cleaning and analyzing data, or sending messages to an IoT hub. For more information, see [Understand Azure IoT Edge modules](iot-edge-modules.md).
-When developing IoT Edge modules, it's important to understand the difference between the development machine and the target IoT Edge device where the module eventually deploys. The container that you build to hold your module code must match the operating system (OS) of the *target device*. For example, the most common scenario is someone developing a module on a Windows computer intending to target a Linux device running IoT Edge. In that case, the container operating system would be Linux. As you go through this tutorial, keep in mind the difference between the *development machine OS* and the *container OS*.
+When developing IoT Edge modules, it's important to understand the difference between the development machine and the target IoT Edge device where the module deploys. The container that you build to hold your module code must match the operating system (OS) of the *target device*. For example, the most common scenario is someone developing a module on a Windows computer intending to target a Linux device running IoT Edge. In that case, the container operating system would be Linux. As you go through this tutorial, keep in mind the difference between the *development machine OS* and the *container OS*.
>[!TIP] >If you're using [IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md), then the *target device* in your scenario is the Linux virtual machine, not the Windows host.
The following table lists the supported development scenarios for **Linux contai
| | Visual Studio Code | Visual Studio 2019/2022 |
| - | - | - |
-| **Linux device architecture** | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 |
+| **Linux device architecture** | Linux AMD64 <br> Linux ARM32v7 <br> Linux ARM64 | Linux AMD64 <br> Linux ARM32 <br> Linux ARM64 |
| **Azure services** | Azure Functions <br> Azure Stream Analytics <br> Azure Machine Learning | | | **Languages** | C <br> C# <br> Java <br> Node.js <br> Python | C <br> C# |
-| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) <br> [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit)| [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) <br> [Azure IoT Edge Tools for Visual Studio 2022](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs17iotedgetools) |
+| **More information** | [Azure IoT Edge for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) | [Azure IoT Edge Tools for Visual Studio 2019](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs16iotedgetools) <br> [Azure IoT Edge Tools for Visual Studio 2022](https://marketplace.visualstudio.com/items?itemName=vsc-iot.vs17iotedgetools) |
## Install container engine
-IoT Edge modules are packaged as containers, so you need a [Docker compatible container management system](support.md#container-engines) on your development machine to build and manage them. We recommend Docker Desktop for development because of its feature support and popularity. Docker Desktop on Windows lets you switch between Linux containers and Windows containers so that you can easily develop modules for different types of IoT Edge devices.
+IoT Edge modules are packaged as containers, so you need a [Docker compatible container management system](support.md#container-engines) on your development machine to build and manage them. We recommend Docker Desktop for development because of its feature support and popularity. Docker Desktop on Windows lets you switch between Linux containers and Windows containers so that you can develop modules for different types of IoT Edge devices.
Use the Docker documentation to install on your development machine: * [Install Docker Desktop for Windows](https://docs.docker.com/docker-for-windows/install/)
- * When you install Docker Desktop for Windows, you're asked whether you want to use Linux or Windows containers. You can change this decision at any time, using an easy switch. For this tutorial, we use Linux containers because our modules are targeting Linux devices. For more information, see [Switch between Windows and Linux containers](https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers).
+ * When you install Docker Desktop for Windows, you're asked whether you want to use Linux or Windows containers. You can change this decision at any time. For this tutorial, we use Linux containers because our modules are targeting Linux devices. For more information, see [Switch between Windows and Linux containers](https://docs.docker.com/docker-for-windows/#switch-between-windows-and-linux-containers).
* [Install Docker Desktop for Mac](https://docs.docker.com/docker-for-mac/install/) * Read [About Docker CE](https://docs.docker.com/install/) for installation information on several Linux platforms. * For the Windows Subsystem for Linux (WSL), install Docker Desktop for Windows.
-## Set up Visual Studio Code and tools
+## Set up tools
++
+* Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) to debug, run, and test your IoT Edge solution. [Python 3.6 or 3.7](https://www.python.org/downloads/) and [Pip3](https://pip.pypa.io/en/stable/installation/) are required for the *Azure IoT Edge Dev Tool*. Install these prerequisites first if needed.
+
+ ```bash
+ pip3 install iotedgedev
+ ```
+
+ > [!NOTE]
+ >
+ > If you have multiple Python versions, including pre-installed Python 2.7 (for example, on Ubuntu or macOS), make sure you use `pip3` to install *IoT Edge Dev Tool (iotedgedev)*.
+ >
+ > For more information about setting up your development machine, see [iotedgedev development setup](https://github.com/Azure/iotedgedev/blob/main/docs/environment-setup/manual-dev-machine-setup.md).
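
To verify that the tool is installed and on your path, you can check its help output:

```bash
iotedgedev --help
```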
++
-Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These extensions provide project templates, automate the creation of the deployment manifest, and allow you to monitor and manage IoT Edge devices. In this section, you install Visual Studio Code and the IoT extension, then set up your Azure account to manage IoT Hub resources from within Visual Studio Code.
+Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These extensions offer project templates, automate the creation of the deployment manifest, and allow you to monitor and manage IoT Edge devices. In this section, you install Visual Studio Code and the IoT extension, then set up your Azure account to manage IoT Hub resources from within Visual Studio Code.
-1. Install [Visual Studio Code](https://code.visualstudio.com/) on your development machine.
+1. Install [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension.
-2. Once the installation finishes, open Visual Studio Code and select **View** > **Extensions**.
+1. Install [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extension.
-3. Search for **Azure IoT Edge** and **Azure IoT Hub**, which are extensions that help you interact with IoT Hub and IoT devices, as well as developing IoT Edge modules.
+1. After you install extensions, open the command palette by selecting **View** > **Command Palette**.
-4. On each extension, select **Install**.
+1. In the command palette again, search for and select **Azure IoT Hub: Select IoT Hub**. Follow the prompts to select your Azure subscription and IoT Hub.
-5. After you install extensions, open the command palette by selecting **View** > **Command Palette**.
+1. Open the explorer section of Visual Studio Code by either selecting the icon in the activity bar on the left, or by selecting **View** > **Explorer**.
-6. In the command palette, search for and select **Azure: Sign in**. Follow the prompts to sign in to your Azure account.
+1. At the bottom of the explorer section, expand the collapsed **Azure IoT Hub / Devices** menu. You should see the devices and IoT Edge devices associated with the IoT Hub that you selected through the command palette.
-7. In the command palette again, search for and select **Azure IoT Hub: Select IoT Hub**. Follow the prompts to select your Azure subscription and IoT hub.
+ :::image type="content" source="./media/tutorial-develop-for-linux/view-iot-hub-devices.png" alt-text="Screenshot that shows your devices in the Azure I o T Hub section of the Explorer menu.":::
-8. Open the explorer section of Visual Studio Code by either selecting the icon in the activity bar on the left, or by selecting **View** > **Explorer**.
-9. At the bottom of the explorer section, expand the collapsed **Azure IoT Hub / Devices** menu. You should see the devices and IoT Edge devices associated with the IoT hub that you selected through the command palette.
+### Install language specific tools
+
+Install tools specific to the language you're developing in:
+
+# [C\#](#tab/csharp)
+
+* [.NET Core SDK](https://dotnet.microsoft.com/download)
+* [C# Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
+
+# [C](#tab/c)
+
+* [C/C++ Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools)
+* Installing the Azure IoT C SDK isn't required for this tutorial, but can provide helpful functionality like IntelliSense and reading program definitions. For installation information, see [Azure IoT C SDKs and Libraries](https://github.com/Azure/azure-iot-sdk-c).
+
+# [Java](#tab/java)
+
+* [Java SE Development Kit 11](/azure/developer/java/fundamentals/java-support-on-azure) and [Maven](https://maven.apache.org/). You need to [set the `JAVA_HOME` environment variable](https://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/) to point to your JDK installation.
+* [Java Extension Pack for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-pack)
+
+>[!TIP]
+>The Java and Maven installation processes add environment variables to your system. Restart any open Visual Studio Code terminal, PowerShell, or command prompt instances after completing installation. This step ensures that these utilities can recognize the Java and Maven commands going forward.
+# [Node.js](#tab/node)
+* [Node.js](https://nodejs.org).
+* [Yeoman](https://www.npmjs.com/package/yo)
+* [Azure IoT Edge Node.js Module Generator](https://www.npmjs.com/package/generator-azure-iot-edge-module).
+
+# [Python](#tab/python)
+
+To develop an IoT Edge module in Python, install the following additional prerequisites on your development machine:
+
+* [Python](https://www.python.org/downloads/) and [Pip](https://pip.pypa.io/en/stable/installation/).
+* [Cookiecutter](https://github.com/audreyr/cookiecutter).
+* [Python extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-python.python).
+
+>[!Note]
+>Ensure that your `bin` folder is on your path for your platform. Typically `~/.local/` for UNIX and macOS, or `%APPDATA%\Python` on Windows.
++ [!INCLUDE [iot-edge-create-container-registry](includes/iot-edge-create-container-registry.md)] ## Create a new module project
-The Azure IoT Edge extension provides project templates for all supported IoT Edge module languages in Visual Studio Code. These templates have all the files and code that you need to deploy a working module to test IoT Edge, or give you a starting point to customize the template with your own business logic.
-
-For this tutorial, we use the C# module template because it's the most commonly used template.
+The Azure IoT Edge extension offers project templates for all supported IoT Edge module languages in Visual Studio Code. These templates have all the files and code that you need to deploy a working module to test IoT Edge, or give you a starting point to customize the template with your own business logic.
### Create a project template
-In the Visual Studio Code command palette, search for and select **Azure IoT Edge: New IoT Edge Solution**. Follow the prompts to create your solution:
-1. Select folder: choose the location on your development machine for Visual Studio Code to create the solution files.
-1. Provide a solution name: enter a descriptive name for your solution or accept the default **EdgeSolution**.
-1. Select a module template: choose **C# Module**.
-1. Provide a module name: accept the default **SampleModule**.
-1. Provide Docker image repository for the module: an image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the **Login server** value from the Overview page of your container registry in the Azure portal.
+The [IoT Edge Dev Tool](https://github.com/Azure/iotedgedev) simplifies Azure IoT Edge development to commands driven by environment variables. It gets you started with IoT Edge development with the IoT Edge Dev Container and IoT Edge solution scaffolding that has a default module and all the required configuration files.
- The final image repository looks like:
-
- \<registry name\>.azurecr.io/samplemodule.
+1. Create a directory for your solution with the path of your choice. Change into your `iotedgesolution` directory.
- :::image type="content" source="./media/tutorial-develop-for-linux/image-repository.png" alt-text="Screenshot showing where to provide a Docker image repository in the command palette.":::
+ ```bash
+   mkdir c:\dev\iotedgesolution
+   cd c:\dev\iotedgesolution
+ ```
-Once your new solution loads in the Visual Studio Code window, take a moment to familiarize yourself with the files that it created:
+1. Use the **iotedgedev solution init** command to create a solution and set up your Azure IoT Hub in the development language of your choice.
-* The **.vscode** folder contains a file called **launch.json**, which you use for debugging modules.
-* The **modules** folder contains a folder for each module in your solution. Right now, that should only be **SampleModule**, or whatever name you gave to the module. The SampleModule folder contains the main program code, the module metadata, and several Docker files.
-* The **.env** file holds the credentials to your container registry. Your IoT Edge device also has access to these credentials so it can pull the container images.
-* The **deployment.debug.template.json** file and **deployment.template.json** file are templates that help you create a deployment manifest. A *deployment manifest* is a file that defines exactly which modules you want deployed on a device. The manifest also determines how you want to configure the modules and how they communicate with each other and the cloud. The template files use pointers for some values. When you transform the template into a true deployment manifest, the pointers replace values taken from other solution files.
-* Open the **deployment.template.json** file and locate two common placeholders:
- * In the `registryCredentials` section, the auto-filled address has information you provided when you created the solution. However, the username and password reference the variables stored in the .env file. This configuration is for security, as the .env file is git ignored, but the deployment template isn't.
- * In the `SampleModule` section, the container image isn't auto-filled even though you provided the image repository when you created the solution. This placeholder points to the **module.json** file inside the SampleModule folder. If you go to that file, you see that the image field does contain the repository, but also a tag value that contains the version and the platform of the container. You can iterate the version manually as part of your development cycle, and you select the container platform using a switcher that we introduce later in this section.
+ # [C\#](#tab/csharp)
+
+ ```bash
+ iotedgedev solution init --template csharp
+ ```
+
+ # [C](#tab/c)
+
+ ```bash
+ iotedgedev solution init --template c
+ ```
-### Set IoT Edge runtime version
+ # [Java](#tab/java)
+
+ ```bash
+ iotedgedev solution init --template java
+ ```
+
+ # [Node.js](#tab/node)
+
+ ```bash
+ iotedgedev solution init --template nodejs
+ ```
+
+ # [Python](#tab/python)
+
+ ```bash
+ iotedgedev solution init --template python
+ ```
+
+
+
+The *iotedgedev solution init* script prompts you to complete several steps, including:
+
+* Authenticate to Azure
+* Choose an Azure subscription
+* Choose or create a resource group
+* Choose or create an Azure IoT Hub
+* Choose or create an Azure IoT Edge device
++
-The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is 1.4. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio Code to match.
+Use Visual Studio Code and the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) extension. You start by creating a solution, and then generating the first module in that solution. Each solution can contain multiple modules.
1. Select **View** > **Command Palette**.
+1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge Solution**.
+1. Browse to the folder where you want to create the new solution and then select **Select folder**.
+1. Enter a name for your solution.
+1. Select a module template for your preferred development language to be the first module in the solution.
+1. Enter a name for your module. Choose a name that's unique within your container registry.
+1. Provide the name of the module's image repository. Visual Studio Code autopopulates the module name with **localhost:5000/<your module name\>**. Replace it with your own registry information. Use **localhost** if you use a local Docker registry for testing. If you use Azure Container Registry, then use **Login server** from your registry's settings. The sign-in server looks like **_\<registry name\>_.azurecr.io**. Only replace the **localhost:5000** part of the string so that the final result looks like **\<*registry name*\>.azurecr.io/_\<your module name\>_**.
-1. In the command palette, enter and run the command **Azure IoT Edge: Set default IoT Edge runtime version**.
+ :::image type="content" source="./media/how-to-develop-csharp-module/repository.png" alt-text="Screenshot of how to provide a Docker image repository." lightbox="./media/how-to-develop-csharp-module/repository.png":::
-1. Choose the runtime version that your IoT Edge devices are running from the list.
+Visual Studio Code takes the information you provided, creates an IoT Edge solution, and then loads it in a new window.
-After you select a new runtime version, your deployment manifest becomes dynamically updated to reflect the change to the runtime module images.
+
+After solution creation, these main files are in the solution:
+
+- A **.vscode** folder contains configuration file *launch.json*.
+- A **modules** folder that has subfolders for each module. Within the subfolder for each module, the module.json file controls how modules are built and deployed.
+- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost:5000* by default.
+
+- Two module deployment files named **deployment.template.json** and **deployment.debug.template.json** list the modules to deploy to your device. By default, the list includes the IoT Edge system modules (edgeAgent and edgeHub) and sample modules such as:
+ - **filtermodule** is a sample module that implements a simple filter function.
+ - **SimulatedTemperatureSensor** module that simulates data you can use for testing. For more information about how deployment manifests work, see [Learn how to use deployment manifests to deploy modules and establish routes](module-composition.md). For more information on how the simulated temperature module works, see the [SimulatedTemperatureSensor.csproj source code](https://github.com/Azure/iotedge/tree/master/edge-modules/SimulatedTemperatureSensor).
+
+ > [!NOTE]
+ > The exact modules installed may depend on your language of choice.
++
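
As an illustration, the deployment templates use placeholders that are resolved when you build the solution: the registry credentials reference the variables in the *.env* file, and each module image points to its *module.json* file. A sketch of what the credentials placeholders look like (the registry name is hypothetical):

```json
"registryCredentials": {
    "myacr": {
        "username": "$CONTAINER_REGISTRY_USERNAME",
        "password": "$CONTAINER_REGISTRY_PASSWORD",
        "address": "myacr.azurecr.io"
    }
}
```

Similarly, a module image is typically referenced with a placeholder like `"image": "${MODULES.filtermodule}"`, which resolves to the full image name from that module's *module.json* file for the selected platform.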
+### Set IoT Edge runtime version
+
+The latest stable IoT Edge system module version is 1.4. Set your system modules to version 1.4.
+
+1. In Visual Studio Code, open **deployment.template.json** deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) is a JSON document that describes the modules to be configured on the targeted IoT Edge device.
+1. Change the runtime version for the system runtime module images **edgeAgent** and **edgeHub**. For example, if you want to use the IoT Edge runtime version 1.4, change the following lines in the deployment manifest file:
+
+ ```json
+ "systemModules": {
+ "edgeAgent": {
+
+ "image": "mcr.microsoft.com/azureiotedge-agent:1.4",
+
+ "edgeHub": {
+
+ "image": "mcr.microsoft.com/azureiotedge-hub:1.4",
+ ```
+ ### Provide your registry credentials to the IoT Edge agent The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your container images onto the IoT Edge device.
->[!NOTE]
->If you didn't replace the **localhost:5000** value with the login server value from your Azure container registry, in the [**Create a project template**](#create-a-project-template) step, the **.env** file and the `registryCredentials` section of the deployment manifest will be missing. If that section is missing, return to the **Provide Docker image repository for the module** step in the **Create a project template** section to see how to replace the **localhost:5000** value.
+The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file.
+
+> [!NOTE]
+> The environment file is only created if you provide an image repository for the module. If you accepted the localhost defaults to test and debug locally, then you don't need to declare environment variables.
-The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials exist. If not, add them now:
+Check to see if your credentials exist. If not, add them now:
+1. If Azure Container Registry is your registry, set an Azure Container Registry username and password. Get these values from your container registry's **Settings** > **Access keys** menu in the Azure portal.
1. Open the **.env** file in your module solution.
-2. Add the **username** and **password** values that you copied from your Azure container registry.
-3. Save your changes to the .env file.
+1. Add the **username** and **password** values that you copied from your Azure container registry.
+ For example:
+
+ ```env
+ CONTAINER_REGISTRY_SERVER="myacr.azurecr.io"
+ CONTAINER_REGISTRY_USERNAME="myacr"
+ CONTAINER_REGISTRY_PASSWORD="<registry_password>"
+ ```
+1. Save your changes to the *.env* file.
+
+> [!NOTE]
+> This tutorial uses administrator login credentials for Azure Container Registry that are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals or repository-scoped tokens. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
+
+### Target architecture
+
+You need to select the architecture you're targeting with each solution, because that affects how the container is built and runs. The default is Linux AMD64. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so we keep the default **amd64**.
->[!NOTE]
->This tutorial uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals or repository-scoped tokens. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
+If you need to change the target architecture for your solution, use the following steps.
-### Select your target architecture
-Currently, Visual Studio Code can develop C# modules for Linux AMD64 and ARM32v7 devices. You need to select which architecture you're targeting with each solution, because that affects how the container gets built and runs. The default is Linux AMD64.
+1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
-1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon at the bottom of the window.
+1. In the command palette, select the target architecture from the list of options.
- :::image type="content" source="./media/tutorial-develop-for-linux/select-architecture.png" alt-text="Screenshot showing the location of the architecture icon at the bottom of the Visual Studio Code window." lightbox="./media/tutorial-develop-for-linux/select-architecture.png":::
-2. In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so we keep the default **amd64**.
-### Review the sample code
+1. Open or create **settings.json** in the **.vscode** directory of your solution.
-The solution template that you created includes sample code for an IoT Edge module. This sample module simply receives messages and then passes them on. The pipeline functionality demonstrates an important concept in IoT Edge, which is how modules communicate with each other.
+1. Change the *platform* value to `amd64`, `arm32v7`, `arm64v8`, or `windows-amd64`. For example:
+
+ ```json
+ {
+ "azure-iot-edge.defaultPlatform": {
+ "platform": "amd64",
+ "alias": null
+ }
+ }
+ ```
++
+### Update module with custom code
+
+Each template includes sample code that takes simulated sensor data from the **SimulatedTemperatureSensor** module and routes it to the IoT hub. The sample module receives messages and then passes them on. The pipeline functionality demonstrates an important concept in IoT Edge, which is how modules communicate with each other.
Each module can have multiple *input* and *output* queues declared in their code. The IoT Edge hub running on the device routes messages from the output of one module into the input of one or more modules. The specific code for declaring inputs and outputs varies between languages, but the concept is the same across all modules. For more information about routing between modules, see [Declare routes](module-composition.md#declare-routes).
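
As an illustration, a route in the deployment manifest wires one module's output to another module's input. The names in this sketch are hypothetical:

```json
"routes": {
    "sensorToFilter": "FROM /messages/modules/tempSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/filtermodule/inputs/input1\")"
}
```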
+# [C\#](#tab/csharp)
+ The sample C# code that comes with the project template uses the [ModuleClient Class](/dotnet/api/microsoft.azure.devices.client.moduleclient) from the IoT Hub SDK for .NET.
-1. Open the **ModuleBackgroundService.cs** file, which is inside the **modules/SampleModule/** folder.
+1. In the Visual Studio Code explorer, open **modules** > **CSharpModule** > **ModuleBackgroundService.cs**.
+
+1. At the top of the **CSharpModule** namespace, add three **using** statements for types that are used later:
+
+ ```csharp
+ using System.Collections.Generic; // For KeyValuePair<>
+ using Microsoft.Azure.Devices.Shared; // For TwinCollection
+ using Newtonsoft.Json; // For JsonConvert
+ ```
+
+1. Add the **temperatureThreshold** variable to the **ModuleBackgroundService** class. This variable sets the value that the measured temperature must exceed for the data to be sent to the IoT hub.
+
+ ```csharp
+ static int temperatureThreshold { get; set; } = 25;
+ ```
+
+1. Add the **MessageBody**, **Machine**, and **Ambient** classes to the **ModuleBackgroundService** class. These classes define the expected schema for the body of incoming messages.
+
+ ```csharp
+ class MessageBody
+ {
+ public Machine machine {get;set;}
+ public Ambient ambient {get; set;}
+ public string timeCreated {get; set;}
+ }
+ class Machine
+ {
+ public double temperature {get; set;}
+ public double pressure {get; set;}
+ }
+ class Ambient
+ {
+ public double temperature {get; set;}
+ public int humidity {get; set;}
+ }
+ ```
+
+1. Find the **Init** function. This function creates and configures a **ModuleClient** object, which allows the module to connect to the local Azure IoT Edge runtime to send and receive messages. After creating the **ModuleClient**, the code reads the **temperatureThreshold** value from the module twin's desired properties. The code registers a callback to receive messages from an IoT Edge hub via an endpoint called **input1**.
+
+ Replace the **SetInputMessageHandlerAsync** method with a new one that updates the name of the endpoint and the method that's called when input arrives. Also, add a **SetDesiredPropertyUpdateCallbackAsync** method for updates to the desired properties. To make this change, replace the last line of the **Init** method with the following code:
+
+ ```csharp
+ // Register a callback for messages that are received by the module.
+ // await ioTHubModuleClient.SetInputMessageHandlerAsync("input1", PipeMessage, iotHubModuleClient);
+
+ // Read the TemperatureThreshold value from the module twin's desired properties
+ var moduleTwin = await ioTHubModuleClient.GetTwinAsync();
+ await OnDesiredPropertiesUpdate(moduleTwin.Properties.Desired, ioTHubModuleClient);
+
+ // Attach a callback for updates to the module twin's desired properties.
+ await ioTHubModuleClient.SetDesiredPropertyUpdateCallbackAsync(OnDesiredPropertiesUpdate, null);
+
+ // Register a callback for messages that are received by the module. Messages received on the inputFromSensor endpoint are sent to the FilterMessages method.
+ await ioTHubModuleClient.SetInputMessageHandlerAsync("inputFromSensor", FilterMessages, ioTHubModuleClient);
+ ```
+
+1. Add the **OnDesiredPropertiesUpdate** method to the **ModuleBackgroundService** class. This method receives updates on the desired properties from the module twin, and updates the **temperatureThreshold** variable to match. All modules have their own module twin, which lets you configure the code that's running inside a module directly from the cloud.
+
+ ```csharp
+ static Task OnDesiredPropertiesUpdate(TwinCollection desiredProperties, object userContext)
+ {
+ try
+ {
+ Console.WriteLine("Desired property change:");
+ Console.WriteLine(JsonConvert.SerializeObject(desiredProperties));
+
+ if (desiredProperties["TemperatureThreshold"]!=null)
+ temperatureThreshold = desiredProperties["TemperatureThreshold"];
+
+ }
+ catch (AggregateException ex)
+ {
+ foreach (Exception exception in ex.InnerExceptions)
+ {
+ Console.WriteLine();
+ Console.WriteLine("Error when receiving desired property: {0}", exception);
+ }
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine();
+ Console.WriteLine("Error when receiving desired property: {0}", ex.Message);
+ }
+ return Task.CompletedTask;
+ }
+ ```
+
+1. Replace the **PipeMessage** method with the **FilterMessages** method. This method is called whenever the module receives a message from the IoT Edge hub. It filters out messages that report temperatures below the temperature threshold set via the module twin. It also adds the **MessageType** property to the message with the value set to **Alert**.
+
+ ```csharp
+ static async Task<MessageResponse> FilterMessages(Message message, object userContext)
+ {
+ var counterValue = Interlocked.Increment(ref counter);
+ try
+ {
+ ModuleClient moduleClient = (ModuleClient)userContext;
+ var messageBytes = message.GetBytes();
+ var messageString = Encoding.UTF8.GetString(messageBytes);
+ Console.WriteLine($"Received message {counterValue}: [{messageString}]");
+
+ // Get the message body.
+ var messageBody = JsonConvert.DeserializeObject<MessageBody>(messageString);
+
+ if (messageBody != null && messageBody.machine.temperature > temperatureThreshold)
+ {
+ Console.WriteLine($"Machine temperature {messageBody.machine.temperature} " +
+ $"exceeds threshold {temperatureThreshold}");
+ using (var filteredMessage = new Message(messageBytes))
+ {
+ foreach (KeyValuePair<string, string> prop in message.Properties)
+ {
+ filteredMessage.Properties.Add(prop.Key, prop.Value);
+ }
+
+ filteredMessage.Properties.Add("MessageType", "Alert");
+ await moduleClient.SendEventAsync("output1", filteredMessage);
+ }
+ }
+
+ // Indicate that the message treatment is completed.
+ return MessageResponse.Completed;
+ }
+ catch (AggregateException ex)
+ {
+ foreach (Exception exception in ex.InnerExceptions)
+ {
+ Console.WriteLine();
+ Console.WriteLine("Error in sample: {0}", exception);
+ }
+ // Indicate that the message treatment is not completed.
+ var moduleClient = (ModuleClient)userContext;
+ return MessageResponse.Abandoned;
+ }
+ catch (Exception ex)
+ {
+ Console.WriteLine();
+ Console.WriteLine("Error in sample: {0}", ex.Message);
+ // Indicate that the message treatment is not completed.
+ ModuleClient moduleClient = (ModuleClient)userContext;
+ return MessageResponse.Abandoned;
+ }
+ }
+ ```
+
+1. Save the **ModuleBackgroundService.cs** file.
+
+1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+
+1. Because we changed the name of the endpoint that the module listens on, we also need to update the routes in the deployment manifest so that the edgeHub sends messages to the new endpoint.
+
+ Find the **routes** section in the **$edgeHub** module twin. Update the **sensorToCSharpModule** route to replace `input1` with `inputFromSensor`:
+
+ ```json
+ "sensorToCSharpModule": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/CSharpModule/inputs/inputFromSensor\")"
+ ```
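+
+   After this change, the **routes** section should look similar to the following sketch. The upstream route name, **CSharpModuleToIoTHub** here, is an assumption based on the template defaults and may differ in your file:
+
+   ```json
+   "routes": {
+     "CSharpModuleToIoTHub": "FROM /messages/modules/CSharpModule/outputs/* INTO $upstream",
+     "sensorToCSharpModule": "FROM /messages/modules/SimulatedTemperatureSensor/outputs/temperatureOutput INTO BrokeredEndpoint(\"/modules/CSharpModule/inputs/inputFromSensor\")"
+   }
+   ```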
+
+1. Add the **CSharpModule** module twin to the deployment manifest. Insert the following JSON content at the bottom of the **modulesContent** section, after the **$edgeHub** module twin:
+
+ ```json
+ "CSharpModule": {
+ "properties.desired":{
+ "TemperatureThreshold":25
+ }
+ }
+ ```
+
+1. Save the **deployment.template.json** file.
+
+# [C](#tab/c)
+
+1. The data from the sensor in this scenario arrives in JSON format. To filter the JSON messages, import a JSON library for C. This tutorial uses Parson.
+
+ 1. Download the [Parson GitHub repository](https://github.com/kgabis/parson). Copy the **parson.c** and **parson.h** files into the **filtermodule** folder.
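+
+      For example, from the root of your solution (a sketch; the paths assume the default solution layout):
+
+      ```bash
+      # Clone Parson and copy the two source files into the module folder
+      git clone https://github.com/kgabis/parson.git
+      cp parson/parson.c parson/parson.h ./modules/filtermodule/
+      ```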
+
+ 1. Open **modules** > **filtermodule** > **CMakeLists.txt**. At the top of the file, import the Parson files as a library called **my_parson**.
+
+ ```txt
+ add_library(my_parson
+ parson.c
+ parson.h
+ )
+ ```
+
+ 1. Add `my_parson` to the list of libraries in the **target_link_libraries** function of CMakeLists.txt.
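+
+      The updated function might look like the following sketch. Only `my_parson` is new here; the target and the other library names come from the template-generated CMakeLists.txt and may differ in your file:
+
+      ```txt
+      target_link_libraries(main
+          iothub_client
+          my_parson
+      )
+      ```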
+
+ 1. Save the **CMakeLists.txt** file.
+
+ 1. Open **modules** > **filtermodule** > **main.c**. At the bottom of the list of include statements, add a new one to include `parson.h` for JSON support:
+
+ ```c
+ #include "parson.h"
+ ```
+
+1. In the **main.c** file, add a global variable called `temperatureThreshold` after the include section. This variable sets the value that the measured temperature must exceed for the data to be sent to IoT Hub.
+
+ ```c
+ static double temperatureThreshold = 25;
+ ```
+
+1. Find the `CreateMessageInstance` function in main.c. Replace the inner if-else statement with the following code that adds a few lines of functionality:
+
+ ```c
+ if ((messageInstance->messageHandle = IoTHubMessage_Clone(message)) == NULL)
+ {
+ free(messageInstance);
+ messageInstance = NULL;
+ }
+ else
+ {
+ messageInstance->messageTrackingId = messagesReceivedByInput1Queue;
+ MAP_HANDLE propMap = IoTHubMessage_Properties(messageInstance->messageHandle);
+ if (Map_AddOrUpdate(propMap, "MessageType", "Alert") != MAP_OK)
+ {
+ printf("ERROR: Map_AddOrUpdate Failed!\r\n");
+ }
+ }
+ ```
+
+ The new lines of code in the else statement add a new property to the message, which labels the message as an alert. This code labels all messages as alerts, because we'll add functionality that only sends messages to IoT Hub if they report high temperatures.
+
+1. Replace the entire `InputQueue1Callback` function with the following code. This function implements the actual message filter. When a message is received, the function checks whether the reported temperature exceeds the threshold. If it does, it forwards the message through its output queue; if not, it ignores the message.
+
+ ```c
+ static unsigned char *bytearray_to_str(const unsigned char *buffer, size_t len)
+ {
+ unsigned char *ret = (unsigned char *)malloc(len + 1);
+ memcpy(ret, buffer, len);
+ ret[len] = '\0';
+ return ret;
+ }
+
+ static IOTHUBMESSAGE_DISPOSITION_RESULT InputQueue1Callback(IOTHUB_MESSAGE_HANDLE message, void* userContextCallback)
+ {
+ IOTHUBMESSAGE_DISPOSITION_RESULT result;
+ IOTHUB_CLIENT_RESULT clientResult;
+ IOTHUB_MODULE_CLIENT_LL_HANDLE iotHubModuleClientHandle = (IOTHUB_MODULE_CLIENT_LL_HANDLE)userContextCallback;
+
+ unsigned const char* messageBody;
+ size_t contentSize;
+
+ if (IoTHubMessage_GetByteArray(message, &messageBody, &contentSize) == IOTHUB_MESSAGE_OK)
+ {
+ messageBody = bytearray_to_str(messageBody, contentSize);
+ } else
+ {
+ messageBody = "<null>";
+ }
+
+ printf("Received Message [%zu]\r\n Data: [%s]\r\n",
+ messagesReceivedByInput1Queue, messageBody);
+
+ // Check if the message reports temperatures higher than the threshold
+ JSON_Value *root_value = json_parse_string(messageBody);
+ JSON_Object *root_object = json_value_get_object(root_value);
+ double temperature;
+ if (json_object_dotget_value(root_object, "machine.temperature") != NULL && (temperature = json_object_dotget_number(root_object, "machine.temperature")) > temperatureThreshold)
+ {
+ printf("Machine temperature %f exceeds threshold %f\r\n", temperature, temperatureThreshold);
+        // This message should be sent to the next stage in the pipeline, namely "output1". What happens at "output1" is determined
+        // by the configuration of the Edge routing table setup.
+ MESSAGE_INSTANCE *messageInstance = CreateMessageInstance(message);
+ if (NULL == messageInstance)
+ {
+ result = IOTHUBMESSAGE_ABANDONED;
+ }
+ else
+ {
+ printf("Sending message (%zu) to the next stage in pipeline\n", messagesReceivedByInput1Queue);
+
+ clientResult = IoTHubModuleClient_LL_SendEventToOutputAsync(iotHubModuleClientHandle, messageInstance->messageHandle, "output1", SendConfirmationCallback, (void *)messageInstance);
+ if (clientResult != IOTHUB_CLIENT_OK)
+ {
+ IoTHubMessage_Destroy(messageInstance->messageHandle);
+ free(messageInstance);
+ printf("IoTHubModuleClient_LL_SendEventToOutputAsync failed on sending msg#=%zu, err=%d\n", messagesReceivedByInput1Queue, clientResult);
+ result = IOTHUBMESSAGE_ABANDONED;
+ }
+ else
+ {
+ result = IOTHUBMESSAGE_ACCEPTED;
+ }
+ }
+ }
+ else
+ {
+ printf("Not sending message (%zu) to the next stage in pipeline.\r\n", messagesReceivedByInput1Queue);
+ result = IOTHUBMESSAGE_ACCEPTED;
+ }
+
+ messagesReceivedByInput1Queue++;
+ return result;
+ }
+ ```
+
+1. Add a `moduleTwinCallback` function. This method receives updates on the desired properties from the module twin, and updates the **temperatureThreshold** variable to match. All modules have their own module twin, which lets you configure the code running inside a module directly from the cloud.
+
+ ```c
+ static void moduleTwinCallback(DEVICE_TWIN_UPDATE_STATE update_state, const unsigned char* payLoad, size_t size, void* userContextCallback)
+ {
+ printf("\r\nTwin callback called with (state=%s, size=%zu):\r\n%s\r\n",
+ MU_ENUM_TO_STRING(DEVICE_TWIN_UPDATE_STATE, update_state), size, payLoad);
+ JSON_Value *root_value = json_parse_string(payLoad);
+ JSON_Object *root_object = json_value_get_object(root_value);
+ if (json_object_dotget_value(root_object, "desired.TemperatureThreshold") != NULL) {
+ temperatureThreshold = json_object_dotget_number(root_object, "desired.TemperatureThreshold");
+ }
+ if (json_object_get_value(root_object, "TemperatureThreshold") != NULL) {
+ temperatureThreshold = json_object_get_number(root_object, "TemperatureThreshold");
+ }
+ }
+ ```
+
+1. Find the `SetupCallbacksForModule` function. Replace the function with the following code that adds an **else if** statement to check if the module twin has been updated.
+
+ ```c
+ static int SetupCallbacksForModule(IOTHUB_MODULE_CLIENT_LL_HANDLE iotHubModuleClientHandle)
+ {
+ int ret;
+
+ if (IoTHubModuleClient_LL_SetInputMessageCallback(iotHubModuleClientHandle, "input1", InputQueue1Callback, (void*)iotHubModuleClientHandle) != IOTHUB_CLIENT_OK)
+ {
+ printf("ERROR: IoTHubModuleClient_LL_SetInputMessageCallback(\"input1\")..........FAILED!\r\n");
+ ret = MU_FAILURE;
+ }
+ else if (IoTHubModuleClient_LL_SetModuleTwinCallback(iotHubModuleClientHandle, moduleTwinCallback, (void*)iotHubModuleClientHandle) != IOTHUB_CLIENT_OK)
+ {
+ printf("ERROR: IoTHubModuleClient_LL_SetModuleTwinCallback(default)..........FAILED!\r\n");
+ ret = MU_FAILURE;
+ }
+ else
+ {
+ ret = 0;
+ }
+
+ return ret;
+ }
+ ```
+
+1. Save the **main.c** file.
-2. In **ModuleBackgroundService.cs**, find the **SetInputMessageHandlerAsync** method.
+1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+
+1. Add the filtermodule module twin to the deployment manifest. Insert the following JSON content at the bottom of the `modulesContent` section, after the `$edgeHub` module twin:
+
+ ```json
+ "filtermodule": {
+ "properties.desired":{
+ "TemperatureThreshold":25
+ }
+ }
+ ```
+
+1. Save the **deployment.template.json** file.
+
+# [Java](#tab/java)
+
+1. In the Visual Studio Code explorer, open **modules** > **filtermodule** > **src** > **main** > **java** > **com** > **edgemodule** > **App.java**.
+
+1. Add the following code at the top of the file to import new referenced classes.
+
+ ```java
+ import java.io.StringReader;
+ import java.util.concurrent.atomic.AtomicLong;
+ import java.util.HashMap;
+ import java.util.Map;
+
+ import javax.json.Json;
+ import javax.json.JsonObject;
+ import javax.json.JsonReader;
+
+ import com.microsoft.azure.sdk.iot.device.DeviceTwin.Pair;
+ import com.microsoft.azure.sdk.iot.device.DeviceTwin.Property;
+ import com.microsoft.azure.sdk.iot.device.DeviceTwin.TwinPropertyCallBack;
+ ```
+
+1. Add the following definition into class **App**. This variable sets a temperature threshold. The measured machine temperature won't be reported to IoT Hub until it goes over this value.
+
+ ```java
+ private static final String TEMP_THRESHOLD = "TemperatureThreshold";
+ private static AtomicLong tempThreshold = new AtomicLong(25);
+ ```
+
+1. Replace the execute method of **MessageCallbackMqtt** with the following code. This method is called whenever the module receives an MQTT message from the IoT Edge hub. It filters out messages that report temperatures below the temperature threshold set via the module twin.
+
+ ```java
+ protected static class MessageCallbackMqtt implements MessageCallback {
+ private int counter = 0;
+ @Override
+ public IotHubMessageResult execute(Message msg, Object context) {
+ this.counter += 1;
+
+ String msgString = new String(msg.getBytes(), Message.DEFAULT_IOTHUB_MESSAGE_CHARSET);
+ System.out.println(
+ String.format("Received message %d: %s",
+ this.counter, msgString));
+ if (context instanceof ModuleClient) {
+ try (JsonReader jsonReader = Json.createReader(new StringReader(msgString))) {
+ final JsonObject msgObject = jsonReader.readObject();
+ double temperature = msgObject.getJsonObject("machine").getJsonNumber("temperature").doubleValue();
+ long threshold = App.tempThreshold.get();
+ if (temperature >= threshold) {
+ ModuleClient client = (ModuleClient) context;
+ System.out.println(
+ String.format("Temperature above threshold %d. Sending message: %s",
+ threshold, msgString));
+ client.sendEventAsync(msg, eventCallback, msg, App.OUTPUT_NAME);
+ }
+ } catch (Exception e) {
+ e.printStackTrace();
+ }
+ }
+ return IotHubMessageResult.COMPLETE;
+ }
+ }
+ ```
+
+1. Add the following two static inner classes into class **App**. These classes update the tempThreshold variable when the module twin's desired property changes. All modules have their own module twin, which lets you configure the code that's running inside a module directly from the cloud.
+
+ ```java
+ protected static class DeviceTwinStatusCallBack implements IotHubEventCallback {
+ @Override
+ public void execute(IotHubStatusCode status, Object context) {
+ System.out.println("IoT Hub responded to device twin operation with status " + status.name());
+ }
+ }
+
+ protected static class OnProperty implements TwinPropertyCallBack {
+ @Override
+ public void TwinPropertyCallBack(Property property, Object context) {
+ if (!property.getIsReported()) {
+ if (property.getKey().equals(App.TEMP_THRESHOLD)) {
+ try {
+ long threshold = Math.round((double) property.getValue());
+ App.tempThreshold.set(threshold);
+ } catch (Exception e) {
+                    System.out.println("Failed to set TemperatureThreshold with exception");
+ e.printStackTrace();
+ }
+ }
+ }
+ }
+ }
+ ```
+
+1. Add the following lines to the **main** method after **client.open()** to subscribe to module twin updates.
+
+ ```java
+ client.startTwin(new DeviceTwinStatusCallBack(), null, new OnProperty(), null);
+ Map<Property, Pair<TwinPropertyCallBack, Object>> onDesiredPropertyChange = new HashMap<Property, Pair<TwinPropertyCallBack, Object>>() {
+ {
+ put(new Property(App.TEMP_THRESHOLD, null), new Pair<TwinPropertyCallBack, Object>(new OnProperty(), null));
+ }
+ };
+ client.subscribeToTwinDesiredProperties(onDesiredPropertyChange);
+ client.getTwin();
+ ```
+
+1. Save the **App.java** file.
+
+1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+
+1. Add the **filtermodule** module twin to the deployment manifest. Insert the following JSON content at the bottom of the **modulesContent** section, after the **$edgeHub** module twin:
+
+ ```json
+ "filtermodule": {
+ "properties.desired":{
+ "TemperatureThreshold":25
+ }
+ }
+ ```
- The [SetInputMessageHandlerAsync](/dotnet/api/microsoft.azure.devices.client.moduleclient.setinputmessagehandlerasync) method sets up an input queue to receive incoming messages. Review this method and see how it initializes an input queue called **input1**.
+1. Save the **deployment.template.json** file.
- :::image type="content" source="./media/tutorial-develop-for-linux/declare-input-queue.png" alt-text="Screenshot showing where to find the input name in the SetInputMessageCallback constructor." lightbox="./media/tutorial-develop-for-linux/declare-input-queue.png":::
-4. Next, find the **SendEventAsync** method.
+# [Node.js](#tab/node)
- The [SendEventAsync](/dotnet/api/microsoft.azure.devices.client.moduleclient.sendeventasync) method processes received messages and sets up an output queue to pass them along. Review this method and see that it initializes an output queue called **output1**.
+1. In the Visual Studio Code explorer, open **modules** > **filtermodule** > **app.js**.
- :::image type="content" source="./media/tutorial-develop-for-linux/declare-output-queue.png" alt-text="Screenshot showing where to find the output name in SendEventAsync method." lightbox="./media/tutorial-develop-for-linux/declare-output-queue.png":::
+1. Add a temperature threshold variable below the required node modules. The temperature threshold sets the value that the measured temperature must exceed for the data to be sent to IoT Hub.
-6. Open the **deployment.template.json** file.
+ ```javascript
+ var temperatureThreshold = 25;
+ ```
-7. Find the **modules** property nested in **$edgeAgent**.
+1. Replace the entire `pipeMessage` function with the `filterMessage` function.
- There should be two modules listed here. One is the **SimulatedTemperatureSensor** module, which is included in all the templates by default to provide simulated temperature data that you can use to test your modules. The other is the **SampleModule** module that you created as part of this solution.
+ ```javascript
+ // This function filters out messages that report temperatures below the temperature threshold.
+ // It also adds the MessageType property to the message with the value set to Alert.
+ function filterMessage(client, inputName, msg) {
+ client.complete(msg, printResultFor('Receiving message'));
+ if (inputName === 'input1') {
+ var message = msg.getBytes().toString('utf8');
+ var messageBody = JSON.parse(message);
+ if (messageBody && messageBody.machine && messageBody.machine.temperature && messageBody.machine.temperature > temperatureThreshold) {
+ console.log(`Machine temperature ${messageBody.machine.temperature} exceeds threshold ${temperatureThreshold}`);
+ var outputMsg = new Message(message);
+ outputMsg.properties.add('MessageType', 'Alert');
+ client.sendOutputEvent('output1', outputMsg, printResultFor('Sending received message'));
+ }
+ }
+ }
-8. At the bottom of the file, find **properties.desired** within the **$edgeHub** module.
+ ```
- One of the functions of the IoT Edge hub module is to route messages between all the modules in a deployment. Review the values in the **routes** property. One route, **SampleModuleToIoTHub**, uses a wildcard character (**\***) to indicate any messages coming from any output queues in the SampleModule module. These messages go into *$upstream*, which is a reserved name that indicates IoT Hub. The other route, **sensorToSampleModule**, takes messages coming from the SimulatedTemperatureSensor module and routes them to the *input1* input queue that you saw initialized in the SampleModule code.
+1. Replace the function name `pipeMessage` with `filterMessage` in the `client.on()` function call.
- :::image type="content" source="./media/tutorial-develop-for-linux/deployment-routes.png" alt-text="Screenshot showing routes in the deployment.template.json file." lightbox="./media/tutorial-develop-for-linux/deployment-routes.png":::
+ ```javascript
+ client.on('inputMessage', function (inputName, msg) {
+ filterMessage(client, inputName, msg);
+ });
+ ```
+
+1. Copy the following code snippet into the `client.open()` function callback, after `client.on()`, inside the `else` statement. The callback is invoked when the desired properties are updated.
+
+ ```javascript
+ client.getTwin(function (err, twin) {
+ if (err) {
+ console.error('Error getting twin: ' + err.message);
+ } else {
+ twin.on('properties.desired', function(delta) {
+ if (delta.TemperatureThreshold) {
+ temperatureThreshold = delta.TemperatureThreshold;
+ }
+ });
+ }
+ });
+ ```
+
+1. Save the **app.js** file.
+
+1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+
+1. Add the filtermodule module twin to the deployment manifest. Insert the following JSON content at the bottom of the `modulesContent` section, after the `$edgeHub` module twin:
+
+ ```json
+ "filtermodule": {
+ "properties.desired":{
+ "TemperatureThreshold":25
+ }
+ }
+ ```
+
+1. Save the **deployment.template.json** file.
+
+# [Python](#tab/python)
+
+In this section, you add code that expands the *filtermodule* to analyze the messages before sending them. You'll add code that filters out messages where the reported machine temperature is within the acceptable limits.
+
+1. In the Visual Studio Code explorer, open **modules** > **filtermodule** > **main.py**.
+
+1. At the top of the **main.py** file, import the **json** library:
+
+ ```python
+ import json
+ ```
+
+1. Add global definitions for the **TEMPERATURE_THRESHOLD**, **RECEIVED_MESSAGES**, and **TWIN_CALLBACKS** variables. The temperature threshold sets the value that the measured machine temperature must exceed for the data to be sent to the IoT hub.
+
+ ```python
+ # global counters
+ TEMPERATURE_THRESHOLD = 25
+ TWIN_CALLBACKS = 0
+ RECEIVED_MESSAGES = 0
+ ```
+
+1. Replace the **create_client** function with the following code:
+
+ ```python
+ def create_client():
+ client = IoTHubModuleClient.create_from_edge_environment()
+
+ # Define function for handling received messages
+ async def receive_message_handler(message):
+ global RECEIVED_MESSAGES
+ print("Message received")
+ size = len(message.data)
+ message_text = message.data.decode('utf-8')
+ print(" Data: <<<{data}>>> & Size={size}".format(data=message.data, size=size))
+ print(" Properties: {}".format(message.custom_properties))
+ RECEIVED_MESSAGES += 1
+ print("Total messages received: {}".format(RECEIVED_MESSAGES))
+
+ if message.input_name == "input1":
+ message_json = json.loads(message_text)
+ if "machine" in message_json and "temperature" in message_json["machine"] and message_json["machine"]["temperature"] > TEMPERATURE_THRESHOLD:
+ message.custom_properties["MessageType"] = "Alert"
+ print("ALERT: Machine temperature {temp} exceeds threshold {threshold}".format(
+ temp=message_json["machine"]["temperature"], threshold=TEMPERATURE_THRESHOLD
+ ))
+ await client.send_message_to_output(message, "output1")
+
+ # Define function for handling received twin patches
+ async def receive_twin_patch_handler(twin_patch):
+ global TEMPERATURE_THRESHOLD
+ global TWIN_CALLBACKS
+ print("Twin Patch received")
+ print(" {}".format(twin_patch))
+ if "TemperatureThreshold" in twin_patch:
+ TEMPERATURE_THRESHOLD = twin_patch["TemperatureThreshold"]
+ TWIN_CALLBACKS += 1
+ print("Total calls confirmed: {}".format(TWIN_CALLBACKS))
+
+ try:
+ # Set handler on the client
+ client.on_message_received = receive_message_handler
+ client.on_twin_desired_properties_patch_received = receive_twin_patch_handler
+ except:
+ # Cleanup if failure occurs
+ client.shutdown()
+ raise
+
+ return client
+ ```
+
+1. Save the **main.py** file.
+
+1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+
+1. Add the **filtermodule** module twin to the deployment manifest. Insert the following JSON content at the bottom of the **modulesContent** section, after the **$edgeHub** module twin:
+
+ ```json
+ "filtermodule": {
+ "properties.desired":{
+ "TemperatureThreshold":25
+ }
+ }
+ ```
+
+1. Save the **deployment.template.json** file.
++ ## Build and push your solution
-You've reviewed the module code and the deployment template to understand some key deployment concepts. Now, you're ready to build the SampleModule container image and push it to your container registry. With the IoT tools extension for Visual Studio Code, this step also generates the deployment manifest based on the information in the template file and the module information from the solution files.
+You've updated the module code and the deployment template to help you understand some key deployment concepts. Now, you're ready to build your module container image and push it to your container registry.
### Sign in to Docker Provide your container registry credentials to Docker so that it can push your container image to storage in the registry.
-1. Open the Visual Studio Code integrated terminal by selecting **Terminal** > **New Terminal** or `Ctrl` + `Shift` + **`** (backtick).
+1. Open the Visual Studio Code integrated terminal by selecting **Terminal** > **New Terminal**.
-2. Sign in to Docker with the Azure Container Registry (ACR) credentials that you saved after creating the registry.
+1. Sign in to Docker with the Azure Container Registry (ACR) credentials that you saved after creating the registry.
- ```cmd/sh
+ ```bash
docker login -u <ACR username> -p <ACR password> <ACR login server> ```
- You may receive a security warning recommending the use of `--password-stdin`. While that is a recommended best practice for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
+ You may receive a security warning recommending the use of `--password-stdin`. While that's a recommended best practice for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
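+
+   For example, a sketch of the `--password-stdin` variant, assuming the password is stored in an environment variable named `ACR_PASSWORD`:
+
+   ```bash
+   # Pipe the password so it doesn't appear in the command line or shell history
+   echo $ACR_PASSWORD | docker login -u <ACR username> --password-stdin <ACR login server>
+   ```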
3. Sign in to the Azure Container Registry. You may need to [Install Azure CLI](/cli/azure/install-azure-cli) to use the `az` command. This command asks for your user name and password found in your container registry in **Settings** > **Access keys**.
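
   For example, a minimal sketch (replace the placeholder with your registry name):

   ```azurecli
   az acr login -n <ACR registry name>
   ```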
Provide your container registry credentials to Docker so that it can push your c
Visual Studio Code now has access to your container registry, so it's time to turn the solution code into a container image.
-1. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
+In Visual Studio Code, open the **deployment.template.json** deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) describes the modules to be configured on the targeted IoT Edge device. Before deployment, you need to update your Azure Container Registry credentials and your module images with the proper `createOptions` values. For more information about createOptions values, see [How to configure container create options for IoT Edge modules](how-to-use-create-options.md).
++
+If you're using an Azure Container Registry to store your module image, add your credentials to the *edgeAgent* > *settings* > *registryCredentials* section in **deployment.template.json**. Replace **myacr** with your own registry name and provide your password and **Login server** address. For example:
+
+```json
+"modulesContent": {
+"$edgeAgent": {
+"properties.desired": {
+ "schemaVersion": "1.1",
+ "runtime": {
+ "type": "docker",
+ "settings": {
+ "image": "mcr.microsoft.com/azureiotedge-agent:1.4",
+ "registryCredentials": {
+ "myacr": {
+ "username": "myacr",
+ "password": "<your_acr_password>",
+ "address": "myacr.azurecr.io"
+ },
+ "createOptions": {}
+ }
+ }
+ },
+```
+
+Add or replace the following stringified content in the *createOptions* value for each system module (edgeHub and edgeAgent) and custom module (for example, tempSensor) listed. Change the values if necessary.
+
+```json
+"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
+```
+
+For example, the *filtermodule* configuration should be similar to:
+
+```json
+"filtermodule": {
+"version": "1.0",
+"type": "docker",
+"status": "running",
+"restartPolicy": "always",
+"settings": {
+ "image": "myacr.azurecr.io/filtermodule:0.0.1-amd64",
+ "createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
+}
+```
+
+#### Build module Docker image
+
+Use the module's Dockerfile to [build](https://docs.docker.com/engine/reference/commandline/build/) the module Docker image.
+
+```bash
+docker build --rm -f "<DockerFilePath>" -t <ImageNameAndTag> "<ContextPath>"
+```
+
+For example, to build the image for the local registry or an Azure container registry, use the following commands:
+
+```bash
+# Build the image for the local registry
+
+docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t localhost:5000/filtermodule:0.0.1-amd64 "./modules/filtermodule"
+
+# Or build the image for an Azure Container Registry
- :::image type="content" source="./media/tutorial-develop-for-linux/build-and-push-modules.png" alt-text="Screenshot showing the right-click menu option Build and Push I o T Edge Solution." lightbox="./media/tutorial-develop-for-linux/build-and-push-modules.png":::
+docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t myacr.azurecr.io/filtermodule:0.0.1-amd64 "./modules/filtermodule"
+```
- The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
+#### Push module Docker image
- This process may take several minutes the first time, but is faster the next time that you run the commands.
+[Push](https://docs.docker.com/engine/reference/commandline/push/) your module image to the local registry or a container registry.
-2. Open the **deployment.amd64.json** file in newly created config folder. The filename reflects the target architecture, so it's different if you chose a different architecture.
+```bash
+docker push <ImageName>
+```
-3. Notice that the two parameters that had placeholders now contain their proper values. The **registryCredentials** section has your registry username and password pulled from the .env file. The **SampleModule** has the full image repository with the name, version, and architecture tag from the module.json file.
+For example:
-4. Open the **module.json** file in the SampleModule folder.
+```bash
+# Push the Docker image to the local registry
-5. Change the version number for the module image. (The version, not the $schema-version.) For example, increment the patch version number to **0.0.2** just like if you made a small fix in the module code.
+docker push localhost:5000/filtermodule:0.0.1-amd64
+
+# Or push the Docker image to an Azure Container Registry
+az acr login --name myacr
+docker push myacr.azurecr.io/filtermodule:0.0.1-amd64
+```
+++
+In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
++
+The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
+
+This process may take several minutes the first time, but is faster the next time that you run the commands.
++
+#### Update the build and image
++
+Open the **deployment.amd64.json** file in the newly created **config** folder. The filename reflects the target architecture, so it's different if you chose a different architecture.
+
+Notice that the two parameters that had placeholders now contain their proper values. The **registryCredentials** section has your registry username and password pulled from the *.env* file. The **filtermodule** has the full image repository with the name, version, and architecture tag from the *module.json* file.
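+
+For example, after the first build the module's image value in **deployment.amd64.json** should look similar to this sketch (the registry name and tag come from your own *module.json*):
+
+```json
+"image": "myacr.azurecr.io/filtermodule:0.0.1-amd64"
+```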
++
+1. Open the **module.json** file in the *filtermodule* folder.
+
+1. Change the version number for the module image. For example, increment the patch version number to `"version": "0.0.2"` as if you made a small fix in the module code. (A sketch of the relevant *module.json* section follows these steps.)
>[!TIP] >Module versions enable version control, and allow you to test changes on a small set of devices before deploying updates to production. If you don't increment the module version before building and pushing, then you overwrite the repository in your container registry.
-6. Save your changes to the module.json file.
+1. Save your changes to the **module.json** file.
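+
+   For reference, the relevant portion of **module.json** after the change might look like the following sketch. The repository value depends on your registry, and your file may list different platforms:
+
+   ```json
+   "image": {
+     "repository": "myacr.azurecr.io/filtermodule",
+     "tag": {
+       "version": "0.0.2",
+       "platforms": {
+         "amd64": "./Dockerfile.amd64",
+         "amd64.debug": "./Dockerfile.amd64.debug"
+       }
+     },
+     "buildOptions": []
+   }
+   ```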
++
+Build and push the updated image with a *0.0.2* version tag.
+
+For example, to build and push the image for the local registry or an Azure container registry, use the following commands:
+
+```bash
+# Build and push the 0.0.2 image for the local registry
-7. Right-click the **deployment.template.json** file again, and again select **Build and Push IoT Edge Solution**.
+docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t localhost:5000/filtermodule:0.0.2-amd64 "./modules/filtermodule"
-8. Open the **deployment.amd64.json** file again. Notice the build system doesn't create a new file when you run the build and push command again. Rather, the same file updates to reflect the changes. The SampleModule image now points to the 0.0.2 version of the container.
+docker push localhost:5000/filtermodule:0.0.2-amd64
-9. To further verify what the build and push command did, go to the [Azure portal](https://portal.azure.com) and navigate to your container registry.
+# Or build and push the 0.0.2 image for an Azure Container Registry
-10. In your container registry, select **Repositories** then **samplemodule**. Verify that both versions of the image push to the registry.
+docker build --rm -f "./modules/filtermodule/Dockerfile.amd64.debug" -t myacr.azurecr.io/filtermodule:0.0.2-amd64 "./modules/filtermodule"
- :::image type="content" source="./media/tutorial-develop-for-linux/view-repository-versions.png" alt-text="Screenshot of where to view both image versions in your container registry." lightbox="./media/tutorial-develop-for-linux/view-repository-versions.png":::
+docker push myacr.azurecr.io/filtermodule:0.0.2-amd64
+```
+++
+Right-click the **deployment.template.json** file again, and again select **Build and Push IoT Edge Solution**.
++
+Open the **deployment.amd64.json** file again. Notice the build system doesn't create a new file when you run the build and push command again. Rather, the same file updates to reflect the changes. The **filtermodule** image now points to the 0.0.2 version of the container.
+
+To further verify what the build and push command did, go to the [Azure portal](https://portal.azure.com) and navigate to your container registry.
+
+In your container registry, select **Repositories**, then select **filtermodule**. Verify that both versions of the image were pushed to the registry.
+ <!--Alternative steps: Use Visual Studio Code Docker tools to view ACR images with tags-->
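
Alternatively, you can list the pushed tags from the command line (a sketch, assuming your registry is named *myacr*):

```azurecli
az acr repository show-tags --name myacr --repository filtermodule --output table
```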
If you encounter errors when building and pushing your module image, it often ha
You verified that there are built container images stored in your container registry, so it's time to deploy them to a device. Make sure that your IoT Edge device is up and running. +
+Use the [IoT Edge Azure CLI set-modules](/cli/azure/iot/edge#az-iot-edge-set-modules) command to deploy the modules to Azure IoT Hub. For example, to deploy the modules defined in the *deployment.debug.template.json* file to the IoT hub *my-iot-hub* for the IoT Edge device *my-device*, use the following command:
+
+```azurecli
+az iot edge set-modules --hub-name my-iot-hub --device-id my-device --content ./deployment.debug.template.json --login "HostName=my-iot-hub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<SharedAccessKey>"
+```
+
+> [!TIP]
+> You can find your IoT Hub shared access key in the Azure portal in your IoT Hub > **Security settings** > **Shared access policies** > **iothubowner**.
+>
+++ 1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
-2. Right-click the IoT Edge device that you want to deploy to, then select **Create Deployment for Single Device**.
+1. Right-click the IoT Edge device that you want to deploy to, then select **Create Deployment for Single Device**.
:::image type="content" source="./media/tutorial-develop-for-linux/create-deployment.png" alt-text="Screenshot showing how to create a deployment for a single device.":::
-3. In the file explorer, navigate into the **config** folder then select the **deployment.amd64.json** file.
+1. In the file explorer, navigate into the **config** folder then select the **deployment.amd64.json** file.
Don't use the deployment.template.json file, which doesn't have the container registry credentials or module image values in it. If you target a Linux ARM32 device, the deployment manifest's name is **deployment.arm32v7.json**.
-4. Under your device, expand **Modules** to see a list of deployed and running modules. Select the refresh button. You should see the new SimulatedTemperatureSensor and SampleModule modules running on your device.
+1. Under your device, expand **Modules** to see a list of deployed and running modules. Select the refresh button. You should see the new SimulatedTemperatureSensor and SampleModule modules running on your device.
It may take a few minutes for the modules to start. The IoT Edge runtime needs to receive its new deployment manifest, pull down the module images from the container runtime, then start each new module.
- :::image type="content" source="./media/tutorial-develop-for-linux/view-running-modules.png" alt-text="Screenshot where to view modules running on your I o T Edge device.":::
+   :::image type="content" source="./media/tutorial-develop-for-linux/view-running-modules.png" alt-text="Screenshot showing where to view modules running on your IoT Edge device.":::
## View messages from device
The SampleModule code receives messages through its input queue and passes them
1. In the Visual Studio Code explorer, right-click the IoT Edge device that you want to monitor, then select **Start Monitoring Built-in Event Endpoint**.
-2. Watch the output window in Visual Studio Code to see messages arriving at your IoT hub.
+1. Watch the output window in Visual Studio Code to see messages arriving at your IoT hub.
:::image type="content" source="./media/tutorial-develop-for-linux/view-d2c-messages.png" alt-text="Screenshot showing where to view incoming device to cloud messages."::: + ## View changes on device If you want to see what's happening on your device itself, use the commands in this section to inspect the IoT Edge runtime and modules running on your device.
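
For example, you can run the following commands on the device itself to check module status and logs (a sketch; *filtermodule* is the module name used in this tutorial):

```bash
# List the modules deployed to the device and their runtime status
sudo iotedge list

# Stream the logs of the filter module
sudo iotedge logs filtermodule -f
```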
Otherwise, you can delete the local configurations and the Azure resources that
## Next steps
-In this tutorial, you set up Visual Studio Code on your development machine and deployed your first IoT Edge module from it. Now that you know the basic concepts, try adding functionality to a module so that it can analyze the data passing through it. Choose your preferred language:
+In this tutorial, you set up Visual Studio Code on your development machine and deployed your first IoT Edge module that contains code to filter raw data generated by your IoT Edge device.
+
+Continue to the next tutorials to learn how Azure IoT Edge can help you deploy Azure cloud services to process and analyze data at the edge.
> [!div class="nextstepaction"]
-> [C](tutorial-c-module.md)
-> [C#](tutorial-csharp-module.md)
-> [Java](tutorial-java-module.md)
-> [Node.js](tutorial-node-module.md)
-> [Python](tutorial-python-module.md)
+> [Functions](tutorial-deploy-function.md)
+> [Stream Analytics](tutorial-deploy-stream-analytics.md)
+> [Custom Vision Service](tutorial-deploy-custom-vision.md)
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-java-module.md
- Title: Tutorial - Custom Java module tutorial using Azure IoT Edge
-description: This tutorial shows you how to create an IoT Edge module with Java code and deploy it to an edge device.
----- Previously updated : 07/30/2020-----
-# Tutorial: Develop a Java IoT Edge module using Linux containers
--
-You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the simulated IoT Edge device that you created in the Deploy Azure IoT Edge on a simulated device in the quickstart articles. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Use Visual Studio Code to create an IoT Edge Java module based on the Azure IoT Edge maven template package and Azure IoT Java device SDK.
-> * Use Visual Studio Code and Docker to create a Docker image and publish it to your registry.
-> * Deploy the module to your IoT Edge device.
-> * View generated data.
-
-The IoT Edge module that you create in this tutorial filters the temperature data that's generated by your device. It only sends messages upstream if the temperature is above a specified threshold. This type of analysis at the edge is useful for reducing the amount of data that's communicated to and stored in the cloud.
--
-## Prerequisites
-
-This tutorial demonstrates how to develop a module in **Java** using **Visual Studio Code**, and how to deploy it to an IoT Edge device. IoT Edge does not support Java modules built as Windows containers.
-
-Use the following table to understand your options for developing and deploying Java modules:
-
-| Java | Visual Studio Code | Visual Studio 2017/2019 |
-| - | | |
-| **Linux AMD64** | ![Use Visual Studio Code for Java modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM32** | ![Use Visual Studio Code for Java modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM64** | ![Use Visual Studio Code for Java modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
-
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing either of those tutorials, you should have the following prerequisites in place:
-
-* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge with Linux containers. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
-* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
-* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.
-
-To develop an IoT Edge module in Java, install the following additional prerequisites on your development machine:
-
-* [Java Extension Pack](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-java-pack) for Visual Studio Code.
-* [Java SE Development Kit 11](/azure/developer/java/fundamentals/java-support-on-azure), and [set the `JAVA_HOME` environment variable](https://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/) to point to your JDK installation.
-* [Maven](https://maven.apache.org/)
-
- >[!TIP]
- >The Java and Maven installation processes add environment variables to your system. Restart any open Visual Studio Code terminal, PowerShell, or command prompt instances after completing installation. This step ensures that these utilities can recognize the Java and Maven commands going forward.
-
-## Create a module project
-
-The following steps create an IoT Edge module project that's based on the Azure IoT Edge maven template package and Azure IoT Java device SDK. You create the project by using Visual Studio Code and the Azure IoT Edge extension.
-
-### Create a new project
-
-Create a Java solution template that you can customize with your own code.
-
-1. In Visual Studio Code, select **View** > **Command Palette** to open the Visual Studio Code command palette.
-
-2. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. Follow the prompts in the command palette to create your solution.
-
- | Field | Value |
- | -- | -- |
- | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
- | Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. |
- | Select module template | Choose **Java Module**. |
- | Provide a module name | Name your module **JavaModule**. |
- | Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal. <br><br>The final image repository looks like \<registry name\>.azurecr.io/javamodule. |
- | Provide value for groupId | Enter a group ID value or accept the default **com.edgemodule**. |
-
- ![Provide Docker image repository](./media/tutorial-java-module/repository.png)
-
-If it's your first time creating Java module, it might take several minutes to download the maven packages. When the solution is ready, the Visual Studio Code window loads your IoT Edge solution workspace. The solution workspace contains five top-level components:
-
-* The **modules** folder contains the Java code for your module and the Docker files to build your module as a container image.
-* The **\.env** file stores your container registry credentials.
-* The **deployment.template.json** file contains the information that the IoT Edge runtime uses to deploy modules on a device.
-* The **deployment.debug.template.json** file contains the debug version of modules.
-* You won't edit the **\.vscode** folder or **\.gitignore** file in this tutorial.
-
-If you didn't specify a container registry when creating your solution, but accepted the default localhost:5000 value, you won't have a \.env file.
-
-### Add your registry credentials
-
-The environment file stores the credentials for your container registry and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device.
-
-The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-
-1. In the Visual Studio Code explorer, open the .env file.
-2. Update the fields with the **username** and **password** values that you copied from your Azure container registry.
-3. Save this file.
-
->[!NOTE]
->This tutorial uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
-
-### Select your target architecture
-
-Currently, Visual Studio Code can develop Java modules for Linux AMD64 and Linux ARM32v7 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.
-
-1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
-
-2. In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so will keep the default **amd64**.
-
-### Update the module with custom code
-
-1. In the Visual Studio Code explorer, open **modules** > **JavaModule** > **src** > **main** > **java** > **com** > **edgemodule** > **App.java**.
-
-2. Add the following code at the top of the file to import new referenced classes.
-
- ```java
- import java.io.StringReader;
- import java.util.concurrent.atomic.AtomicLong;
- import java.util.HashMap;
- import java.util.Map;
-
- import javax.json.Json;
- import javax.json.JsonObject;
- import javax.json.JsonReader;
-
- import com.microsoft.azure.sdk.iot.device.DeviceTwin.Pair;
- import com.microsoft.azure.sdk.iot.device.DeviceTwin.Property;
- import com.microsoft.azure.sdk.iot.device.DeviceTwin.TwinPropertyCallBack;
- ```
-
-3. Add the following definition into class **App**. This variable sets a temperature threshold. The measured machine temperature won't be reported to IoT Hub until it goes over this value.
-
- ```java
- private static final String TEMP_THRESHOLD = "TemperatureThreshold";
- private static AtomicLong tempThreshold = new AtomicLong(25);
- ```
-
-4. Replace the execute method of **MessageCallbackMqtt** with the following code. This method is called whenever the module receives an MQTT message from the IoT Edge hub. It filters out messages that report temperatures below the temperature threshold set via the module twin.
-
- ```java
- protected static class MessageCallbackMqtt implements MessageCallback {
- private int counter = 0;
- @Override
- public IotHubMessageResult execute(Message msg, Object context) {
- this.counter += 1;
-
- String msgString = new String(msg.getBytes(), Message.DEFAULT_IOTHUB_MESSAGE_CHARSET);
- System.out.println(
- String.format("Received message %d: %s",
- this.counter, msgString));
- if (context instanceof ModuleClient) {
- try (JsonReader jsonReader = Json.createReader(new StringReader(msgString))) {
- final JsonObject msgObject = jsonReader.readObject();
- double temperature = msgObject.getJsonObject("machine").getJsonNumber("temperature").doubleValue();
- long threshold = App.tempThreshold.get();
- if (temperature >= threshold) {
- ModuleClient client = (ModuleClient) context;
- System.out.println(
- String.format("Temperature above threshold %d. Sending message: %s",
- threshold, msgString));
- client.sendEventAsync(msg, eventCallback, msg, App.OUTPUT_NAME);
- }
- } catch (Exception e) {
- e.printStackTrace();
- }
- }
- return IotHubMessageResult.COMPLETE;
- }
- }
- ```
-
-5. Add the following two static inner classes into class **App**. These classes update the tempThreshold variable when the module twin's desired property changes. All modules have their own module twin, which lets you configure the code that's running inside a module directly from the cloud.
-
- ```java
- protected static class DeviceTwinStatusCallBack implements IotHubEventCallback {
- @Override
- public void execute(IotHubStatusCode status, Object context) {
- System.out.println("IoT Hub responded to device twin operation with status " + status.name());
- }
- }
-
- protected static class OnProperty implements TwinPropertyCallBack {
- @Override
- public void TwinPropertyCallBack(Property property, Object context) {
- if (!property.getIsReported()) {
- if (property.getKey().equals(App.TEMP_THRESHOLD)) {
- try {
- long threshold = Math.round((double) property.getValue());
- App.tempThreshold.set(threshold);
- } catch (Exception e) {
-                    System.out.println("Failed to set TemperatureThreshold with exception");
- e.printStackTrace();
- }
- }
- }
- }
- }
- ```
-
-6. Add the following lines to the **main** method after **client.open()** to subscribe to module twin updates.
-
- ```java
- client.startTwin(new DeviceTwinStatusCallBack(), null, new OnProperty(), null);
- Map<Property, Pair<TwinPropertyCallBack, Object>> onDesiredPropertyChange = new HashMap<Property, Pair<TwinPropertyCallBack, Object>>() {
- {
- put(new Property(App.TEMP_THRESHOLD, null), new Pair<TwinPropertyCallBack, Object>(new OnProperty(), null));
- }
- };
- client.subscribeToTwinDesiredProperties(onDesiredPropertyChange);
- client.getTwin();
- ```
-
-7. Save the App.java file.
-
-8. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
-
-9. Add the **JavaModule** module twin to the deployment manifest. Insert the following JSON content at the bottom of the **moduleContent** section, after the **$edgeHub** module twin:
-
- ```json
- "JavaModule": {
- "properties.desired":{
- "TemperatureThreshold":25
- }
- }
- ```
-
- ![Add module twin to deployment template](./media/tutorial-java-module/module-twin.png)
-
-10. Save the deployment.template.json file.
-
-## Build and push your module
-
-In the previous section, you created an IoT Edge solution and added code to the **JavaModule** to filter out messages where the reported machine temperature is below the acceptable limit. Now, build the solution as a container image and push it to your container registry.
-
-1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
-
-2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
-
- ```bash
- docker login -u <ACR username> -p <ACR password> <ACR login server>
- ```
-
- You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-
-3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
-
- The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, which is built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
-
- This process may take several minutes the first time, but is faster the next time that you run the commands.
-
-## Deploy modules to device
-
-Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
-
-Make sure that your IoT Edge device is up and running.
-
-1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
-
-2. Right-click the name of your IoT Edge device, then select **Create Deployment for Single Device**.
-
-3. Select the **deployment.amd64.json** file in the **config** folder and then click **Select Edge Deployment Manifest**. Do not use the deployment.template.json file.
-
-4. Under your device, expand **Modules** to see a list of deployed and running modules. Click the refresh button. You should see the new **JavaModule** running along with the **SimulatedTemperatureSensor** module and the **$edgeAgent** and **$edgeHub**.
-
- It may take a few minutes for the modules to start. The IoT Edge runtime needs to receive its new deployment manifest, pull down the module images from the container runtime, then start each new module.
-
-## View the generated data
-
-Once you apply the deployment manifest to your IoT Edge device, the IoT Edge runtime on the device collects the new deployment information and starts executing on it. Any modules running on the device that aren't included in the deployment manifest are stopped. Any modules missing from the device are started.
-
-1. In the Visual Studio Code explorer, right-click the name of your IoT Edge device and select **Start Monitoring Built-in Event Endpoint**.
-
-2. View the messages arriving at your IoT Hub. It may take a while for the messages to arrive. The IoT Edge device has to receive its new deployment and start all the modules. Then, the changes we made to the JavaModule code wait until the machine temperature reaches 25 degrees before sending messages. It also adds the message type **Alert** to any messages that reach that temperature threshold.
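-
-   As an alternative to the Visual Studio Code command, you can watch the same built-in endpoint from a terminal. The following is a sketch that assumes the Azure CLI and its `azure-iot` extension are installed; `<your hub>` and `<your device>` are placeholders:
-
-   ```bash
-   # Stream device-to-cloud messages arriving at the IoT hub's built-in endpoint.
-   az iot hub monitor-events --hub-name <your hub> --device-id <your device>
-   ```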
-
-## Edit the module twin
-
-We used the JavaModule module twin in the deployment manifest to set the temperature threshold at 25 degrees. You can use the module twin to change the functionality without having to update the module code.
-
-1. In Visual Studio Code, expand the details under your IoT Edge device to see the running modules.
-
-2. Right-click **JavaModule** and select **Edit module twin**.
-
-3. Find **TemperatureThreshold** in the desired properties. Change its value to a new temperature 5 degrees to 10 degrees higher than the latest reported temperature.
-
-4. Save the module twin file.
-
-5. Right-click anywhere in the module twin editing pane and select **Update module twin**.
-
-6. Monitor the incoming device-to-cloud messages. You should see the messages stop until the new temperature threshold is reached.
-
-## Clean up resources
-
-If you plan to continue to the next recommended article, you can keep the resources and configurations that you created and reuse them. You can also keep using the same IoT Edge device as a test device.
-
-Otherwise, you can delete the local configurations and the Azure resources that you created in this article to avoid charges.
--
-## Next steps
-
-In this tutorial, you created an IoT Edge module that filters raw data generated by your IoT Edge device.
-
-Continue to the next tutorials to learn how Azure IoT Edge helps you deploy Azure cloud services to process and analyze data at the edge.
-
-> [!div class="nextstepaction"]
-> [Functions](tutorial-deploy-function.md)
-> [Stream Analytics](tutorial-deploy-stream-analytics.md)
-> [Machine Learning](tutorial-deploy-machine-learning.md)
-> [Custom Vision Service](tutorial-deploy-custom-vision.md)
iot-edge Tutorial Node Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-node-module.md
-
Title: Tutorial - Develop Node.js module for Linux - Azure IoT Edge | Microsoft Docs
-description: This tutorial shows you how to create an IoT Edge module with Node.js code and deploy it to an edge device
-Previously updated : 07/30/2020
-# Tutorial: Develop and deploy a Node.js IoT Edge module using Linux containers
--
-Use Visual Studio Code to develop Node.js code and deploy it to a device running Azure IoT Edge.
-
-You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data. You'll use the IoT Edge device that you created in the quickstarts. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Use Visual Studio Code to create an IoT Edge Node.js module
-> * Use Visual Studio Code and Docker to create a docker image and publish it to your registry
-> * Deploy the module to your IoT Edge device
-> * View generated data
-
-The IoT Edge module that you create in this tutorial filters the temperature data generated by your device. It only sends messages upstream if the temperature is above a specified threshold. This type of analysis at the edge is useful for reducing the amount of data communicated to and stored in the cloud.
--
-## Prerequisites
-
-This tutorial demonstrates how to develop a module in **Node.js** using **Visual Studio Code**, and how to deploy it to an IoT Edge device.
-
-IoT Edge does not support Node.js modules using Windows containers.
-
-Use the following table to understand your options for developing and deploying Node.js modules:
-
-| Node.js | Visual Studio Code | Visual Studio 2022 |
-| - | | |
-| **Linux AMD64** | ![Use Visual Studio Code for Node.js modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM32** | ![Use Visual Studio Code for Node.js modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM64** | ![Use Visual Studio Code for Node.js modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
-
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
-
-* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
-* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
-* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.
-
-To develop an IoT Edge module in Node.js, install the following additional prerequisites on your development machine:
-
-* [Node.js and npm](https://nodejs.org). The npm package is distributed with Node.js, which means that when you download Node.js, you automatically get npm installed on your computer.
-
-## Create a module project
-
-The following steps show you how to create an IoT Edge Node.js module using Visual Studio Code and the Azure IoT Edge extension.
-
-### Create a new project
-
-Use **npm** to create a Node.js solution template that you can build on top of.
-
-1. In Visual Studio Code, select **View** > **Integrated Terminal** to open the Visual Studio Code integrated terminal.
-
-2. In the integrated terminal, enter the following command to install **yeoman** and the generator for Node.js Azure IoT Edge module:
-
- ```cmd/sh
- npm install -g yo generator-azure-iot-edge-module
- ```
-
-3. Select **View** > **Command Palette** to open the Visual Studio Code command palette.
-
-4. In the command palette, type and run the command **Azure: Sign in** and follow the instructions to sign in to your Azure account. If you've already signed in, you can skip this step.
-
-5. In the command palette, type and run the command **Azure IoT Edge: New IoT Edge solution**. Follow the prompts in the command palette to create your solution.
-
- | Field | Value |
- | -- | -- |
- | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
- | Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. |
- | Select module template | Choose **Node.js Module**. |
- | Provide a module name | Name your module **NodeModule**. |
- | Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal. <br><br>The final image repository looks like \<registry name\>.azurecr.io/nodemodule. |
-
- ![Provide Docker image repository](./media/tutorial-node-module/repository.png)
-
-### Add your registry credentials
-
-The environment file stores the credentials for your container repository and shares those with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device.
-
-The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-
-1. In the Visual Studio Code explorer, open the **.env** file.
-2. Update the fields with the **username** and **password** values that you copied from your Azure container registry.
-3. Save this file.
-
->[!NOTE]
->This tutorial uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
-
-### Select your target architecture
-
-Currently, Visual Studio Code can develop Node.js modules for Linux AMD64, Linux ARM32v7, and Linux ARM64 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.
-
-1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
-
-2. In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so we'll keep the default **amd64**.
-
-### Update the module with custom code
-
-Each template comes with sample code included, which takes simulated sensor data from the **SimulatedTemperatureSensor** module and routes it to IoT Hub. In this section, add code to have NodeModule analyze the messages before sending them.
-
-1. In the Visual Studio Code explorer, open **modules** > **NodeModule** > **app.js**.
-
-2. Add a temperature threshold variable below required node modules. The temperature threshold sets the value that the measured temperature must exceed in order for the data to be sent to IoT Hub.
-
- ```javascript
- var temperatureThreshold = 25;
- ```
-
-3. Replace the entire `pipeMessage` function with the `filterMessage` function.
-
- ```javascript
- // This function filters out messages that report temperatures below the temperature threshold.
- // It also adds the MessageType property to the message with the value set to Alert.
- function filterMessage(client, inputName, msg) {
- client.complete(msg, printResultFor('Receiving message'));
- if (inputName === 'input1') {
- var message = msg.getBytes().toString('utf8');
- var messageBody = JSON.parse(message);
- if (messageBody && messageBody.machine && messageBody.machine.temperature && messageBody.machine.temperature > temperatureThreshold) {
- console.log(`Machine temperature ${messageBody.machine.temperature} exceeds threshold ${temperatureThreshold}`);
- var outputMsg = new Message(message);
- outputMsg.properties.add('MessageType', 'Alert');
- client.sendOutputEvent('output1', outputMsg, printResultFor('Sending received message'));
- }
- }
- }
-
- ```
-
-4. Replace the function name `pipeMessage` with `filterMessage` in the `client.on()` function.
-
- ```javascript
- client.on('inputMessage', function (inputName, msg) {
- filterMessage(client, inputName, msg);
- });
- ```
-
-5. Copy the following code snippet into the `client.open()` function callback, after `client.on()` inside the `else` statement. This function is invoked when the desired properties are updated.
-
- ```javascript
- client.getTwin(function (err, twin) {
- if (err) {
- console.error('Error getting twin: ' + err.message);
- } else {
- twin.on('properties.desired', function(delta) {
- if (delta.TemperatureThreshold) {
- temperatureThreshold = delta.TemperatureThreshold;
- }
- });
- }
- });
- ```
-
-6. Save the app.js file.
-
-7. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
-
-8. Add the NodeModule module twin to the deployment manifest. Insert the following JSON content at the bottom of the `moduleContent` section, after the `$edgeHub` module twin:
-
- ```json
- "NodeModule": {
- "properties.desired":{
- "TemperatureThreshold":25
- }
- }
- ```
-
- ![Add module twin to deployment template](./media/tutorial-node-module/module-twin.png)
-
-9. Save the deployment.template.json file.
-
-## Build and push your module
-
-In the previous section, you created an IoT Edge solution and added code to the NodeModule that will filter out messages where the reported machine temperature is within the acceptable limits. Now you need to build the solution as a container image and push it to your container registry.
-
-1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
-
-2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
-
- ```bash
- docker login -u <ACR username> -p <ACR password> <ACR login server>
- ```
-
- You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-
-3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
-
- The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
-
- This process may take several minutes the first time, but is faster the next time that you run the commands.
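-
-   For reference, the extension's work is roughly equivalent to running commands like the following yourself. This is a sketch only; the actual image name, tag, and Dockerfile path come from the **module.json** file and your selected target platform:
-
-   ```bash
-   # Build the module image for the amd64 target (illustrative names and tag).
-   docker build --rm -f ./modules/NodeModule/Dockerfile.amd64 -t <registry name>.azurecr.io/nodemodule:0.0.1-amd64 ./modules/NodeModule
-
-   # Push the tagged image to your container registry.
-   docker push <registry name>.azurecr.io/nodemodule:0.0.1-amd64
-   ```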
-
-## Deploy modules to device
-
-Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
-
-Make sure that your IoT Edge device is up and running.
-
-1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
-
-2. Right-click the name of your IoT Edge device, then select **Create Deployment for Single Device**.
-
-3. Select the **deployment.amd64.json** file in the **config** folder and then click **Select Edge Deployment Manifest**. Do not use the deployment.template.json file.
-
-4. Under your device, expand **Modules** to see a list of deployed and running modules. Click the refresh button. You should see the new **NodeModule** running along with the **SimulatedTemperatureSensor** module and the **$edgeAgent** and **$edgeHub**.
-
- It may take a few minutes for the modules to start. The IoT Edge runtime needs to receive its new deployment manifest, pull down the module images from the container runtime, then start each new module.
-
-## View the generated data
-
-Once you apply the deployment manifest to your IoT Edge device, the IoT Edge runtime on the device collects the new deployment information and starts executing on it. Any modules running on the device that aren't included in the deployment manifest are stopped. Any modules missing from the device are started.
-
-You can view the status of your IoT Edge device using the **Azure IoT Hub Devices** section of the Visual Studio Code explorer. Expand the details of your device to see a list of deployed and running modules.
-
-1. In the Visual Studio Code explorer, right-click the name of your IoT Edge device and select **Start Monitoring Built-in Event Endpoint**.
-
-2. View the messages arriving at your IoT Hub. It may take a while for the messages to arrive. The IoT Edge device has to receive its new deployment and start all the modules. Then, the changes we made to the NodeModule code wait until the machine temperature reaches 25 degrees before sending messages. It also adds the message type **Alert** to any messages that reach that temperature threshold.
-
-## Edit the module twin
-
-We used the NodeModule module twin in the deployment manifest to set the temperature threshold at 25 degrees. You can use the module twin to change the functionality without having to update the module code.
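-
-If you prefer to script this change instead of using the Visual Studio Code steps that follow, the Azure CLI can patch the same desired property. The following is a sketch that assumes the `azure-iot` CLI extension is installed; parameter names can vary by extension version:
-
-```bash
-# Set a new desired TemperatureThreshold on the NodeModule module twin.
-az iot hub module-twin update --hub-name <your hub> --device-id <your device> --module-id NodeModule --desired '{"TemperatureThreshold": 40}'
-```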
-
-1. In Visual Studio Code, expand the details under your IoT Edge device to see the running modules.
-
-2. Right-click **NodeModule** and select **Edit module twin**.
-
-3. Find **TemperatureThreshold** in the desired properties. Change its value to a new temperature 5 degrees to 10 degrees higher than the latest reported temperature.
-
-4. Save the module twin file.
-
-5. Right-click anywhere in the module twin editing pane and select **Update module twin**.
-
-6. Monitor the incoming device-to-cloud messages. You should see the messages stop until the new temperature threshold is reached.
-
-## Clean up resources
-
-If you plan to continue to the next recommended article, you can keep the resources and configurations that you created and reuse them. You can also keep using the same IoT Edge device as a test device.
-
-Otherwise, you can delete the local configurations and the Azure resources that you created in this article to avoid charges.
--
-## Next steps
-
-In this tutorial, you created an IoT Edge module that contains code to filter raw data generated by your IoT Edge device.
-
-You can continue on to the next tutorials to learn how Azure IoT Edge can help you deploy Azure cloud services to process and analyze data at the edge.
-
-> [!div class="nextstepaction"]
-> [Functions](tutorial-deploy-function.md)
-> [Stream Analytics](tutorial-deploy-stream-analytics.md)
-> [Machine Learning](tutorial-deploy-machine-learning.md)
-> [Custom Vision Service](tutorial-deploy-custom-vision.md)
iot-edge Tutorial Python Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-python-module.md
-
Title: Tutorial - Custom Python module using Azure IoT Edge
-description: This tutorial shows you how to create an IoT Edge module with Python code and deploy it to an edge device.
-Previously updated : 08/04/2020
-# Tutorial: Develop and deploy a Python IoT Edge module using Linux containers
--
-Use Visual Studio Code to develop Python code and deploy it to a device running Azure IoT Edge.
-
-You can use Azure IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. This tutorial walks you through creating and deploying an IoT Edge module that filters sensor data on the IoT Edge device that you set up in the quickstart. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> * Use Visual Studio Code to create an IoT Edge Python module.
-> * Use Visual Studio Code and Docker to create a Docker image and publish it to your registry.
-> * Deploy the module to your IoT Edge device.
-> * View generated data.
-
-The IoT Edge module that you create in this tutorial filters the temperature data that's generated by your device. It only sends messages upstream if the temperature is above a specified threshold. This type of analysis at the edge is useful for reducing the amount of data that's communicated to and stored in the cloud.
--
-## Prerequisites
-
-This tutorial demonstrates how to develop a module in **Python** using **Visual Studio Code**, and how to deploy it to an IoT Edge device.
-
-IoT Edge does not support Python modules using Windows containers.
-
-Use the following table to understand your options for developing and deploying Python modules using Linux containers:
-
-| Python | Visual Studio Code | Visual Studio 2017/2019 |
-| - | | |
-| **Linux AMD64** | ![Use Visual Studio Code for Python modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM32** | ![Use Visual Studio Code for Python modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM64** | ![Use Visual Studio Code for Python modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
-
-Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
-
-* A free or standard-tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
-* A device running Azure IoT Edge. You can use the quickstarts to set up a [Linux device](quickstart-linux.md) or [Windows device](quickstart.md).
-* A container registry, like [Azure Container Registry](../container-registry/index.yml).
-* [Visual Studio Code](https://code.visualstudio.com/) configured with the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-edge) and [Azure IoT Hub](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) extensions.
-* Download and install a [Docker compatible container management system](support.md#container-engines) on your development machine. Configure it to run Linux containers.
-
-To develop an IoT Edge module in Python, install the following additional prerequisites on your development machine:
-
-* [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for Visual Studio Code.
-* [Python](https://www.python.org/downloads/).
-* [Pip](https://pip.pypa.io/en/stable/installing/#installation) for installing Python packages (typically included with your Python installation).
-
->[!Note]
->Ensure that your `bin` folder is on your path for your platform. Typically `~/.local/` for UNIX and macOS, or `%APPDATA%\Python` on Windows.
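->
->For example, on Linux or macOS you might append pip's user script directory to your path for the current shell (a sketch; the exact directory depends on your Python installation):
->
->```bash
->export PATH="$PATH:$HOME/.local/bin"
->```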
-
-## Create a module project
-
-The following steps create an IoT Edge Python module by using Visual Studio Code and the Azure IoT Edge extension.
-
-### Create a new project
-
-Create a Python solution template that you can customize with your own code.
-
-1. In Visual Studio Code, select **View** > **Command Palette** to open the Visual Studio Code command palette.
-
-2. In the command palette, enter and run the command **Azure: Sign in** and follow the instructions to sign in to your Azure account. If you're already signed in, you can skip this step.
-
-3. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. Follow the prompts and provide the following information to create your solution:
-
- | Field | Value |
- | -- | -- |
- | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
- | Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. |
- | Select module template | Choose **Python Module**. |
- | Provide a module name | Name your module **PythonModule**. |
- | Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the name you provided in the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal. <br><br>The final image repository looks like \<registry name\>.azurecr.io/pythonmodule. |
-
- ![Provide Docker image repository](./media/tutorial-python-module/repository.png)
-
-### Add your registry credentials
-
-The environment file stores the credentials for your container repository and shares them with the IoT Edge runtime. The runtime needs these credentials to pull your private images onto the IoT Edge device.
-
-The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-
-1. In the Visual Studio Code explorer, open the **.env** file.
-2. Update the fields with the **username** and **password** values that you copied from your Azure container registry.
-3. Save the .env file.
-
->[!NOTE]
->This tutorial uses admin login credentials for Azure Container Registry, which are convenient for development and test scenarios. When you're ready for production scenarios, we recommend a least-privilege authentication option like service principals. For more information, see [Manage access to your container registry](production-checklist.md#manage-access-to-your-container-registry).
-
-### Select your target architecture
-
-Visual Studio Code can develop Python modules for Linux AMD64, Linux ARM32v7, Linux ARM64v8, and Windows AMD64 devices. You need to select which architecture you're targeting with each solution, because the container is built and run differently for each architecture type. The default is Linux AMD64.
-
-1. Open the command palette and search for **Azure IoT Edge: Set Default Target Platform for Edge Solution**, or select the shortcut icon in the side bar at the bottom of the window.
-
-2. In the command palette, select the target architecture from the list of options. For this tutorial, we're using an Ubuntu virtual machine as the IoT Edge device, so we'll keep the default **amd64**.
-
-### Update the module with custom code
-
-Each template includes sample code, which takes simulated sensor data from the **SimulatedTemperatureSensor** module and routes it to the IoT hub. In this section, add code to the **PythonModule** that analyzes the messages before sending them.
-
-1. In the Visual Studio Code explorer, open **modules** > **PythonModule** > **main.py**.
-
-2. At the top of the **main.py** file, import the **json** library:
-
- ```python
- import json
- ```
-
-3. Add global definitions for **TEMPERATURE_THRESHOLD**, **RECEIVED_MESSAGES** and **TWIN_CALLBACKS** variables. The temperature threshold sets the value that the measured machine temperature must exceed for the data to be sent to the IoT hub.
-
- ```python
- # global counters
- TEMPERATURE_THRESHOLD = 25
- TWIN_CALLBACKS = 0
- RECEIVED_MESSAGES = 0
- ```
-
-4. Replace the **create_client** function with the following code:
-
- ```python
- def create_client():
- client = IoTHubModuleClient.create_from_edge_environment()
-
- # Define function for handling received messages
- async def receive_message_handler(message):
- global RECEIVED_MESSAGES
- print("Message received")
- size = len(message.data)
- message_text = message.data.decode('utf-8')
- print(" Data: <<<{data}>>> & Size={size}".format(data=message.data, size=size))
- print(" Properties: {}".format(message.custom_properties))
- RECEIVED_MESSAGES += 1
- print("Total messages received: {}".format(RECEIVED_MESSAGES))
-
- if message.input_name == "input1":
- message_json = json.loads(message_text)
- if "machine" in message_json and "temperature" in message_json["machine"] and message_json["machine"]["temperature"] > TEMPERATURE_THRESHOLD:
- message.custom_properties["MessageType"] = "Alert"
- print("ALERT: Machine temperature {temp} exceeds threshold {threshold}".format(
- temp=message_json["machine"]["temperature"], threshold=TEMPERATURE_THRESHOLD
- ))
- await client.send_message_to_output(message, "output1")
-
- # Define function for handling received twin patches
- async def receive_twin_patch_handler(twin_patch):
- global TEMPERATURE_THRESHOLD
- global TWIN_CALLBACKS
- print("Twin Patch received")
- print(" {}".format(twin_patch))
- if "TemperatureThreshold" in twin_patch:
- TEMPERATURE_THRESHOLD = twin_patch["TemperatureThreshold"]
- TWIN_CALLBACKS += 1
- print("Total calls confirmed: {}".format(TWIN_CALLBACKS))
-
- try:
- # Set handler on the client
- client.on_message_received = receive_message_handler
- client.on_twin_desired_properties_patch_received = receive_twin_patch_handler
- except:
- # Cleanup if failure occurs
- client.shutdown()
- raise
-
- return client
- ```
-
-7. Save the main.py file.
-
-8. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
-
-9. Add the **PythonModule** module twin to the deployment manifest. Insert the following JSON content at the bottom of the **moduleContent** section, after the **$edgeHub** module twin:
-
- ```json
- "PythonModule": {
- "properties.desired":{
- "TemperatureThreshold":25
- }
- }
- ```
-
- ![Add module twin to deployment template](./media/tutorial-python-module/module-twin.png)
-
-10. Save the deployment.template.json file.
-
-## Build and push your module
-
-In the previous section, you created an IoT Edge solution and added code to the PythonModule that will filter out messages where the reported machine temperature is within the acceptable limits. Now you need to build the solution as a container image and push it to your container registry.
-
-1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
-
-2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
-
- ```bash
- docker login -u <ACR username> -p <ACR password> <ACR login server>
- ```
-
- You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
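-
-   You can also look up these values from the command line instead of the portal. The following is a sketch that assumes the Azure CLI is installed and signed in to your subscription:
-
-   ```bash
-   # Show the admin username and first password for the registry.
-   az acr credential show --name <registry name> --query "{username:username, password:passwords[0].value}"
-   ```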
-
-3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
-
- The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
-
- This process may take several minutes the first time, but is faster the next time that you run the commands.
-
-## Deploy modules to device
-
-Use the Visual Studio Code explorer and the Azure IoT Edge extension to deploy the module project to your IoT Edge device. You already have a deployment manifest prepared for your scenario, the **deployment.amd64.json** file in the config folder. All you need to do now is select a device to receive the deployment.
-
-Make sure that your IoT Edge device is up and running.
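-
-If you'd rather deploy from the command line than use the Visual Studio Code steps below, the following sketch shows a roughly equivalent Azure CLI call (it assumes the `azure-iot` CLI extension is installed; names are placeholders):
-
-```bash
-# Apply the generated deployment manifest to a single IoT Edge device.
-az iot edge set-modules --hub-name <your hub> --device-id <your device> --content ./config/deployment.amd64.json
-```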
-
-1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
-
-2. Right-click the name of your IoT Edge device, then select **Create Deployment for Single Device**.
-
-3. Select the **deployment.amd64.json** file in the **config** folder and then click **Select Edge Deployment Manifest**. Do not use the deployment.template.json file.
-
-4. Under your device, expand **Modules** to see a list of deployed and running modules. Click the refresh button. You should see the new **PythonModule** running along with the **SimulatedTemperatureSensor** module and the **$edgeAgent** and **$edgeHub**.
-
- It may take a few minutes for the modules to start. The IoT Edge runtime needs to receive its new deployment manifest, pull down the module images from the container runtime, then start each new module.
-
-## View the generated data
-
-Once you apply the deployment manifest to your IoT Edge device, the IoT Edge runtime on the device collects the new deployment information and starts executing on it. Any modules running on the device that aren't included in the deployment manifest are stopped. Any modules missing from the device are started.
-
-You can view the status of your IoT Edge device using the **Azure IoT Hub Devices** section of the Visual Studio Code explorer. Expand the details of your device to see a list of deployed and running modules.
-
-1. In the Visual Studio Code explorer, right-click the name of your IoT Edge device and select **Start Monitoring Built-in Event Endpoint**.
-
-2. View the messages arriving at your IoT Hub. It may take a while for the messages to arrive. The IoT Edge device has to receive its new deployment and start all the modules. Then, the changes we made to the PythonModule code wait until the machine temperature reaches 25 degrees before sending messages. It also adds the message type **Alert** to any messages that reach that temperature threshold.
-
-## Edit the module twin
-
-We used the PythonModule module twin in the deployment manifest to set the temperature threshold at 25 degrees. You can use the module twin to change the functionality without having to update the module code.
-
-1. In Visual Studio Code, expand the details under your IoT Edge device to see the running modules.
-
-2. Right-click **PythonModule** and select **Edit module twin**.
-
-3. Find **TemperatureThreshold** in the desired properties. Change its value to a new temperature 5 degrees to 10 degrees higher than the latest reported temperature.
-
-4. Save the module twin file.
-
-5. Right-click anywhere in the module twin editing pane and select **Update module twin**.
-
-6. Monitor the incoming device-to-cloud messages. You should see the messages stop until the new temperature threshold is reached.
-
-## Clean up resources
-
-If you plan to continue to the next recommended article, you can keep the resources and configurations that you created and reuse them. You can also keep using the same IoT Edge device as a test device.
-
-Otherwise, you can delete the local configurations and the Azure resources that you used in this article to avoid charges.
--
-## Next steps
-
-In this tutorial, you created an IoT Edge module that contains code to filter raw data generated by your IoT Edge device.
-
-You can continue on to the next tutorials to learn how Azure IoT Edge can help you deploy Azure cloud services to process and analyze data at the edge.
-
-> [!div class="nextstepaction"]
-> [Functions](tutorial-deploy-function.md)
-> [Stream Analytics](tutorial-deploy-stream-analytics.md)
-> [Machine Learning](tutorial-deploy-machine-learning.md)
-> [Custom Vision Service](tutorial-deploy-custom-vision.md)
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
The tier also determines the throttling limits that IoT Hub enforces on all operations.
Operation throttles are rate limitations that are applied in minute ranges and are intended to prevent abuse. They're also subject to [traffic shaping](#traffic-shaping).
-It's a good practice to throttle your calls so that you don't hit/exceed the throttling limits. If you do hit the limit, IoT Hub responds with error code 429 and the client should back-off and retry. These limits are per hub (or in some cases per hub/unit). For more information, see [Retry patterns](../iot-develop/how-to-use-reliability-features-in-sdks.md#retry-patterns).
+It's a good practice to throttle your calls so that you don't hit or exceed the throttling limits. If you do hit a limit, IoT Hub responds with error code 429 and the client should back off and retry. These limits are per hub (or in some cases per hub/unit). For more information, see [Retry patterns](../iot-develop/concepts-manage-device-reconnections.md#retry-patterns).
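
As an illustration of that back-off-and-retry guidance, here's a client-agnostic sketch; `$IOT_ENDPOINT` is a hypothetical variable, and real applications should prefer the retry policies built into the Azure IoT SDKs:

```bash
# Retry with exponential backoff while the service answers HTTP 429.
delay=1
for attempt in 1 2 3 4 5; do
  status=$(curl -s -o /dev/null -w '%{http_code}' "$IOT_ENDPOINT")
  if [ "$status" != "429" ]; then
    break
  fi
  sleep "$delay"
  delay=$((delay * 2))
done
```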
### Basic and standard tier operations
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
Azure IoT SDKs are also available for the following
## Next steps
-Learn how to [manage connectivity and reliable messaging](../iot-develop/how-to-use-reliability-features-in-sdks.md) using the IoT Hub device SDKs.
+Learn how to [manage connectivity and reliable messaging](../iot-develop/concepts-manage-device-reconnections.md) using the IoT Hub device SDKs.
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ha-dr.md
Depending on the uptime goals you define for your IoT solutions, you should dete
## Intra-region HA
-The IoT Hub service provides intra-region HA by implementing redundancies in almost all layers of the service. The [SLA published by the IoT Hub service](https://azure.microsoft.com/support/legal/sl#retry-patterns) must be built in to the components interacting with a cloud application to deal with transient failures.
+The IoT Hub service provides intra-region HA by implementing redundancies in almost all layers of the service, as reflected in the [SLA published by the IoT Hub service](https://azure.microsoft.com/support/legal/sla/iot-hub/). Even so, retry logic to deal with transient failures must be built in to the components interacting with a cloud application.
## Availability zones
iot-hub Iot Hub Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-connectivity.md
To help improve the documentation for everyone, leave a comment in the feedback
* To learn more about resolving transient issues, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
-* To learn more about the Azure IoT device SDKs and managing retries, see [Retry patterns](../iot-develop/how-to-use-reliability-features-in-sdks.md#retry-patterns).
+* To learn more about the Azure IoT device SDKs and managing retries, see [Retry patterns](../iot-develop/concepts-manage-device-reconnections.md#retry-patterns).
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
If you're using the CLI commands to migrate to a new root certificate and receiv
If you're experiencing general connectivity issues with IoT Hub, check out these troubleshooting resources:
-* [Connection and retry patterns with device SDKs](../iot-develop/how-to-use-reliability-features-in-sdks.md#connection-and-retry).
+* [Connection and retry patterns with device SDKs](../iot-develop/concepts-manage-device-reconnections.md#connection-and-retry).
* [Understand and resolve Azure IoT Hub error codes](troubleshoot-error-codes.md).

If you're watching Azure Monitor after migrating certificates, you should look for a DeviceDisconnect event followed by a DeviceConnect event, as demonstrated in the following screenshot:
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
To resolve this error:
* Use the latest versions of the [IoT SDKs](iot-hub-devguide-sdks.md).
* See the guidance for [IoT Hub internal server errors](#500xxx-internal-errors).
-We recommend using Azure IoT device SDKs to manage connections reliably. To learn more, see [Manage connectivity and reliable messaging by using Azure IoT Hub device SDKs](../iot-develop/how-to-use-reliability-features-in-sdks.md)
+We recommend using Azure IoT device SDKs to manage connections reliably. To learn more, see [Manage connectivity and reliable messaging by using Azure IoT Hub device SDKs](../iot-develop/concepts-manage-device-reconnections.md)
## 409001 DeviceAlreadyExists
You may see that your request to IoT Hub fails with an error that begins with 500.
There can be a number of causes for a 500xxx error response. In all cases, the issue is most likely transient. While the IoT Hub team works hard to maintain [the SLA](https://azure.microsoft.com/support/legal/sla/iot-hub/), small subsets of IoT Hub nodes can occasionally experience transient faults. When your device tries to connect to a node that's having issues, you receive this error.
-To mitigate 500xxx errors, issue a retry from the device. To [automatically manage retries](../iot-develop/how-to-use-reliability-features-in-sdks.md#connection-and-retry), make sure you use the latest version of the [Azure IoT SDKs](iot-hub-devguide-sdks.md). For best practice on transient fault handling and retries, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
+To mitigate 500xxx errors, issue a retry from the device. To [automatically manage retries](../iot-develop/concepts-manage-device-reconnections.md#connection-and-retry), make sure you use the latest version of the [Azure IoT SDKs](iot-hub-devguide-sdks.md). For best practice on transient fault handling and retries, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
If the problem persists, check [Resource Health](iot-hub-azure-service-health-integration.md#check-iot-hub-health-with-azure-resource-health) and [Azure Status](https://azure.status.microsoft/) to see if IoT Hub has a known problem. You can also use the [manual failover feature](tutorial-manual-failover.md).
iot Iot Overview Device Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-connectivity.md
A device can establish a secure connection to an IoT hub:
The advantage of using DPS is that you don't need to configure all of your devices with connection-strings that are specific to your IoT hub. Instead, you configure your devices to connect to a well-known, common DPS endpoint where they discover their connection details. To learn more, see [Device Provisioning Service](../iot-dps/about-iot-dps.md).
+To learn more about implementing automatic reconnections to endpoints, see [Manage device reconnections to create resilient applications](../iot-develop/concepts-manage-device-reconnections.md).
+
## Device connection strings

A device connection string provides a device with the information it needs to connect securely to an IoT hub. The connection string includes the following information:
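
For illustration, a symmetric-key device connection string typically has this shape (a hypothetical example with placeholder values, shown here as a shell variable assignment):

```bash
export IOTHUB_DEVICE_CONNECTION_STRING="HostName=<your hub>.azure-devices.net;DeviceId=<your device>;SharedAccessKey=<base64-encoded key>"
```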
iot Iot Overview Device Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-development.md
The [IoT device development](../iot-develop/about-iot-develop.md) site includes
You can find more samples in the [code sample browser](/samples/browse/?expanded=azure&products=azure-iot%2Cazure-iot-edge%2Cazure-iot-pnp%2Cazure-rtos).
+To learn more about implementing automatic reconnections to endpoints, see [Manage device reconnections to create resilient applications](../iot-develop/concepts-manage-device-reconnections.md).
+
## Device development without a device SDK

Although we recommend that you use one of the device SDKs, there may be scenarios where you prefer not to. In these scenarios, your device code must directly use one of the communication protocols that IoT Hub and the Device Provisioning Service (DPS) support.
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
A compute instance is a fully managed cloud-based workstation optimized for your
* Secure your compute instance with **[No public IP](./how-to-secure-training-vnet.md)**. * The compute instance is also a secure training compute target similar to [compute clusters](how-to-create-attach-compute-cluster.md), but it's single node.
-* You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#create-on-behalf-of-preview)**.
+* You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#create-on-behalf-of)**.
* You can also **[use a setup script](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance as per your needs.
-* To save on costs, **[create a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop)** to automatically start and stop the compute instance, or [enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown-preview)
+* To save on costs, **[create a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop)** to automatically start and stop the compute instance, or [enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown)
## Tools and environments
The following tools and environments are already installed on the compute instance:
|-|:-:|
|R kernel||
-You can [Add RStudio or Posit Workbench (formerly RStudio Workbench)](how-to-create-manage-compute-instance.md#add-custom-applications-such-as-rstudio-or-posit-workbench-preview) when you create the instance.
+You can [Add RStudio or Posit Workbench (formerly RStudio Workbench)](how-to-create-manage-compute-instance.md#add-custom-applications-such-as-rstudio-or-posit-workbench) when you create the instance.
|**PYTHON** tools & environments|Details|
|-|-|
|Anaconda Python||
|Jupyter and extensions||
|Jupyterlab and extensions||
-[Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install)</br>from PyPI|Includes most of the azureml extra packages. To see the full list, [open a terminal window on your compute instance](how-to-access-terminal.md) and run <br/> `conda list -n azureml_py36 azureml*` |
+[Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install)</br>from PyPI|Includes azure-ai-ml and many common azure extra packages. To see the full list, [open a terminal window on your compute instance](how-to-access-terminal.md) and run <br/> `conda list -n azureml_py310_sdkv2 ^azure` |
|Other PyPI packages|`jupytext`</br>`tensorboard`</br>`nbconvert`</br>`notebook`</br>`Pillow`|
|Conda packages|`cython`</br>`numpy`</br>`ipykernel`</br>`scikit-learn`</br>`matplotlib`</br>`tqdm`</br>`joblib`</br>`nodejs`|
|Deep learning packages|`PyTorch`</br>`TensorFlow`</br>`Keras`</br>`Horovod`</br>`MLFlow`</br>`pandas-ml`</br>`scrapbook`|
Follow the steps in the [Quickstart: Create workspace resources you need to get
For more options, see [create a new compute instance](how-to-create-manage-compute-instance.md?tabs=azure-studio#create).
-As an administrator, you can **[create a compute instance for others in the workspace (preview)](how-to-create-manage-compute-instance.md#create-on-behalf-of-preview)**.
+As an administrator, you can **[create a compute instance for others in the workspace](how-to-create-manage-compute-instance.md#create-on-behalf-of)**.
-You can also **[use a setup script (preview)](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance.
+You can also **[use a setup script](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance.
Other ways to create a compute instance: * Directly from the integrated notebooks experience.
A compute instance:
* Has a job queue.
* Runs jobs securely in a virtual network environment, without requiring enterprises to open up SSH port. The job executes in a containerized environment and packages your model dependencies in a Docker container.
-* Can run multiple small jobs in parallel (preview). One job per core can run in parallel while the rest of the jobs are queued.
+* Can run multiple small jobs in parallel. One job per core can run in parallel while the rest of the jobs are queued.
* Supports single-node multi-GPU [distributed training](how-to-train-distributed-gpu.md) jobs

You can use compute instance as a local inferencing deployment target for test/debug scenarios.

> [!TIP]
-> The compute instance has 120GB OS disk. If you run out of disk space and get into an unusable state, please clear at least 5 GB disk space on OS disk (mounted on /) through the compute instance terminal by removing files/folders and then do `sudo reboot`. Temporary disk will be freed after restart; you do not need to clear space on temp disk manually. To access the terminal go to compute list page or compute instance details page and click on **Terminal** link. You can check available disk space by running `df -h` on the terminal. Clear at least 5 GB space before doing `sudo reboot`. Please do not stop or restart the compute instance through the Studio until 5 GB disk space has been cleared. Auto shutdowns, including scheduled start or stop as well as idle shutdowns(preview), will not work if the CI disk is full.
+> The compute instance has a 120 GB OS disk. If you run out of disk space and get into an unusable state, clear at least 5 GB of disk space on the OS disk (mounted on /) through the compute instance terminal by removing files or folders, and then run `sudo reboot`. The temporary disk is freed after restart; you don't need to clear space on the temp disk manually. To access the terminal, go to the compute list page or the compute instance details page and select the **Terminal** link. You can check available disk space by running `df -h` in the terminal. Clear at least 5 GB of space before running `sudo reboot`. Don't stop or restart the compute instance through the studio until 5 GB of disk space has been cleared. Auto shutdowns, including scheduled start or stop as well as idle shutdowns, won't work if the compute instance disk is full.
## Next steps
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
When created, these compute resources are automatically part of your workspace,
> [!NOTE]
> To avoid charges when the compute is idle:
> * For compute *cluster* make sure the minimum number of nodes is set to 0.
-> * For a compute *instance*, [enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown-preview).
+> * For a compute *instance*, [enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown).
### Supported VM series and sizes
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
When you create resources for an Azure Machine Learning workspace, resources for
* [Application Insights](https://azure.microsoft.com/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work.
-* [Enable idle shutdown (preview)](how-to-create-manage-compute-instance.md#enable-idle-shutdown-preview) to save on cost when the VM has been idle for a specified time period.
+* [Enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown) to save on cost when the VM has been idle for a specified time period.
* Or [set up a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
To access the terminal:
In addition to the steps above, you can also access the terminal from:
-* RStudio or Posit Workbench (formerly RStudio Workbench) (See [Add custom applications such as RStudio or Posit Workbench)](how-to-create-manage-compute-instance.md?tabs=python#add-custom-applications-such-as-rstudio-or-posit-workbench-preview)): Select the **Terminal** tab on top left.
+* RStudio or Posit Workbench (formerly RStudio Workbench) (See [Add custom applications such as RStudio or Posit Workbench)](how-to-create-manage-compute-instance.md?tabs=python#add-custom-applications-such-as-rstudio-or-posit-workbench)): Select the **Terminal** tab on top left.
* Jupyter Lab: Select the **Terminal** tile under the **Other** heading in the Launcher tab. * Jupyter: Select **New>Terminal** on top right in the Files tab. * SSH to the machine, if you enabled SSH access when the compute instance was created.
To integrate Git with your Azure Machine Learning workspace, see [Git integrati
Or you can install packages directly in Jupyter Notebook, RStudio, or Posit Workbench (formerly RStudio Workbench):
-* RStudio or Posit Workbench(see [Add custom applications such as RStudio or Posit Workbench](how-to-create-manage-compute-instance.md#add-custom-applications-such-as-rstudio-or-posit-workbench-preview)): Use the **Packages** tab on the bottom right, or the **Console** tab on the top left.
+* RStudio or Posit Workbench(see [Add custom applications such as RStudio or Posit Workbench](how-to-create-manage-compute-instance.md#add-custom-applications-such-as-rstudio-or-posit-workbench)): Use the **Packages** tab on the bottom right, or the **Console** tab on the top left.
* Python: Add install code and execute in a Jupyter Notebook cell. > [!NOTE]
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
Create one anytime from within your Azure Machine Learning workspace. Provide ju
To learn more about compute instances, including how to install packages, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md). > [!TIP]
-> To prevent incurring charges for an unused compute instance, [enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown-preview).
+> To prevent incurring charges for an unused compute instance, [enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown).
In addition to a Jupyter Notebook server and JupyterLab, you can use compute instances in the [integrated notebook feature inside of Azure Machine Learning studio](how-to-run-jupyter-notebooks.md).
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Learn how to create and manage a [compute instance](concept-compute-instance.md)
Use a compute instance as your fully configured and managed development environment in the cloud. For development and testing, you can also use the instance as a [training compute target](concept-compute-target.md#training-compute-targets). A compute instance can run multiple jobs in parallel and has a job queue. As a development environment, a compute instance can't be shared with other users in your workspace.
-> [!IMPORTANT]
-> Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
In this article, you learn how to:

* [Create](#create) a compute instance
* [Manage](#manage) (start, stop, restart, delete) a compute instance
* [Create a schedule](#schedule-automatic-start-and-stop) to automatically start and stop the compute instance
-* [Enable idle shutdown](#enable-idle-shutdown-preview)
+* [Enable idle shutdown](#enable-idle-shutdown)
You can also [use a setup script](how-to-customize-compute-instance.md) to create the compute instance with your own custom environment.
Where the file *create-instance.yml* is:
1. Select **Create** unless you want to configure advanced settings for the compute instance.
1. <a name="advanced-settings"></a> Select **Next: Advanced Settings** if you want to:
- * Enable idle shutdown (preview). Configure a compute instance to automatically shut down if it's inactive. For more information, see [enable idle shutdown](#enable-idle-shutdown-preview).
+ * Enable idle shutdown. Configure a compute instance to automatically shut down if it's inactive. For more information, see [enable idle shutdown](#enable-idle-shutdown).
 * Add schedule. Schedule times for the compute instance to automatically start and/or shut down. See [schedule details](#schedule-automatic-start-and-stop) below.
 * Enable SSH access. Follow the [detailed SSH access instructions](#enable-ssh-access) below.
 * Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
- * Assign the computer to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of-preview)
- * Provision with a setup script (preview) - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md).
+ * Assign the compute instance to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of).
+ * Provision with a setup script - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md).
You can also create a compute instance with an [Azure Resource Manager template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-compute-create-computeinstance).
SSH access is disabled by default. SSH access can't be changed after creation.
-## Create on behalf of (preview)
-
-> [!IMPORTANT]
-> Items marked (preview) below are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+## Create on behalf of
As an administrator, you can create a compute instance on behalf of a data scientist and assign the instance to them with:
The data scientist can start, stop, and restart the compute instance. They can u
* Posit Workbench (formerly RStudio Workbench)
* Integrated notebooks
-## Enable idle shutdown (preview)
-
-> [!IMPORTANT]
-> Items marked (preview) below are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+## Enable idle shutdown
To avoid getting charged for a compute instance that is switched on but inactive, you can configure when to shut down your compute instance due to inactivity.
Activity on custom applications installed on the compute instance isn't consider
Also, if the idle shutdown settings are updated to a duration shorter than the time the compute instance has already been idle, the idle time clock resets to 0. For example, if the compute instance has already been idle for 20 minutes, and the shutdown settings are updated to 15 minutes, the idle time clock resets to 0.
-Use **Manage preview features** to access this feature.
-
-1. In the workspace toolbar, select the **Manage preview features** image.
-1. Scroll down until you see **Configure auto-shutdown for idle compute instances**.
-1. Toggle the switch to enable the feature.
--
-Once enabled, the setting can be configured during compute instance creation or for existing compute instances via the following interfaces:
+The setting can be configured during compute instance creation or for existing compute instances via the following interfaces:
# [Python SDK](#tab/python)
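For example, a minimal sketch using the Python SDK v2. It assumes `azure-ai-ml` 1.2 or later (where the `ComputeInstance` entity exposes `idle_time_before_shutdown_minutes`), an authenticated `MLClient` named `ml_client`, and placeholder name and size values:

```python
from azure.ai.ml.entities import ComputeInstance

# Create (or update) a compute instance that shuts down after 60 idle minutes
ci = ComputeInstance(
    name="<instance-name>",
    size="STANDARD_DS3_V2",
    idle_time_before_shutdown_minutes=60,
)
ml_client.compute.begin_create_or_update(ci).result()
```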
You can also create your own custom Azure policy. For example, if the below poli
Define multiple schedules for auto-shutdown and auto-start. For instance, create a schedule to start at 9 AM and stop at 6 PM from Monday-Thursday, and a second schedule to start at 9 AM and stop at 4 PM for Friday. You can create a total of four schedules per compute instance.
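As a sketch of what such a schedule might look like with the Python SDK v2 (assuming an authenticated `MLClient` named `ml_client`; the instance name, size, and start time are placeholders):

```python
from azure.ai.ml.entities import (
    ComputeInstance,
    ComputeSchedules,
    ComputeStartStopSchedule,
    RecurrencePattern,
    RecurrenceTrigger,
)

# Stop at 6 PM every Monday through Thursday
stop_schedule = ComputeStartStopSchedule(
    action="stop",
    trigger=RecurrenceTrigger(
        start_time="2023-06-01T00:00:00",  # placeholder start time
        frequency="week",
        interval=1,
        schedule=RecurrencePattern(
            week_days=["Monday", "Tuesday", "Wednesday", "Thursday"],
            hours=18,
            minutes=[0],
        ),
    ),
)

ci = ComputeInstance(
    name="<instance-name>",
    size="STANDARD_DS3_V2",
    schedules=ComputeSchedules(compute_start_stop=[stop_schedule]),
)
ml_client.compute.begin_create_or_update(ci).result()
```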
-Schedules can also be defined for [create on behalf of](#create-on-behalf-of-preview) compute instances. You can create a schedule that creates the compute instance in a stopped state. Stopped compute instances are useful when you create a compute instance on behalf of another user.
+Schedules can also be defined for [create on behalf of](#create-on-behalf-of) compute instances. You can create a schedule that creates the compute instance in a stopped state. Stopped compute instances are useful when you create a compute instance on behalf of another user.
Prior to a scheduled shutdown, users see a notification alerting them that the compute instance is about to shut down. At that point, the user can choose to dismiss the upcoming shutdown event, for example, if they're in the middle of using their compute instance.
az login --identity --username $DEFAULT_IDENTITY_CLIENT_ID
> [!NOTE]
> You cannot use ```azcopy``` when trying to use managed identity. ```azcopy login --identity``` will not work.
-## Add custom applications such as RStudio or Posit Workbench (preview)
-
-> [!IMPORTANT]
-> Items marked (preview) below are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+## Add custom applications such as RStudio or Posit Workbench
You can set up other applications, such as RStudio or Posit Workbench (formerly RStudio Workbench), when creating a compute instance. Follow these steps in studio to set up a custom application on your compute instance.
For each compute instance in a workspace that you created (or that was created f
* SSH into the compute instance. SSH access is disabled by default but can be enabled at compute instance creation time. SSH access is through a public/private key mechanism. The tab gives you details for the SSH connection such as IP address, username, and port number. In a virtual network deployment, disabling SSH prevents SSH access from the public internet; you can still SSH from within the virtual network using the private IP address of the compute instance node and port 22.
* Select the compute name to:
    * View details about a specific compute instance such as IP address and region.
- * Create or modify the schedule for starting and stopping the compute instance (preview). Scroll down to the bottom of the page to edit the schedule.
+ * Create or modify the schedule for starting and stopping the compute instance. Scroll down to the bottom of the page to edit the schedule.
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
Last updated 03/15/2022
#Customer intent: I'm a data scientist with ML knowledge in the machine learning space, looking to build ML models using data in Azure Machine Learning with full control of the model training including debugging and monitoring of live jobs.
-# Debug jobs and monitor training progress (preview)
-
-> [!IMPORTANT]
-> Items marked (preview) in this article are currently in public preview.
-> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Debug jobs and monitor training progress
Machine learning model training is usually an iterative process and requires significant experimentation. With the Azure Machine Learning interactive job experience, data scientists can use the Azure Machine Learning Python SDK v2, Azure Machine Learning CLI v2, or Azure Machine Learning studio to access the container where their job is running. Once the job container is accessed, users can iterate on training scripts, monitor training progress, or debug the job remotely like they typically do on their local machines. Jobs can be interacted with via different training applications including **JupyterLab, TensorBoard, VS Code** or by connecting to the job container directly via **SSH**.
Interactive training is supported on **Azure Machine Learning Compute Clusters**
## Prerequisites

- Review [getting started with training on Azure Machine Learning](./how-to-train-model.md).
-- To use this feature in Azure Machine Learning studio, enable the "Debug & monitor your training jobs" flight via the [preview panel](./how-to-enable-preview-features.md#how-do-i-enable-preview-features).
- To use **VS Code**, [follow this guide](how-to-setup-vs-code.md) to set up the Azure Machine Learning extension.
- Make sure your job environment has the `openssh-server` and `ipykernel ~=6.0` packages installed (all Azure Machine Learning curated training environments have these packages installed by default).
- Interactive applications can't be enabled on distributed training runs where the distribution type is anything other than PyTorch, TensorFlow, or MPI. Custom distributed training setup (configuring multi-node training without using the above distribution frameworks) isn't currently supported.
By specifying interactive applications at job creation, you can connect directly
6. Review and create the job.
-If you don't see the above options, make sure you have enabled the "Debug & monitor your training jobs" flight via the [preview panel](./how-to-enable-preview-features.md#how-do-i-enable-preview-features).
# [Python SDK](#tab/python)

1. Define the interactive services you want to use for your job. Make sure to replace `your compute name` with your own value. If you want to use your own custom environment, follow the examples in [this tutorial](how-to-manage-environments-v2.md) to create a custom environment.
To interact with your running job, click the button **Debug and monitor** on the
:::image type="content" source="media/interactive-jobs/debug-and-monitor.png" alt-text="Screenshot of interactive jobs debug and monitor panel location."::: +++++++++++++++++++++++++++++++++++++++ Clicking the applications in the panel opens a new tab for the applications. You can access the applications only when they are in **Running** status and only the **job owner** is authorized to access the applications. If you're training on multiple nodes, you can pick the specific node you would like to interact with. :::image type="content" source="media/interactive-jobs/interactive-jobs-application-list.png" alt-text="Screenshot of interactive jobs right panel information. Information content will vary depending on the user's data.":::
-It might take a few minutes to start the job and the training applications specified during job creation. If you don't see the above options, make sure you have enabled the "Debug & monitor your training jobs" flight via the [preview panel](./how-to-enable-preview-features.md#how-do-i-enable-preview-features).
+It might take a few minutes to start the job and the training applications specified during job creation.
# [Python SDK](#tab/python)

- Once the job is submitted, you can use `ml_client.jobs.show_services("<job name>", <compute node index>)` to view the interactive service endpoints.
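For example, a minimal sketch (assuming an authenticated `MLClient` named `ml_client`; the job name is a placeholder):

```python
# List the interactive service endpoints exposed on node 0 of a running job
services = ml_client.jobs.show_services("<job name>", 0)
for name, service in services.items():
    print(name, service)
```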
When you click on the endpoints to interact with your job, you're taken to the u
- If you have logged TensorFlow events for your job, you can use TensorBoard to monitor the metrics when your job is running.

:::image type="content" source="./media/interactive-jobs/tensorboard-open.png" alt-text="Screenshot of interactive jobs tensorboard panel when first opened. This information will vary depending upon customer data":::
-
-If you don't see the above options, make sure you have enabled the "Debug & monitor your training jobs" flight via the [preview panel](./how-to-enable-preview-features.md#how-do-i-enable-preview-features).
### End job

Once you're done with the interactive training, you can also go to the job details page to cancel the job, which will release the compute resource. Alternatively, use `az ml job cancel -n <your job name>` in the CLI or `ml_client.job.cancel("<job name>")` in the SDK.
To submit a job with a debugger attached and the execution paused, you can use d
## Next steps

+ Learn more about [how and where to deploy a model](./how-to-deploy-online-endpoints.md).
+
machine-learning How To Manage Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md
Low-Priority VMs have a single quota separate from the dedicated quota value, wh
## Schedule compute instances

When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work.
-* [Enable idle shutdown (preview)](how-to-create-manage-compute-instance.md#enable-idle-shutdown-preview) to save on cost when the VM has been idle for a specified time period.
+* [Enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown) to save on cost when the VM has been idle for a specified time period.
* Or [set up a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance to save cost when you aren't planning to use it.

## Use reserved instances
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
For more information, see [Container Instances limits](../azure-resource-manager
### Storage

Azure Storage has a limit of 250 storage accounts per region, per subscription. This limit includes both Standard and Premium storage accounts.
-To increase the limit, make a request through [Azure Support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest/). The Azure Storage team will review your case and can approve up to 250 storage accounts for a region.
--
## Workspace-level quotas

Use workspace-level quotas to manage Azure Machine Learning compute target allocation between multiple [workspaces](concept-workspace.md) in the same subscription.
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
description: Learn how to schedule pipeline jobs that allow you to automate rout
--++ Previously updated : 12/11/2022 Last updated : 03/27/2023
In this article, you'll learn how to programmatically schedule a pipeline to run
- An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
- Understanding of Azure Machine Learning pipelines. See [what are machine learning pipelines](concept-ml-pipelines.md), and how to create a pipeline job in [CLI v2](how-to-create-component-pipelines-cli.md) or [SDK v2](how-to-create-component-pipeline-python.md).
-- To enable this feature:
- 1. Navigate to Azure Machine Learning studio UI.
- 2. Select **Manage preview features** (megaphone icon) among the icons on the top right side of the screen.
-
- [!INCLUDE [machine-learning-preview-generic-disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-
- 1. In **Managed preview feature** panel, toggle on **Create and manage your pipeline schedule** feature.
- :::image type="content" source="./media/how-to-schedule-pipeline-job/manage-preview-features.png" alt-text="Screenshot of manage preview features toggled on." lightbox= "./media/how-to-schedule-pipeline-job/manage-preview-features.png":::
machine-learning How To Share Data Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-data-across-workspaces-with-registries.md
+
+ Title: Share data across workspaces with registries (preview)
+
+description: Learn how to practice cross-workspace MLOps and collaborate across teams by sharing data through registries.
++++++ Last updated : 03/21/2023++++
+# Share data across workspaces with registries (preview)
+
+Azure Machine Learning registry (preview) enables you to collaborate across workspaces within your organization. Using registries, you can share models, components, environments, and data. In this article, you learn how to:
+
+* Create a data asset in the registry.
+* Share an existing data asset from a workspace to a registry.
+* Use the data asset from registry as input to a model training job in a workspace.
++
+### Key scenario addressed by data sharing using Azure Machine Learning registry
+
+You may want to have data shared across multiple teams, projects, or workspaces in a central location. Such data doesn't need sensitive access controls and can be used broadly in the organization.
+
+Examples include:
+* A team wants to share a public dataset that is preprocessed and ready to use in experiments.
+* Your organization has acquired a particular dataset for a project from an external vendor and wants to make it available to all teams working on the project.
+* A team wants to share data assets across workspaces in different regions.
+
+In these scenarios, you can create a data asset in a registry or share an existing data asset from a workspace to a registry. This data asset can then be used across multiple workspaces.
+
+### Scenarios NOT addressed by data sharing using Azure Machine Learning registry
+
+* Sharing sensitive data that requires fine-grained access control. You can't create a data asset in a registry to share with a small subset of users/workspaces while the registry is accessible by many other users in the org.
+
+* Sharing data that is available in existing storage that must not be copied, or is too large or too expensive to be copied. Whenever data assets are created in a registry, a copy of the data is ingested into the registry storage so that it can be replicated.
+
+### Data asset types supported by Azure Machine Learning registry
+
+> [!TIP]
+> Check out the following **canonical scenarios** when deciding if you want to use `uri_file`, `uri_folder`, or `mltable` for your scenario.
+
+You can create three data asset types:
+
+| Type | V2 API | Canonical scenario |
+| :- |:-| :--|
+| **File:** Reference a single file | `uri_file` | Read/write a single file - the file can have any format. |
+|**Folder:** Reference a single folder | `uri_folder` | You must read/write a directory of parquet/CSV files into Pandas/Spark. Deep-learning with images, text, audio, video files located in a directory. |
+| **Table:** Reference a data table | `mltable` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data. |
+
+### Paths supported by Azure Machine Learning registry
+
+When you create a data asset, you must specify a **path** parameter that points to the data location. Currently, the only supported paths are to locations on your local computer.
+
+> [!TIP]
+> "Local" means the local storage for the computer you are using. For example, if you're using a laptop, the local drive. If an Azure Machine Learning compute instance, the "local" drive of the compute instance.
++
+## Prerequisites
+
+Before following the steps in this article, make sure you have the following prerequisites:
+
+- Familiarity with [Azure Machine Learning registries](concept-machine-learning-registries-mlops.md) and [Data concepts in Azure Machine Learning](concept-data.md).
+
+- An Azure Machine Learning registry (preview) to share data. To create a registry, see [Learn how to create a registry](how-to-manage-registries.md).
+
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one.
+
+ > [!IMPORTANT]
+ > The Azure region (location) where you create your workspace must be in the list of supported regions for Azure Machine Learning registry.
+
+- The *environment* and *component* created from the [How to share models, components, and environments](how-to-share-models-pipelines-across-workspaces-with-registries.md) article.
+
+- The Azure CLI and the `ml` extension __or__ the Azure Machine Learning Python SDK v2:
+
+ # [Azure CLI](#tab/cli)
+
+ To install the Azure CLI and extension, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+
+ > [!IMPORTANT]
+ > * The CLI examples in this article assume that you are using the Bash (or compatible) shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about).
+ > * The examples also assume that you have configured defaults for the Azure CLI so that you don't have to specify the parameters for your subscription, workspace, resource group, or location. To set default settings, use the following commands. Replace the following parameters with the values for your configuration:
+ >
+ > * Replace `<subscription>` with your Azure subscription ID.
+ > * Replace `<workspace>` with your Azure Machine Learning workspace name.
+ > * Replace `<resource-group>` with the Azure resource group that contains your workspace.
+ > * Replace `<location>` with the Azure region that contains your workspace.
+ >
+ > ```azurecli
+ > az account set --subscription <subscription>
+ > az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
+ > ```
+ > You can see what your current defaults are by using the `az configure -l` command.
+
+ # [Python SDK](#tab/python)
+
+ To install the Python SDK v2, use the following command:
+
+ ```bash
+ pip install --pre azure-ai-ml
+ ```
+
+
+
+### Clone examples repository
+
+The code examples in this article are based on the `nyc_taxi_data_regression` sample in the [examples repository](https://github.com/Azure/azureml-examples). To use these files on your development environment, use the following commands to clone the repository and change directories to the example:
+
+```bash
+git clone https://github.com/Azure/azureml-examples
+cd azureml-examples
+```
+
+# [Azure CLI](#tab/cli)
+
+For the CLI example, change directories to `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` in your local clone of the [examples repository](https://github.com/Azure/azureml-examples).
+
+```bash
+cd cli/jobs/pipelines-with-components/nyc_taxi_data_regression
+```
+
+# [Python SDK](#tab/python)
+
+For the Python SDK example, use the `nyc_taxi_data_regression` sample from the [examples repository](https://github.com/Azure/azureml-examples). The sample notebook is available in the `sdk/python/assets/assets-in-registry` directory. All the sample YAML files, model training code, and sample data for training and inference are available in `cli/jobs/pipelines-with-components/nyc_taxi_data_regression`. Change to the `sdk/resources/registry` directory and open the notebook if you'd like to step through a notebook to try out the code in this document.
+++
+### Create SDK connection
+
+> [!TIP]
+> This step is only needed when using the Python SDK.
+
+Create a client connection to both the Azure Machine Learning workspace and registry. In the following example, replace the `<...>` placeholder values with the values appropriate for your configuration. For example, your Azure subscription ID, workspace name, registry name, etc.:
+
+```python
+# Imports assumed by this snippet (DefaultAzureCredential is one way to get a credential)
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()
+
+ml_client_workspace = MLClient(credential=credential,
+    subscription_id = "<workspace-subscription>",
+    resource_group_name = "<workspace-resource-group>",
+    workspace_name = "<workspace-name>")
+print(ml_client_workspace)
+
+ml_client_registry = MLClient(credential=credential,
+ registry_name="<REGISTRY_NAME>",
+ registry_location="<REGISTRY_REGION>")
+print(ml_client_registry)
+```
+
+## Create data in registry
+
+The data asset created in this step is used later in this article when submitting a training job.
+
+# [Azure CLI](#tab/cli)
+
+> [!TIP]
+> The same CLI command `az ml data create` can be used to create data in a workspace or registry. Running the command with `--workspace-name` creates the data in a workspace, whereas running it with `--registry-name` creates the data in the registry.
+
+The data source is located in the [examples repository](https://github.com/Azure/azureml-examples) that you cloned earlier. Under the local clone, go to the following directory path: `cli/jobs/pipelines-with-components/nyc_taxi_data_regression`. In this directory, create a YAML file named `data-registry.yml` and use the following YAML as the contents of the file:
+
+```YAML
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: transformed-nyc-taxt-data
+description: Transformed NYC Taxi data created from local folder.
+version: 1
+type: uri_folder
+path: data_transformed/
+```
+
+The `path` value points to the `data_transformed` subdirectory, which contains the data that is shared using the registry.
+
+To create the data in the registry, use the `az ml data create` command. In the following examples, replace `<registry-name>` with the name of your registry.
+
+```azurecli
+az ml data create --file data-registry.yml --registry-name <registry-name>
+```
+
+If you get an error that data with this name and version already exists in the registry, you can either edit the `version` field in `data-registry.yml` or specify a different version on the CLI that overrides the version value in `data-registry.yml`.
+
+```azurecli
+# use shell epoch time as the version
+version=$(date +%s)
+az ml data create --file data-registry.yml --registry-name <registry-name> --set version=$version
+```
+
+> [!TIP]
+> If the `version=$(date +%s)` command doesn't set the `$version` variable in your environment, replace `$version` with a random number.
+
+Save the `name` and `version` of the data from the output of the `az ml data create` command and use them with the `az ml data show` command to view details for the asset.
+
+```azurecli
+az ml data show --name transformed-nyc-taxt-data --version 1 --registry-name <registry-name>
+```
+
+> [!TIP]
+> If you used a different data name or version, replace the `--name` and `--version` parameters accordingly.
+
+ You can also use `az ml data list --registry-name <registry-name>` to list all data assets in the registry.
+
+# [Python SDK](#tab/python)
+
+> [!TIP]
+> The same `MLClient.data.create_or_update()` can be used to create data in either a workspace or a registry depending on the target it has been initialized with. Since you work with both the workspace and registry in this document, you have initialized `ml_client_workspace` and `ml_client_registry` to work with the workspace and registry respectively.
++
+The source data directory `data_transformed` is available in `cli/jobs/pipelines-with-components/nyc_taxi_data_regression/`. Initialize the data object and create the data.
+
+```python
+# Imports assumed by this snippet
+from azure.ai.ml.entities import Data
+from azure.ai.ml.constants import AssetTypes
+
+my_path = "./data_transformed/"
+my_data = Data(path=my_path,
+ type=AssetTypes.URI_FOLDER,
+ description="Transformed NYC Taxi data created from local folder.",
+ name="transformed-nyc-taxt-data",
+ version='1')
+ml_client_registry.data.create_or_update(my_data)
+```
+
+> [!TIP]
+> If you get an error that a data asset with this name and version already exists in the registry, specify a different version for the `version` parameter.
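+If you hit that error, one option is to generate a unique version programmatically, mirroring the epoch-based versioning used in the CLI example above (a minimal sketch):
+
+```python
+import time
+
+# Use epoch seconds as a unique version, then retry the create
+my_data.version = str(int(time.time()))
+ml_client_registry.data.create_or_update(my_data)
+```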
+
+Note down the `name` and `version` of the data from the output and pass them to the `ml_client_registry.data.get()` method to fetch the data from registry.
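+For example:
+
+```python
+# Fetch the data asset back from the registry by name and version
+data_from_registry = ml_client_registry.data.get(name="transformed-nyc-taxt-data", version="1")
+print(data_from_registry.id)
+```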
+
+You can also use `ml_client_registry.data.list()` to list all data assets in the registry.
++
+
+## Create an environment and component in registry
+
+To create an environment and component in the registry, use the steps in the [How to share models, components, and environments](how-to-share-models-pipelines-across-workspaces-with-registries.md) article. The environment and component are used in the training job in the next section.
+
+> [!TIP]
+> You can use an environment and component from the workspace instead of using ones from the registry.
+
+## Run a pipeline job in a workspace using component from registry
+
+When running a pipeline job that uses a component and data from a registry, the *compute* resources are local to the workspace. In the following example, the job uses the Scikit Learn training component and the data asset created in the previous sections to train a model.
+
+> [!NOTE]
+> The key aspect is that this pipeline is going to run in a workspace using training data that isn't in the specific workspace. The data is in a registry that can be used with any workspace in your organization. You can run this training job in any workspace you have access to without having to worry about making the training data available in that workspace.
+
+# [Azure CLI](#tab/cli)
+
+Verify that you are in the `cli/jobs/pipelines-with-components/nyc_taxi_data_regression` directory. Edit the `component` section under the `train_job` section of the `single-job-pipeline.yml` file to refer to the training component, and the `path` under the `training_data` section to refer to the data asset created in the previous sections. The following example shows what the `single-job-pipeline.yml` looks like after editing. Replace `<registry-name>` with the name of your registry:
+
+```YAML
+$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
+type: pipeline
+display_name: nyc_taxi_data_regression_single_job
+description: Single job pipeline to train regression model based on nyc taxi dataset
+
+jobs:
+ train_job:
+ type: command
+ component: azureml://registries/<registry-name>/component/train_linear_regression_model/versions/1
+ compute: azureml:cpu-cluster
+ inputs:
+ training_data:
+ type: uri_folder
+ path: azureml://registries/<registry-name>/data/transformed-nyc-taxt-data/versions/1
+ outputs:
+ model_output:
+ type: mlflow_model
+ test_data:
+```
+
+> [!WARNING]
+> * Before running the pipeline job, confirm that the workspace in which you will run the job is in an Azure region that is supported by the registry in which you created the data.
+> * Confirm that the workspace has a compute cluster with the name `cpu-cluster` or edit the `compute` field under `jobs.train_job.compute` with the name of your compute.
+
+Run the pipeline job with the `az ml job create` command.
+
+```azurecli
+az ml job create --file single-job-pipeline.yml
+```
+
+> [!TIP]
+> If you have not configured the default workspace and resource group as explained in the prerequisites section, you will need to specify the `--workspace-name` and `--resource-group` parameters for the `az ml job create` command to work.
+
+For more information on running jobs, see the following articles:
+
+* [Running jobs (CLI)](./how-to-train-cli.md)
+* [Pipeline jobs with components (CLI)](./how-to-create-component-pipelines-cli.md)
+
+# [Python SDK](#tab/python)
++
+```python
+# Imports assumed by this snippet; train_component_from_registry is the training
+# component fetched from the registry as shown in the shared-components article
+from azure.ai.ml import Input
+from azure.ai.ml.dsl import pipeline
+
+# get the data asset
+data_asset_from_registry = ml_client_registry.data.get(name="transformed-nyc-taxt-data", version="1")
+
+@pipeline()
+def pipeline_with_registered_components(
+ training_data
+):
+ train_job = train_component_from_registry(
+ training_data=training_data,
+ )
+pipeline_job = pipeline_with_registered_components(
+ training_data=Input(type="uri_folder", path=data_asset_from_registry.id"),
+)
+pipeline_job.settings.default_compute = "cpu-cluster"
+print(pipeline_job)
+```
+
+> [!WARNING]
+> * Confirm that the workspace in which you will run this job is in an Azure location that is supported by the registry in which you created the component before you run the pipeline job.
+> * Confirm that the workspace has a compute cluster with the name `cpu-cluster`, or update the compute name using `pipeline_job.settings.default_compute = "<compute-cluster-name>"`.
+
+Run the pipeline job and wait for it to complete.
+
+```python
+pipeline_job = ml_client_workspace.jobs.create_or_update(
+ pipeline_job, experiment_name="sdk_job_data_from_registry" , skip_validation=True
+)
+ml_client_workspace.jobs.stream(pipeline_job.name)
+pipeline_job=ml_client_workspace.jobs.get(pipeline_job.name)
+pipeline_job
+```
+
+> [!TIP]
+> Notice that you are using `ml_client_workspace` to run the pipeline job, whereas you used `ml_client_registry` to create the environment and component.
+
+Since the component used in the training job is shared through a registry, you can submit the job to any workspace that you have access to in your organization, even across different subscriptions. For example, if you have `dev-workspace`, `test-workspace` and `prod-workspace`, you can connect to those workspaces and resubmit the job.
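+For example, a minimal sketch of resubmitting to another workspace (the subscription, resource group, and workspace values are placeholders):
+
+```python
+# Connect to another workspace with the same credential and resubmit the pipeline job
+ml_client_test = MLClient(credential=credential,
+    subscription_id="<test-subscription>",
+    resource_group_name="<test-resource-group>",
+    workspace_name="test-workspace")
+resubmitted_job = ml_client_test.jobs.create_or_update(
+    pipeline_job, experiment_name="sdk_job_data_from_registry"
+)
+```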
+
+For more information on running jobs, see the following articles:
+
+* [Running jobs (SDK)](./how-to-train-sdk.md)
+* [Pipeline jobs with components (SDK)](./how-to-create-component-pipeline-python.md)
+++
+## Share data from workspace to registry
+
+The following steps show how to share an existing data asset from a workspace to a registry.
+
+# [Azure CLI](#tab/cli)
+
+First, create a data asset in the workspace. Make sure that you are in the `cli/assets/data` directory. The `local-folder.yml` located in this directory is used to create a data asset in the workspace. The data specified in this file is available in the `cli/assets/data/sample-data` directory. The following YAML is the contents of the `local-folder.yml` file:
+
+```YAML
+$schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+name: local-folder-example-titanic
+description: Dataset created from local folder.
+type: uri_folder
+path: sample-data/
+```
+
+To create the data asset in the workspace, use the following command:
+
+```azurecli
+az ml data create -f local-folder.yml
+```
+
+For more information on creating data assets in a workspace, see [How to create data assets](how-to-create-data-assets.md).
+
+The data asset created in the workspace can be shared to a registry. From the registry, it can be used in multiple workspaces. You can also change the name and version when sharing the data from workspace to registry. Sharing a data asset from a workspace to a registry uses the `--path` parameter to reference the data asset to be shared. Valid path formats are:
+
+* `azureml://subscriptions/<subscription-id>/resourcegroup/<resource-group-name>/data/<data-asset-name>/versions/<version-number>`
+* `azureml://resourcegroup/<resource-group-name>/data/<data-asset-name>/versions/<version-number>`
+* `azureml://data/<data-asset-name>/versions/<version-number>`
+
+The following example demonstrates using the `--path` parameter to share a data asset. Replace `<registry-name>` with the name of the registry that the data will be shared to. Replace `<resourceGroupName>` with the name of the resource group that contains the Azure Machine Learning workspace where the data asset is registered:
+
+```azurecli
+az ml data create --registry-name <registry-name> --path azureml://resourcegroup/<resourceGroupName>/data/local-folder-example-titanic/versions/1
+```
+
+# [Python SDK](#tab/python)
+
+First, create a data asset in the workspace. Make sure that you are in the `sdk/assets/data` directory. The data is available in the `sdk/assets/data/sample-data` directory.
+
+```python
+my_path = "./sample-data/"
+my_data = Data(path=my_path,
+ type=AssetTypes.URI_FOLDER,
+ description="",
+ name="titanic-dataset",
+ version='1')
+ml_client_workspace.data.create_or_update(my_data)
+
+```
+
+For more information on creating data assets in a workspace, see [How to create data assets](how-to-create-data-assets.md).
+
+The data asset created in the workspace can be shared to a registry, and from there it can be used in multiple workspaces. You can also change the name and version when sharing the data from the workspace to the registry.
+
+```python
+# Fetch the data from the workspace
+data_in_workspace = ml_client_workspace.data.get(name="titanic-dataset", version="1")
+print("data from workspace:\n\n", data_in_workspace)
+
+# Change the format to one that the registry understands:
+# Note the asset ID when printing the `data_ready_to_copy` object.
+data_ready_to_copy = ml_client_workspace.data._prepare_to_copy(data_in_workspace)
+print("\n\ndata ready to copy:\n\n", data_ready_to_copy)
+
+# Copy the data from the workspace to the registry
+ml_client_registry.data.create_or_update(data_ready_to_copy).wait()
+```
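+To verify the copy, you can fetch the asset back from the registry (a minimal sketch; the name and version match the asset shared above):
+
+```python
+# Confirm the shared data asset now resolves from the registry
+copied_data = ml_client_registry.data.get(name="titanic-dataset", version="1")
+print(copied_data.id)
+```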
+++++
+## Next steps
+
+* [How to create and manage registries](how-to-manage-registries.md)
+* [How to manage environments](how-to-manage-environments-v2.md)
+* [How to train models](how-to-train-cli.md)
+* [How to create pipelines using components](how-to-create-component-pipeline-python.md)
machine-learning How To Share Models Pipelines Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-models-pipelines-across-workspaces-with-registries.md
description: Learn how practice cross-workspace MLOps and collaborate across tea
--++ Last updated 09/21/2022
-# Share models, components and environments across workspaces with registries (preview)
+# Share models, components, and environments across workspaces with registries (preview)
Azure Machine Learning registry (preview) enables you to collaborate across workspaces within your organization. Using registries, you can share models, components, and environments.
ml_client_registry.components.create_or_update(train_model)
Note down the `name` and `version` of the component from the output and pass them to the `ml_client_registry.component.get()` method to fetch the component from registry.
-You can also use `ml_client_registry.component.list()` to list all components in the registry or browse all components in the Azure Machine Learning Studio UI. Make sure you navigate to the global UI and look for the Registries hub.
+You can also use `ml_client_registry.component.list()` to list all components in the registry or browse all components in the Azure Machine Learning studio UI. Make sure you navigate to the global UI and look for the Registries hub.
Note down the `name` and `version` of the model from the output of the `az ml mo
az ml model show --name <model_name> --version <model_version> --registry-name <registry-name> ```
-You can also use `az ml model list --registry-name <registry-name>` to list all models in the registry or browse all components in the Azure Machine Learning Studio UI. Make sure you navigate to the global UI and look for the Registries hub.
+You can also use `az ml model list --registry-name <registry-name>` to list all models in the registry or browse all components in the Azure Machine Learning studio UI. Make sure you navigate to the global UI and look for the Registries hub.
# [Python SDK](#tab/python)
Note down the `name` and `version` of the model from the output and use them wit
mlflow_model_from_registry = ml_client_registry.models.get(name="nyc-taxi-model", version=str(1)) print(mlflow_model_from_registry) ```
-You can also use `ml_client_registry.models.list()` to list all models in the registry or browse all components in the Azure Machine Learning Studio UI. Make sure you navigate to the global UI and look for the Registries hub.
+You can also use `ml_client_registry.models.list()` to list all models in the registry or browse all components in the Azure Machine Learning studio UI. Make sure you navigate to the global UI and look for the Registries hub.
ml_client_workspace.online_endpoints.begin_delete(name=online_endpoint_name)
## Next steps
+* [How to share data assets using registries](how-to-share-data-across-workspaces-with-registries.md)
* [How to create and manage registries](how-to-manage-registries.md)
* [How to manage environments](how-to-manage-environments-v2.md)
* [How to train models](how-to-train-cli.md)
machine-learning Resource Limits Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-limits-capacity.md
This section lists basic limits and throttling thresholds in Azure Machine Learn
| Number of input datasets |200 |
| Number of output datasets |20 |
+## Custom environments
+| Limit | Value |
+| | |
+| Number of files in Docker build context | 100 |
+| Total files size in Docker build context | 1 MB |
+
## Metrics

| Limit | Value |
| | |
machine-learning How To Configure Environment V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-environment-v1.md
Create one anytime from within your Azure Machine Learning workspace. Provide ju
To learn more about compute instances, including how to install packages, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-manage-compute-instance.md). > [!TIP]
-> To prevent incurring charges for an unused compute instance, [stop the compute instance](../how-to-create-manage-compute-instance.md#manage). Or [enable idle shutdown](../how-to-create-manage-compute-instance.md#enable-idle-shutdown-preview) for the compute instance.
+> To prevent incurring charges for an unused compute instance, [stop the compute instance](../how-to-create-manage-compute-instance.md#manage). Or [enable idle shutdown](../how-to-create-manage-compute-instance.md#enable-idle-shutdown) for the compute instance.
In addition to a Jupyter Notebook server and JupyterLab, you can use compute instances in the [integrated notebook feature inside of Azure Machine Learning studio](../how-to-run-jupyter-notebooks.md).
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
1. Certs signed by a CA. This can be a self-signed CA or even a public one. In this case we need the root CA certificate (refer to instructions on [preparing SSL certificates for production](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLCertWithCA.html)), and all intermediaries (if applicable).
- Optionally, if you want to implement client-to-node certificate authentication as well, you need to provide the certificates in the same format when creating the hybrid cluster. See Azure CLI sample below - the certificates are provided in the `--client-certificates` parameter. This will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (i.e. you do not need to edit cassandra.yaml settings).
+ Optionally, if you want to implement client-to-node certificate authentication or mutual Transport Layer Security (mTLS) as well, you need to provide the certificates in the same format as when creating the hybrid cluster. See Azure CLI sample below - the certificates are provided in the `--client-certificates` parameter. This will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (i.e. you do not need to edit cassandra.yaml settings). Once applied, your cluster will require Cassandra to verify the certificates when a client connects (see `require_client_auth: true` in Cassandra [client_encryption_options](https://cassandra.apache.org/doc/latest/cassandra/configuration/cass_yaml_file.html#client_encryption_options)).
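On the client side, the connection must then present a certificate the cluster trusts. The following is a minimal sketch using the Python `cassandra-driver` package; the contact point, credentials, and certificate file paths are placeholders, not values from this article:

```python
from ssl import PROTOCOL_TLS_CLIENT, SSLContext

from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster

ssl_context = SSLContext(PROTOCOL_TLS_CLIENT)
ssl_context.check_hostname = False  # contact points are raw IPs in this sketch
ssl_context.load_verify_locations("root-ca.pem")  # CA that signed the cluster certificates
ssl_context.load_cert_chain(
    certfile="client-cert.pem",  # client certificate presented for mTLS
    keyfile="client-key.pem",
)

auth_provider = PlainTextAuthProvider(username="cassandra", password="<password>")
cluster = Cluster(
    ["<node-ip>"], port=9042, ssl_context=ssl_context, auth_provider=auth_provider
)
session = cluster.connect()
```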
> [!NOTE]
> The value of the `delegatedManagementSubnetId` variable you will supply below is exactly the same as the value of `--scope` that you supplied in the command above:
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
Configuring client certificates is optional. In general, there are two ways of c
- Self signed certs. This means a private and public (no CA) certificate for each node - in this case we need all public certificates.
- Certs signed by a CA. This can be a self-signed CA or even a public one. In this case we need the root CA certificate (refer to [instructions on preparing SSL certificates](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLCertWithCA.html) for production), and all intermediaries (if applicable).
-If you want to implement client-to-node certificate authentication, you need to provide the certificates via Azure CLI. The below command will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (i.e. you do not need to edit `cassandra.yaml` settings).
+If you want to implement client-to-node certificate authentication or mutual Transport Layer Security (mTLS), you need to provide the certificates via Azure CLI. The below command will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (i.e. you do not need to edit `cassandra.yaml` settings). Once applied, your cluster will require Cassandra to verify the certificates when a client connects (see `require_client_auth: true` in Cassandra [client_encryption_options](https://cassandra.apache.org/doc/latest/cassandra/configuration/cass_yaml_file.html#client_encryption_options)).
```azurecli-interactive
resourceGroupName='<Resource_Group_Name>'
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
Configuring client certificates is optional. In general, there are two ways of c
- Self signed certs. This means a private and public (no CA) certificate for each node - in this case we need all public certificates.
- Certs signed by a CA. This can be a self-signed CA or even a public one. In this case we need the root CA certificate (refer to [instructions on preparing SSL certificates](https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/configuration/secureSSLCertWithCA.html) for production), and all intermediaries (if applicable).
-If you want to implement client-to-node certificate authentication, you need to provide the certificates via Azure CLI. The below command will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (i.e. you do not need to edit `cassandra.yaml` settings).
+If you want to implement client-to-node certificate authentication or mutual Transport Layer Security (mTLS), you need to provide the certificates via Azure CLI. The below command will upload and apply your client certificates to the truststore for your Cassandra Managed Instance cluster (i.e. you do not need to edit `cassandra.yaml` settings). Once applied, your cluster will require Cassandra to verify the certificates when a client connects (see `require_client_auth: true` in Cassandra [client_encryption_options](https://cassandra.apache.org/doc/latest/cassandra/configuration/cass_yaml_file.html#client_encryption_options)).
```azurecli-interactive
resourceGroupName='<Resource_Group_Name>'
migrate Common Questions Discovery Dependency Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-discovery-dependency-analysis.md
ms. Previously updated : 12/13/2022 Last updated : 02/28/2023+ # Discovery and dependency analysis - Common questions
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
ms. Previously updated : 12/12/2022 Last updated : 03/06/2023
migrate Concepts Azure Sql Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-azure-sql-assessment-calculation.md
description: Learn about Azure SQL assessments in Azure Migrate Discovery and as
Previously updated : 02/24/2023 Last updated : 03/15/2023
Azure SQL Database sizing | **Purchase model** | Defaulted to *vCore*.
Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*.
High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) of the Target location. In the unlikely event that the chosen Target location doesn't yet have such a pair, the specified Target location itself is chosen as the default disaster recovery region.
High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
-High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound Internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to#cloud-witness) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound Internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
+High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound Internet access from Azure VMs. This allows the use of [Cloud Witness](https://learn.microsoft.com/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?view=azuresql&tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound Internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out.

[Review the best practices](best-practices-assessment.md) for creating an assessment with Azure Migrate.
High availability and disaster recovery properties | **Async commit mode intent*
Readiness checks for different migration strategies:

#### Recommended deployment, Instances to SQL Server on Azure VM, Instances to Azure SQL MI, Database to Azure SQL DB:
-Azure SQL readiness for SQL instances and databases is based on a feature compatibility check with SQL Server on Azure VM, [Azure SQL Database](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules), and [Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-sql-managed-instance-assessment-rules):
+Azure SQL readiness for SQL instances and databases is based on a feature compatibility check with SQL Server on Azure VM, [Azure SQL Database](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql), and [Azure SQL Managed Instance](https://learn.microsoft.com/azure/azure-sql/migration-guides/managed-instance/sql-server-to-sql-managed-instance-assessment-rules?view=azuresql):
1. The Azure SQL assessment considers the SQL Server instance features that are currently used by the source SQL Server workloads (SQL Agent jobs, linked servers, etc.) and the user databases schemas (tables, views, triggers, stored procedures etc.) to identify compatibility issues.
1. If there are no compatibility issues found, the instance is marked as **Ready** for the target deployment type (SQL Server on Azure VM or Azure SQL Database or Azure SQL Managed Instance)
1. If there are non-critical compatibility issues, such as deprecated or unsupported features that don't block the migration to a specific target deployment type, the instance is marked as **Ready** (hyperlinked) with **warning** details and recommended remediation guidance. This includes the situation where the source data has an Always On Availability Group configuration and the required replicas exceed those available with the specific target deployment type.
migrate Concepts Dependency Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-dependency-visualization.md
ms. Previously updated : 12/12/2022 Last updated : 03/08/2023
migrate Concepts Migration Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md
Previously updated : 07/14/2022 Last updated : 02/28/2023
migrate How To Create Azure Sql Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md
description: Learn how to assess SQL instances for migration to Azure SQL Manage
Previously updated : 02/24/2023 Last updated : 03/15/2023
Run an assessment as follows:
Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*.
High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region.
High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
- High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to.md?view=azuresql&tabs=powershell#cloud-witness) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
+ High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](https://learn.microsoft.com/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?view=azuresql&tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out. 1. Select **Save** if you made changes.
migrate How To Create Group Machine Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-group-machine-dependencies.md
ms. Previously updated : 10/05/2021 Last updated : 03/08/2023
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
ms. Previously updated : 03/03/2023 Last updated : 03/08/2023
+ms.cutom: engagement-fy23
# Support matrix for Hyper-V assessment
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
ms. Previously updated : 02/24/2023 Last updated : 03/08/2023+ # Support matrix for VMware discovery
migrate Migrate V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-v1.md
ms. Previously updated : 05/02/2022 Last updated : 03/08/2023
migrate Onboard To Azure Arc With Azure Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/onboard-to-azure-arc-with-azure-migrate.md
description: Onboard on-premises servers in VMware virtual environment to Azure
Previously updated : 10/10/2022 Last updated : 01/31/2023
migrate Troubleshoot Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-dependencies.md
ms. Previously updated : 10/17/2022 Last updated : 03/08/2023
migrate Troubleshoot Webapps Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-webapps-migration.md
description: Troubleshoot web apps migration issues
Previously updated : 12/01/2022 Last updated : 02/28/2023
migrate Tutorial Assess Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md
description: Learn how to create assessment for Azure SQL in Azure Migrate
Previously updated : 08/05/2022 Last updated : 03/15/2023
Run an assessment as follows:
Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*. High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region. High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
- High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to#cloud-witness) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
+ High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](https://learn.microsoft.com/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?view=azuresql&tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out. 1. Select **Save** if you made changes.
migrate Tutorial Assess Vmware Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vmware-solution.md
ms. Previously updated : 5/2/2022 Last updated : 03/15/2023 #Customer intent: As a VMware VM admin, I want to assess my VMware VMs in preparation for migration to Azure VMware Solution (AVS)
Run an assessment as follows:
Azure SQL Database sizing | **Compute tier** | Defaulted to *Provisioned*. High availability and disaster recovery properties | **Disaster recovery region** | Defaulted to the [cross-region replication pair](../reliability/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies) of the Target Location. In the unlikely event that the chosen Target Location doesn't yet have such a pair, the specified Target Location itself is chosen as the default disaster recovery region. High availability and disaster recovery properties | **Multi-subnet intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you want asynchronous data replication where some replication delays are tolerable. This allows higher durability using geo-redundancy. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you desire the data replication to be synchronous and no data loss due to replication delay is allowable. This setting allows assessment to leverage built-in high availability options in Azure SQL Databases and Azure SQL Managed Instances, and availability zones and zone-redundancy in Azure Virtual Machines to provide higher availability. In the event of failover, no data is lost.
- High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to#cloud-witness) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
+ High availability and disaster recovery properties | **Internet Access** | Defaulted to Available.<br/><br/> Select **Available** if you allow outbound internet access from Azure VMs. This allows the use of [Cloud Witness](https://learn.microsoft.com/azure/azure-sql/virtual-machines/windows/hadr-cluster-quorum-configure-how-to?view=azuresql&tabs=powershell) which is the recommended approach for Windows Server Failover Clusters in Azure Virtual Machines. <br/><br/> Select **Not available** if the Azure VMs have no outbound internet access. This requires the use of a Shared Disk as a witness for Windows Server Failover Clusters in Azure Virtual Machines.
High availability and disaster recovery properties | **Async commit mode intent** | Defaulted to Disaster recovery. <br/><br/> Select **Disaster recovery** if you're using asynchronous commit availability mode to enable higher durability for the data without affecting performance. In the event of failover, data that hasn't yet been replicated may be lost. <br/><br/> Select **High availability** if you're using asynchronous commit data availability mode to improve availability and scale out read traffic. This setting allows assessment to leverage built-in high availability features in Azure SQL Databases, Azure SQL Managed Instances, and Azure Virtual Machines to provide higher availability and scale out. 1. Select **Save** if you make changes.
migrate Tutorial Assess Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps.md
description: Learn how to create assessment for Azure App Service in Azure Migra
Previously updated : 06/27/2022 Last updated : 02/28/2023
migrate Tutorial Modernize Asp Net Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-modernize-asp-net-aks.md
Previously updated : 01/01/2023 Last updated : 02/28/2023
migrate Tutorial Modernize Asp Net Appservice Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-modernize-asp-net-appservice-code.md
Previously updated : 08/09/2022 Last updated : 02/28/2023
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 02/16/2023 Last updated : 03/03/2023
mysql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connectivity-architecture.md
As client connects to the database, the connection string to the server resolves
The gateway service is hosted on group of stateless compute nodes sitting behind an IP address, which your client would reach first when trying to connect to an Azure Database for MySQL server.
-As part of ongoing service maintenance, we'll periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MySQL servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers are planned for decommissioning. Before decommissioning a gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal, three months in advance before decommissioning. The decommissioning of gateways can impact the connectivity to your servers if
+As part of ongoing service maintenance, we'll periodically refresh compute hardware hosting the gateways to ensure we provide the most secure and performant experience. When the gateway hardware is refreshed, a new ring of the compute nodes is built out first. This new ring serves the traffic for all the newly created Azure Database for MySQL servers and it will have a different IP address from older gateway rings in the same region to differentiate the traffic. Once the new ring is fully functional, the older gateway hardware serving existing servers are planned for decommissioning. Before decommissioning a gateway hardware, customers running their servers and connecting to older gateway rings will be notified via email and in the Azure portal. The decommissioning of gateways can impact the connectivity to your servers if
* You hard code the gateway IP addresses in the connection string of your application. It is **not recommended**. You should use fully qualified domain name (FQDN) of your server in the format `<servername>.mysql.database.azure.com`, in the connection string for your application. * You don't update the newer gateway IP addresses in the client-side firewall to allow outbound traffic to be able to reach our new gateway rings.
The following table lists the gateway IP addresses of the Azure Database for MyS
| **Region name** | **Gateway IP addresses** | **Gateway IP addresses (decommissioning)** | **Gateway IP addresses (decommissioned)** | |||--|--|
-| Australia Central | 20.36.105.0 | | |
+| Australia Central | 20.36.105.32 | 20.36.105.0 | |
| Australia Central2 | 20.36.113.0 | | | | Australia East | 13.75.149.87, 40.79.161.1 | | | | Australia South East | 13.73.109.251, 13.77.49.32, 13.77.48.10 | | | | Brazil South | 191.233.201.8, 191.233.200.16 | | 104.41.11.5 | | Canada Central | 13.71.168.32|| 40.85.224.249, 52.228.35.221 |
-| Canada East | 40.86.226.166, 52.242.30.154 | | |
+| Canada East | 40.86.226.166, 40.69.105.32 | 52.242.30.154 | |
| Central US | 23.99.160.139, 52.182.136.37, 52.182.136.38 | 13.67.215.62 | |
-| China East | 139.219.130.35 | | |
+| China East | 52.130.112.139 | 139.219.130.35 | |
| China East 2 | 40.73.82.1, 52.130.120.89 | | China East 3 | 52.131.155.192 |
-| China North | 139.219.15.17 | | |
+| China North | 52.130.128.89 | 139.219.15.17 | |
| China North 2 | 40.73.50.0 | | | China North 3 | 52.131.27.192 | | | East Asia | 13.75.33.20, 52.175.33.150, 13.75.33.20, 13.75.33.21 | | |
payment-hsm Certification Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/certification-compliance.md
tags: azure-resource-manager
Previously updated : 01/25/2022 Last updated : 03/25/2023 # Certification and compliance
-The Azure Payment HSM service is PCI PIN, PCI DSS, and PCI 3DS compliant.
+Azure maintains the largest compliance portfolio in the industry. For details, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/), Each offering description provides an up to-date-scope statement and links to useful downloadable resources.
-- [Azure - PCI PIN - 2022 Package](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3?command=Download&downloadType=Document&downloadId=52eb9daa-f254-4914-aec6-46d40287a106) ΓÇô Microsoft Azure PCI PIN Attestation of Compliance (AOC) report for Azure Payment HSM.-- [Azure - PCI DSS - 2022 Package](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3?command=Download&downloadType=Document&downloadId=b9cc20e0-38db-4953-aa58-9fb5cce26cc2&tab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb&docTab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb_PCI_DSS) ΓÇô Contains the official PCI DSS certification reports and shared responsibility matrices. The PCI DSS AOC includes the full list of PCI DSS certified Azure offerings and regions. Customers can use Azure's PCI DSS AOC during their PCI DSS assessment.-- [Azure - PCI 3DS - 2022 Package](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3?command=Download&downloadType=Document&downloadId=45ade37c-753c-4392-8321-adc49ecad12c&tab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb&docTab=7027ead0-3d6b-11e9-b9e1-290b1eb4cdeb_PCI_DSS) ΓÇô Contains the official PCI 3DS certification report, shared responsibility matrix, and whitepaper. The PCI 3DS AOC includes the full list of PCI 3DS certified Azure offerings and regions. Customers can use AzureΓÇÖs PCI 3DS AOC during their PCI 3DS assessment.
+Azure payment HSM meets following compliance standards:
+
+- PCI DSS
+- PCI PIN
+- PCI 3DS
+- CSA STAR Certification
+- CSA STAR Attestation
+- ISO 20000-1:2018
+- ISO 22301:2019
+- ISO 27001:2013
+- ISO 27017:2015
+- ISO 27018:2019
+- ISO 27701:2019
+- ISO 9001:2015
+- SOC 1, 2, 3
+- Germany C5
+
+To download latest certification and attestation reports, please go to [Service Trust Portal Home Page (microsoft.com)](https://servicetrust.microsoft.com/ViewPage/HomePageVNext)
+
+For example, the latest PCI certification reports and shared responsibility matrices are:
+- [Azure PCI PIN V3.1](https://servicetrust.microsoft.com/DocumentPage/52eb9daa-f254-4914-aec6-46d40287a106) (2022-09-16)
+- [Azure PCI DSS V4.0](https://servicetrust.microsoft.com/DocumentPage/3be58cb9-de55-426b-9c3d-0ba90dd29572) (2023-03-07)
+- [Azure PCI 3DS V1.0](https://servicetrust.microsoft.com/DocumentPage/a9fe4984-3c73-4abf-bf88-a197c3821690) (2023-03-07)
Thales payShield 10K HSMs are certified to FIPS 140-2 Level 3 and PCI HSM v3.
payment-hsm Create Payment Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-payment-hsm.md
In this tutorial, you learn how to:
- You must register the "Microsoft.HardwareSecurityModules" and "Microsoft.Network" resource providers, as well as the Azure Payment HSM features. Steps for doing so are at [Register the Azure Payment HSM resource provider and resource provider features](register-payment-hsm-resource-providers.md).
+ > [!WARNING]
+ > You must apply the "FastPathEnabled" feature flag to **every** subscription ID, and add the "fastpathenabled" tag to **every** virtual network. For more details, see [Fastpathenabled](fastpathenabled.md).
+ To quickly ascertain if the resource providers and features are already registered, use the Azure CLI [az provider show](/cli/azure/provider#az-provider-show) command. (You will find the output of this command more readable if you display it in table-format.) ```azurecli-interactive
In this tutorial, you learn how to:
- You must register the "Microsoft.HardwareSecurityModules" and "Microsoft.Network" resource providers, as well as the Azure Payment HSM features. Steps for doing so are at [Register the Azure Payment HSM resource provider and resource provider features](register-payment-hsm-resource-providers.md).
+ > [!WARNING]
+ > You must apply the "FastPathEnabled" feature flag to **every** subscription ID, and add the "fastpathenabled" tag to **every** virtual network. For more details, see [Fastpathenabled](fastpathenabled.md).
+ To quickly ascertain if the resource providers and features are already registered, use the Azure PowerShell [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) cmdlet: ```azurepowershell-interactive
payment-hsm Deployment Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/deployment-scenarios.md
tags: azure-resource-manager
Previously updated : 12/01/2022 Last updated : 03/25/2023
payment-hsm Fastpathenabled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/fastpathenabled.md
+
+ Title: Azure Payment HSM "fastpathenabled" feature flag and tag
+description: The "fastpathenabled" feature flag and tag, as it relates to Azure Payment HSM and affiliated subscriptions and virtual networks
+++
+tags: azure-resource-manager
+++ Last updated : 03/25/2023++++
+# Fastpathenabled
+
+Azure Payment HSM uses the term "Fastpathenabled" in two related but distinct ways:
+
+- "FastPathEnabled" is an Azure Feature Exposure Control (AFEC) flag. It must be applied to **every** subscription ID that wants to access to Azure Payment HSM.
+- "fastpathenabled" (always lowercased) is a virtual network tag. It must be added to the virtual network hosting the payment HSM's delegated subnet, as well as to **every** peered VNet requiring connectivity to the payment HSM.
+
+Adding the ΓÇ£FastPathEnabledΓÇ¥ feature flag and enabling the ΓÇ£fastpathenabledΓÇ¥ tag don't cause any downtime.
+
+### Subscriptions
+
+The "FastPathEnabled" feature flag must be added/registered to all subscriptions IDs that need access to Azure Payment HSM. To apply the "FastPathEnabled" feature flag, see [Register the resource providers and features](register-payment-hsm-resource-providers.md).
+
+> [!IMPORTANT]
+> After registering the "FastPathEnabled" feature flag, you **must** contact the [Azure Payment HSM support team](support-guide.md#microsoft-support) team to have your registration approved. In your message to Microsoft support, include the subscription IDs of **every** subscription that needs access to Azure Payment HSM.
+
+### Virtual networks
+
+The "fastpathenabled" tag must be added to every virtual network connecting to the payment HSM's delegated subnet. In a Hub and Spoke topology, the "fastpathenabled" tag must be added to both the central Hub VNet and the peered Spoke VNet containing the payment HSM.
+
+The "fastpathenabled" tag isn't required on nondirectly peered VNets reaching the Payment HSM's VNet via a Central hub.
+
+> [!WARNING]
+> Adding the "fastpathenabled" tag through the Azure portal is insufficient—it must be done from the commandline. To do so, follow the steps outlined in [How to peer Azure Payment HSM virtual networks](peer-vnets.md?tabs=azure-cli).
+
+### Virtual Network NAT scenario
+
+For a Virtual Network NAT scenario, you must add the "fastpathenabled" tag with a value of `True` when creating the NAT gateway (not after the NAT gateway is created).
+
+## Next steps
+
+- Learn more about [Azure Payment HSM](overview.md)
+- [Register the resource providers and features](register-payment-hsm-resource-providers.md)
+- [How to peer Azure Payment HSM virtual networks](peer-vnets.md?tabs=azure-cli)
+- [Get started with Azure Payment HSM](getting-started.md)
+- [Create a payment HSM](create-payment-hsm.md)
+- Read the [frequently asked questions](faq.yml)
payment-hsm Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-cli.md
ms.devlang: azurecli Previously updated : 09/12/2022 Last updated : 03/25/2023 # Quickstart: Create an Azure Payment HSM with the Azure CLI
This article describes how to create, update, and delete an Azure Payment HSM by
- You must register the "Microsoft.HardwareSecurityModules" and "Microsoft.Network" resource providers, as well as the Azure Payment HSM features. Steps for doing so are at [Register the Azure Payment HSM resource provider and resource provider features](register-payment-hsm-resource-providers.md).
- To quickly ascertain if the resource providers and features are already registered, use the Azure CLI [az provider show](/cli/azure/provider#az-provider-show) command. (You will find the output of this command more readable if you display it in table-format.)
+ > [!WARNING]
+ > You must apply the "FastPathEnabled" feature flag to **every** subscription ID, and add the "fastpathenabled" tag to **every** virtual network. For more information, see [Fastpathenabled](fastpathenabled.md).
+
+ To quickly ascertain if the resource providers and features are already registered, use the Azure CLI [az provider show](/cli/azure/provider#az-provider-show) command. (The output of this command is more readable if you display it in table-format.)
```azurecli-interactive az provider show --namespace "Microsoft.HardwareSecurityModules" -o table
To verify that the VNet and subnet were created correctly, use the Azure CLI [az
az network vnet show -n "myVNet" -g "myResourceGroup" ```
-Make note of the value returned as "id", as you will need it for the next step. The "id" will be in the format:
+Make note of the value returned as `id`, as it is used in the next step. The `id` is in the format:
```json "id": "/subscriptions/<subscriptionID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/myPHSMSubnet",
To see your payment HSM and its properties, use the Azure CLI [az dedicated-hsm
az dedicated-hsm show --resource-group "myResourceGroup" --name "myPaymentHSM" ```
-To list all of your payment HSMs, use the [az dedicated-hsm list](/cli/azure/dedicated-hsm#az-dedicated-hsm-list) command. (You will find the output of this command more readable if you display it in table-format.)
+To list all of your payment HSMs, use the [az dedicated-hsm list](/cli/azure/dedicated-hsm#az-dedicated-hsm-list) command. (The output of this command is more readable if you display it in table-format.)
```azurecli-interactive az dedicated-hsm list --resource-group "myResourceGroup" -o table
az dedicated-hsm delete --name "myPaymentHSM" -g "myResourceGroup"
## Next steps
-In this quickstart, you created a payment HSM, viewed and updated its properties, and deleted it. To learn more about Payment HSM and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a payment HSM, viewed and updated its properties, and deleted it. To learn more about Payment HSM and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Payment HSM](overview.md) - Find out how to [get started with Azure Payment HSM](getting-started.md)
payment-hsm Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-powershell.md
This article describes how you can create an Azure Payment HSM using the [Az.Ded
- You must register the "Microsoft.HardwareSecurityModules" and "Microsoft.Network" resource providers, as well as the Azure Payment HSM features. Steps for doing so are at [Register the Azure Payment HSM resource provider and resource provider features](register-payment-hsm-resource-providers.md).
+ > [!WARNING]
+ > You must apply the "FastPathEnabled" feature flag to **every** subscription ID, and add the "fastpathenabled" tag to **every** virtual network. For more information, see [Fastpathenabled](fastpathenabled.md).
+ To quickly ascertain if the resource providers and features are already registered, use the Azure PowerShell [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) cmdlet: ```azurepowershell-interactive
To verify that the VNet was created correctly, use the Azure PowerShell [Get-AzV
Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myResourceGroup" ```
-Make note of the value returned as "Id", as you will need it for the next step. The "Id" will be in the format:
+Make note of the value returned as `Id`, as it is used in the next step. The `Id` is in the format:
```json "Id": "/subscriptions/<subscriptionID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/myPHSMSubnet",
To create a payment HSM, use the [New-AzDedicatedHsm](/powershell/module/az.dedi
New-AzDedicatedHsm -Name "myPaymentHSM" -ResourceGroupName "myResourceGroup" -Location "East US" -Sku "payShield10K_LMK1_CPS60" -StampId "stamp1" -SubnetId "<subnet-id>" ```
-The output of the payment HSM creation will look like this:
+The output of payment HSM creation looks like this:
```Output Name Provisioning State SKU Location
Remove-AzDedicatedHsm -Name "myPaymentHSM" -ResourceGroupName "myResourceGroup"
## Next steps
-In this quickstart, you created a payment HSM, viewed and updated its properties, and deleted it. To learn more about Payment HSM and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a payment HSM, viewed and updated its properties, and deleted it. To learn more about Payment HSM and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Payment HSM](overview.md) - Find out how to [get started with Azure Payment HSM](getting-started.md)
payment-hsm Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-template.md
This article describes how to create a payment HSM with the host and management
- You must register the "Microsoft.HardwareSecurityModules" and "Microsoft.Network" resource providers, as well as the Azure Payment HSM features. Steps for doing so are at [Register the Azure Payment HSM resource provider and resource provider features](register-payment-hsm-resource-providers.md).
+ > [!WARNING]
+ > You must apply the "FastPathEnabled" feature flag to **every** subscription ID, and add the "fastpathenabled" tag to **every** virtual network. For more details, see [Fastpathenabled](fastpathenabled.md).
+ To quickly ascertain if the resource providers and features are already registered, use the Azure CLI [az provider show](/cli/azure/provider#az-provider-show) command. (You will find the output of this command more readable if you display it in table-format.) ```azurecli-interactive
payment-hsm Register Payment Hsm Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/register-payment-hsm-resource-providers.md
Previously updated : 09/12/2022 Last updated : 02/25/2023 # Register the Azure Payment HSM resource providers and resource provider features
az provider register --namespace "Microsoft.HardwareSecurityModules"
az feature registration create --namespace "Microsoft.HardwareSecurityModules" --name "AzureDedicatedHsm" ```
-You must also register the "Microsoft.Network" resource provider and the "FastPathEnabled" feature.
+You must also register the "Microsoft.Network" resource provider and the "FastPathEnabled" Azure Feature Exposure Control (AFEC) flag. For more information on the "FastPathEnabled" feature flag, see [Fathpathenabled](fastpathenabled.md).
```azurecli-interactive az provider register --namespace "Microsoft.Network"
az feature registration create --namespace "Microsoft.Network" --name "FastPathE
``` > [!IMPORTANT]
-> After registering the "FastPathEnabled" feature, you **must** contact the [Azure Payment HSM support team](support-guide.md#microsoft-support) team to have your registration approved. In your message to Microsoft support, include your subscription ID.
+> After registering the "FastPathEnabled" feature flag, you **must** contact the [Azure Payment HSM support team](support-guide.md#microsoft-support) team to have your registration approved. In your message to Microsoft support, include your subscription ID. If multiple subsciptions must connect with the payment HSM, you must include **all** the subscriopts IDs.
You can verify that your registrations are complete with the Azure CLI [az provider show](/cli/azure/provider#az-provider-show) command. (You will find the output of this command more readable if you display it in table-format.)
Register-AzResourceProvider -ProviderNamespace Microsoft.HardwareSecurityModules
Register-AzProviderFeature -FeatureName "AzureDedicatedHsm" -ProviderNamespace Microsoft.HardwareSecurityModules ```
-You must also register the "Microsoft.Network" resource provider and the "FastPathEnabled" feature.
+You must also register the "Microsoft.Network" resource provider and the "FastPathEnabled" Azure Feature Exposure Control (AFEC) flag. For more information on the "FastPathEnabled" feature flag, see [Fathpathenabled](fastpathenabled.md).
```azurepowershell-interactive Register-AzResourceProvider -ProviderNamespace Microsoft.Network
Register-AzProviderFeature -FeatureName "FastPathEnabled" -ProviderNamespace Mic
``` > [!IMPORTANT]
-> After registering the "FastPathEnabled" feature, you **must** contact the [Azure Payment HSM support team](support-guide.md#microsoft-support) team to have your registration approved. In your message to Microsoft support, include your subscription ID.
+> After registering the "FastPathEnabled" feature flag, you **must** contact the [Azure Payment HSM support team](support-guide.md#microsoft-support) team to have your registration approved. In your message to Microsoft support, include the subscription IDs of **every** subscription you want to connect to the payment HSM.
You can verify that your registrations are complete with the Azure PowerShell [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) cmdlet:
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
Last updated 10/21/2022
-# Read replicas in Azure Database for PostgreSQL - Flexible Server Preview
+# Read replicas in Azure Database for PostgreSQL - Flexible Server
-
-> [!NOTE]
-> Read replicas for PostgreSQL Flexible Server is currently in preview.
The read replica feature allows you to replicate data from an Azure Database for PostgreSQL server to a read-only replica. Replicas are updated **asynchronously** with the PostgreSQL engine native physical replication technology. Streaming replication by using replication slots is the default operation mode. When necessary, file-based log shipping is used to catch up. You can replicate from the primary server to up to five replicas.
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
Last updated 10/14/2022
-# Create and manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure portal Preview
+# Create and manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure portal
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-> [!NOTE]
-> Read replicas for PostgreSQL Flexible Server is currently in preview.
- In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md). ## Prerequisites
postgresql How To Read Replicas Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-rest-api.md
Last updated 12/06/2022
-# Create and manage read replicas from the Azure REST API Preview
+# Create and manage read replicas from the Azure REST API
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL by using the REST API [Azure REST API](/rest/api/azure/). To learn more about read replicas, see the [overview](concepts-read-replicas.md).
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview-postgres-choose-server-options.md
The main differences between these options are listed in the following table:
| **OS and PostgreSQL patching** | - Customer managed | - Flexible Server ΓÇô Automatic with optional customer managed window | | **High availability** | - Customers architect, implement, test, and maintain high availability. Capabilities might include clustering, replication etc. | - Flexible Server: built-in | | **Zone Redundancy** | - Azure VMs can be set up to run in different availability zones. For an on-premises solution, customers must create, manage, and maintain their own secondary data center. | - Flexible Server: Yes |
-| **Hybrid Scenario** | - Customer managed | - Flexible Server: Not available during Preview |
+| **Hybrid Scenario** | - Customer managed | - Flexible Server: supported |
| **Backup and Restore** | - Customer Managed | - Flexible Server: built-in with user configuration on zone-redundant storage | | **Monitoring Database Operations** | - Customer Managed | - Flexible Server: All offer customers the ability to set alerts on the database operation and act upon reaching thresholds | | **Advanced Threat Protection** | - Customers must build this protection for themselves. | - Flexible Server: Not available during Preview |
-| **Disaster Recovery** | - Customer Managed | - Flexible Server: Not available during Preview |
-| **Intelligent Performance** | - Customer Managed | - Flexible Server: Not available during Preview |
+| **Disaster Recovery** | - Customer Managed | - Flexible Server: supported |
+| **Intelligent Performance** | - Customer Managed | - Flexible Server: supported |
## Total cost of ownership (TCO)
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Last updated 11/05/2022
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL ## Release: March 2023
+* General availability of [Read Replica](concepts-read-replicas.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
* Public preview of [PgBouncer Metrics](./concepts-monitoring.md#pgbouncer-metrics) for Azure Database for PostgreSQL ΓÇô Flexible Server. * General availability of [Azure Monitor workbooks](./concepts-workbooks.md) for Azure Database for PostgreSQL ΓÇô Flexible Server.
private-link Disable Private Endpoint Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-endpoint-network-policy.md
By default, network policies are disabled for a subnet in a virtual network. To
Network policies can be enabled either for Network Security Groups only, for User-Defined Routes only, or for both.
-If you enable network security policies for User-Defined Routes, the /32 routes that are generated by the private endpoint and propagated to all the subnets in its own VNet and directly peered VNets will be invalidated if you have User-Defined Routing, which is useful if you want all traffic (including traffic addressed to the private endpoint) to go through a firewall, since otherwise the /32 route would bypass any other route.
+If you enable network security policies for User-Defined Routes, the /32 routes that are generated by the private endpoint and propagated to all the subnets in its own VNet and directly peered VNets will be invalidated if you have User-Defined Routing, which is useful if you want all traffic (including traffic addressed to the private endpoint) to go through a firewall, since otherwise the /32 route would bypass any other route.
+
+> [!NOTE]
+> Unless you configure a UDR, the Private Endpoint Route of /32 will remain active. And for the UDR to work on all private endpoints within the subnet, you need to enable PrivateEndpointNetworkPolicies.
You can use the following to enable or disable the setting:
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
Previously updated : 11/01/2022 Last updated : 03/31/2023
When setting up scan, you can choose to scan an entire Teradata server, or scope
### Required permissions for scan
-Microsoft Purview supports basic authentication (username and password) for scanning Teradata. The Teradata user must have read access to system tables in order to access advanced metadata. For classification, user also needs to have read permission on the tables/views to retrieve sample data.
+Microsoft Purview supports basic authentication (username and password) for scanning Teradata. The user should have SELECT permission granted for every individual system table listed below:
+
+```sql
+grant select on dbc.tvm to [user];
+grant select on dbc.dbase to [user];
+grant select on dbc.tvfields to [user];
+grant select on dbc.udtinfo to [user];
+grant select on dbc.idcol to [user];
+grant select on dbc.udfinfo to [user];
+```
To retrieve data types of view columns, Microsoft Purview issues a prepare statement for `select * from <view>` for each of the view queries and parse the metadata that contains the data type details for better performance. It requires the SELECT data permission on views. If the permission is missing, view column data types will be skipped.
+For classification, user also needs to have read permission on the tables/views to retrieve sample data.
+ ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
purview Scanning Shir Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/scanning-shir-troubleshooting.md
+
+ Title: Troubleshoot SHIR self-hosted integration runtime
+
+description: Learn how to troubleshoot self-hosted integration runtime issues in Microsoft Purview.
+++++ Last updated : 04/01/2023+++
+# Troubleshoot Microsoft Purview SHIR self-hosted integration runtime
+
+APPLIES TO: :::image type="icon" source="media/yes.png" border="false":::Microsoft Purview :::image type="icon" source="media/yes.png" border="false":::Azure Data Factory :::image type="icon" source="media/yes.png" border="false":::Azure Synapse Analytics
+
+This article explores common troubleshooting methods for self-hosted integration runtime (SHIR) in Microsoft Purview, Azure Data Factory and Synapse workspaces.
+
+## Gather Microsoft Purview specific SHIR self-hosted integration runtime logs
+
+For failed activities that are running on a self-hosted IR or a shared IR, the service supports viewing and uploading error logs from the [Windows Event Viewer](https://learn.microsoft.com/shows/inside/event-viewer).
+To get support and troubleshooting guidance for SHIR issues, you may need to generate an error report and send it across to Microsoft. To generate the error report ID, follow the instructions here, and then enter the report ID to search for related known issues.
+
+1. Before starting the scan on the Microsoft Purview governance portal:
+- Navigate to the SHIR VM, or machine and open the Windows Event Viewer.
+- Clear the windows event viewer logs in the "Integration Runtime" section. Right-click on the logs and select clear logs option.
+- Navigate back to the Microsoft Purview governance portal and start the scan.
+- Once the scan shows status "Failed", navigate back to the SHIR VM, or machine and refresh the event viewer in the "Integration Runtime" section.
+- The activity logs are displayed for the failed scan run.
+
+ :::image type="content" source="media/scanning-shir-troubleshooting/shir-event-viewer-logs-ir.png" lightbox="media/scanning-shir-troubleshooting/shir-event-viewer-logs-ir.png" alt-text="Screenshot of the logs for the failed scan SHIR activity.":::
+
+1. For further assistance from Microsoft, select **Send Logs**.
+
+ The **Share the self-hosted integration runtime (SHIR) logs with Microsoft** window opens.
+
+ :::image type="content" source="media/scanning-shir-troubleshooting/shir-send-logs-ir.png" lightbox="media/scanning-shir-troubleshooting/shir-send-logs-ir.png" alt-text="Screenshot of the send logs button on the self-hosted integration runtime (SHIR) to upload logs to Microsoft.":::
+
+1. Select which logs you want to send.
+ * For a *self-hosted IR*, you can upload logs that are related to the failed activity or all logs on the self-hosted IR node.
+ * For a *shared IR*, you can upload only logs that are related to the failed activity.
+
+1. When the logs are uploaded, keep a record of the Report ID for later use if you need further assistance to solve the issue.
+
+ :::image type="content" source="media/scanning-shir-troubleshooting/shir-send-logs-complete.png" lightbox="media/scanning-shir-troubleshooting/shir-send-logs-complete.png" alt-text="Screenshot of the displayed report ID in the upload progress window for the Purview SHIR logs.":::
+
+> [!NOTE]
+> Log viewing and uploading requests are executed on all online self-hosted IR instances. If any logs are missing, make sure that all the self-hosted IR instances are online.
++
+## Self-hosted integration runtime SHIR general failure or error
+
+There are lots of common errors, warnings, issues between Purview SHIR and Azure Data Factory or Azure Synapse SHIR. If your SHIR issues aren't resolved at this stage, refer to the [Azure Data Factory ADF or Azure Synapse SHIR troubleshooting guide](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md)
++
+## Manage your Purview SHIR - next steps
+
+For more help with troubleshooting, try the following resources:
+
+* [Getting started with Microsoft Purview](https://azure.microsoft.com/products/purview/)
+* [Create and Manage SHIR Self-hosted integration runtimes in Purview](manage-integration-runtimes.md)
+* [Stack overflow forum for Microsoft Purview](https://stackoverflow.com/questions/tagged/azure-purview)
+* [Twitter information about Microsoft Purview](https://twitter.com/hashtag/Purview)
+* [Microsoft Purview Troubleshooting](frequently-asked-questions.yml)
++
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 02/24/2023 Last updated : 03/31/2023
The following table provides a brief description of each built-in role. Click th
> | [API Management Service Contributor](#api-management-service-contributor) | Can manage service and the APIs | 312a565d-c81f-4fd8-895a-4e21e48d571c | > | [API Management Service Operator Role](#api-management-service-operator-role) | Can manage service but not the APIs | e022efe7-f5ba-4159-bbe4-b44f577e9b61 | > | [API Management Service Reader Role](#api-management-service-reader-role) | Read-only access to service and APIs | 71522526-b88f-4d52-b57f-d31fc3546d0d |
+> | [API Management Service Workspace API Developer](#api-management-service-workspace-api-developer) | Has read access to tags and products and write access to allow: assigning APIs to products, assigning tags to products and APIs. This role should be assigned on the service scope. | 9565a273-41b9-4368-97d2-aeb0c976a9b3 |
+> | [API Management Service Workspace API Product Manager](#api-management-service-workspace-api-product-manager) | Has the same access as API Management Service Workspace API Developer as well as read access to users and write access to allow assigning users to groups. This role should be assigned on the service scope. | d59a3e9c-6d52-4a5a-aeed-6bf3cf0e31da |
+> | [API Management Workspace API Developer](#api-management-workspace-api-developer) | Has read access to entities in the workspace and read and write access to entities for editing APIs. This role should be assigned on the workspace scope. | 56328988-075d-4c6a-8766-d93edd6725b6 |
+> | [API Management Workspace API Product Manager](#api-management-workspace-api-product-manager) | Has read access to entities in the workspace and read and write access to entities for publishing APIs. This role should be assigned on the workspace scope. | 73c2c328-d004-4c5e-938c-35c6f5679a1f |
+> | [API Management Workspace Contributor](#api-management-workspace-contributor) | Can manage the workspace and view, but not modify its members. This role should be assigned on the workspace scope. | 0c34c906-8d99-4cb7-8bb7-33f5b0a1a799 |
+> | [API Management Workspace Reader](#api-management-workspace-reader) | Has read-only access to entities in the workspace. This role should be assigned on the workspace scope. | ef1c2c96-4a77-49e8-b9a4-6179fe1d2fd2 |
> | [App Configuration Data Owner](#app-configuration-data-owner) | Allows full access to App Configuration data. | 5ae67dd6-50cb-40e7-96ff-dc2bfa4b606b | > | [App Configuration Data Reader](#app-configuration-data-reader) | Allows read access to App Configuration data. | 516239f1-63e1-4d78-a4de-a74fb236a071 | > | [Azure Relay Listener](#azure-relay-listener) | Allows for listen access to Azure Relay resources. | 26e0b698-aa6d-4085-9386-aadae190014d |
Let's you manage the OS of your resource via Windows Admin Center as an administ
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/write | Creates a security rule or updates an existing security rule | > | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/write | Create or update the endpoint to the target resource. | > | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/read | Get or list of endpoints to the target resource. |
-> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listManagedProxyDetails/action | |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listManagedProxyDetails/action | List the managed proxy details to the resource. |
> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/read | Get the properties of a virtual machine | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/read | Retrieves the summary of the latest patch assessment operation | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/softwarePatches/read | Retrieves list of patches assessed during the last patch assessment operation |
Lets you manage backup service, but can't create vaults and give access to other
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/findRestorableTimeRanges/action | Finds Restorable Time Ranges |
-> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/write | Create BackupVault operation creates an Azure resource of type 'Backup Vault' |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/write | Update BackupVault operation updates an Azure resource of type 'Backup Vault' |
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Resource Group | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/operationResults/read | Gets Operation Result of a Patch Operation for a Backup Vault | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/operationStatus/read | Returns Backup Operation Status for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/checkNameAvailability/action | Checks if the requested BackupVault Name is Available |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/checkFeatureSupport/action | Validates if a feature is supported |
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Resource Group | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Resource Group | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/validateForBackup/action | Validates for backup of Backup Instance |
-> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | Operation returns the list of Operations for a Resource Provider |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Lets you manage backup service, but can't create vaults and give access to other
"Microsoft.DataProtection/backupVaults/operationResults/read", "Microsoft.DataProtection/backupVaults/operationStatus/read", "Microsoft.DataProtection/locations/checkNameAvailability/action",
+ "Microsoft.DataProtection/locations/checkFeatureSupport/action",
"Microsoft.DataProtection/backupVaults/read", "Microsoft.DataProtection/backupVaults/read", "Microsoft.DataProtection/locations/operationStatus/read",
Lets you manage backup services, except removal of backup, vault creation and gi
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Resource Group | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. |
-> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | Operation returns the list of Operations for a Resource Provider |
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/validateForBackup/action | Validates for backup of Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance |
Can view backup services, but can't make changes [Learn more](../backup/backup-r
> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. | > | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/validateForBackup/action | Validates for backup of Backup Instance |
-> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/operations/read | Operation returns the list of Operations for a Resource Provider |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Lets you manage the security-related policies of SQL servers and databases, but
> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/securityAlertPolicies/* | | > | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/databases/transparentDataEncryption/* | | > | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/vulnerabilityAssessments/* | |
+> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/serverConfigurationOptions/read | Gets properties for the specified Azure SQL Managed Instance Server Configuration Option. |
+> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/serverConfigurationOptions/write | Updates Azure SQL Managed Instance's Server Configuration Option properties for the specified instance. |
+> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/locations/serverConfigurationOptionAzureAsyncOperation/read | Gets the status of Azure SQL Managed Instance Server Configuration Option Azure async operation. |
> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/advancedThreatProtectionSettings/read | Retrieve a list of server Advanced Threat Protection settings configured for a given server |
> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/advancedThreatProtectionSettings/write | Change the server Advanced Threat Protection settings for a given server |
> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/auditingSettings/* | Create and manage SQL server auditing setting |
Lets you manage the security-related policies of SQL servers and databases, but
"Microsoft.Sql/managedInstances/securityAlertPolicies/*", "Microsoft.Sql/managedInstances/databases/transparentDataEncryption/*", "Microsoft.Sql/managedInstances/vulnerabilityAssessments/*",
+ "Microsoft.Sql/managedInstances/serverConfigurationOptions/read",
+ "Microsoft.Sql/managedInstances/serverConfigurationOptions/write",
+ "Microsoft.Sql/locations/serverConfigurationOptionAzureAsyncOperation/read",
"Microsoft.Sql/servers/advancedThreatProtectionSettings/read", "Microsoft.Sql/servers/advancedThreatProtectionSettings/write", "Microsoft.Sql/servers/auditingSettings/*",
Delete private data from a Log Analytics workspace. [Learn more](../azure-monito
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/components/*/read | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/components/purge/action | Purging data from Application Insights |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/*/read | View log analytics data |
-> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/purge/action | Delete specified data from workspace |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/purge/action | Delete specified data by query from workspace. |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
Can perform all actions within an Azure Machine Learning workspace, except for c
> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/action | |
> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/delete | |
> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/*/write | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/featurestores/read | Gets the Machine Learning Services FeatureStore(s) |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/featurestores/checkNameAvailability/read | Checks the Machine Learning Services FeatureStore name availability |
> | **NotActions** | |
> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/delete | Deletes the Machine Learning Services Workspace(s) |
> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/write | Creates or updates a Machine Learning Services Workspace(s) |
Can perform all actions within an Azure Machine Learning workspace, except for c
"Microsoft.MachineLearningServices/workspaces/*/read", "Microsoft.MachineLearningServices/workspaces/*/action", "Microsoft.MachineLearningServices/workspaces/*/delete",
- "Microsoft.MachineLearningServices/workspaces/*/write"
+ "Microsoft.MachineLearningServices/workspaces/*/write",
+ "Microsoft.MachineLearningServices/featurestores/read",
+ "Microsoft.MachineLearningServices/featurestores/checkNameAvailability/read"
], "notActions": [ "Microsoft.MachineLearningServices/workspaces/delete",
Lets you perform detect, verify, identify, group, and find similar operations on
> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/identify/action | 1-to-many identification to find the closest matches of the specific query person face from a person group or large person group. |
> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/group/action | Divide candidate faces into groups based on face similarity. |
> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/findsimilars/action | Given query face's faceId, to search the similar-looking faces from a faceId array, a face list or a large face list. faceId |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/detectliveness/multimodal/action | Performs liveness detection on a target face in a sequence of infrared, color and/or depth images, and returns the liveness classification of the target face as either 'real face', 'spoof face', or 'uncertain' if a classification cannot be made with the given inputs. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/detectliveness/singlemodal/action | Performs liveness detection on a target face in a sequence of images of the same modality (e.g. color or infrared), and returns the liveness classification of the target face as either 'real face', 'spoof face', or 'uncertain' if a classification cannot be made with the given inputs. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/detectlivenesswithverify/singlemodal/action | Detects liveness of a target face in a sequence of images of the same stream type (e.g. color) and then compares with VerifyImage to return confidence score for identity scenarios. |
> | **NotDataActions** | |
> | *none* | |
Lets you perform detect, verify, identify, group, and find similar operations on
"Microsoft.CognitiveServices/accounts/Face/verify/action", "Microsoft.CognitiveServices/accounts/Face/identify/action", "Microsoft.CognitiveServices/accounts/Face/group/action",
- "Microsoft.CognitiveServices/accounts/Face/findsimilars/action"
+ "Microsoft.CognitiveServices/accounts/Face/findsimilars/action",
+ "Microsoft.CognitiveServices/accounts/Face/detectliveness/multimodal/action",
+ "Microsoft.CognitiveServices/accounts/Face/detectliveness/singlemodal/action",
+ "Microsoft.CognitiveServices/accounts/Face/detectlivenesswithverify/singlemodal/action"
], "notDataActions": [] }
Read-only access to service and APIs [Learn more](../api-management/api-manageme
}
```
+### API Management Service Workspace API Developer
+
+Has read access to tags and products and write access to allow: assigning APIs to products, assigning tags to products and APIs. This role should be assigned on the service scope. [Learn more](../api-management/api-management-role-based-access-control.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/read | Lists a collection of tags defined within a service instance. or Gets the details of the tag specified by its identifier. |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/apiLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/operationLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/productLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/products/read | Lists a collection of products in the specified service instance. or Gets the details of the product specified by its identifier. |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/products/apiLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/read | Read metadata for an API Management Service instance |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Has read access to tags and products and write access to allow: assigning APIs to products, assigning tags to products and APIs. This role should be assigned on the service scope.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/9565a273-41b9-4368-97d2-aeb0c976a9b3",
+ "name": "9565a273-41b9-4368-97d2-aeb0c976a9b3",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.ApiManagement/service/tags/read",
+ "Microsoft.ApiManagement/service/tags/apiLinks/*",
+ "Microsoft.ApiManagement/service/tags/operationLinks/*",
+ "Microsoft.ApiManagement/service/tags/productLinks/*",
+ "Microsoft.ApiManagement/service/products/read",
+ "Microsoft.ApiManagement/service/products/apiLinks/*",
+ "Microsoft.ApiManagement/service/read",
+ "Microsoft.Authorization/*/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "API Management Service Workspace API Developer",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
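
Because the description above says this role should be assigned on the service scope, it may help to see what such an assignment looks like. The sketch below is a request body for the role assignments REST API (`PUT {scope}/providers/Microsoft.Authorization/roleAssignments/{roleAssignmentId}?api-version=2022-04-01`), where `{scope}` would be the API Management service resource ID, such as `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.ApiManagement/service/{serviceName}`. Only the role definition GUID comes from this article; the principal values are placeholders.

```json
{
  "properties": {
    "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/9565a273-41b9-4368-97d2-aeb0c976a9b3",
    "principalId": "{principalObjectId}",
    "principalType": "User"
  }
}
```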
+
+### API Management Service Workspace API Product Manager
+
+Has the same access as API Management Service Workspace API Developer as well as read access to users and write access to allow assigning users to groups. This role should be assigned on the service scope. [Learn more](../api-management/api-management-role-based-access-control.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/users/read | Lists a collection of registered users in the specified service instance. or Gets the details of the user specified by its identifier. |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/read | Lists a collection of tags defined within a service instance. or Gets the details of the tag specified by its identifier. |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/apiLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/operationLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/productLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/products/read | Lists a collection of products in the specified service instance. or Gets the details of the product specified by its identifier. |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/products/apiLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/groups/users/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/read | Read metadata for an API Management Service instance |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Has the same access as API Management Service Workspace API Developer as well as read access to users and write access to allow assigning users to groups. This role should be assigned on the service scope.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/d59a3e9c-6d52-4a5a-aeed-6bf3cf0e31da",
+ "name": "d59a3e9c-6d52-4a5a-aeed-6bf3cf0e31da",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.ApiManagement/service/users/read",
+ "Microsoft.ApiManagement/service/tags/read",
+ "Microsoft.ApiManagement/service/tags/apiLinks/*",
+ "Microsoft.ApiManagement/service/tags/operationLinks/*",
+ "Microsoft.ApiManagement/service/tags/productLinks/*",
+ "Microsoft.ApiManagement/service/products/read",
+ "Microsoft.ApiManagement/service/products/apiLinks/*",
+ "Microsoft.ApiManagement/service/groups/users/*",
+ "Microsoft.ApiManagement/service/read",
+ "Microsoft.Authorization/*/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "API Management Service Workspace API Product Manager",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### API Management Workspace API Developer
+
+Has read access to entities in the workspace and read and write access to entities for editing APIs. This role should be assigned on the workspace scope. [Learn more](../api-management/api-management-role-based-access-control.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/*/read | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/apis/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/apiVersionSets/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/policies/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/schemas/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/products/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/policyFragments/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/namedValues/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/tags/* | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Has read access to entities in the workspace and read and write access to entities for editing APIs. This role should be assigned on the workspace scope.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/56328988-075d-4c6a-8766-d93edd6725b6",
+ "name": "56328988-075d-4c6a-8766-d93edd6725b6",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.ApiManagement/service/workspaces/*/read",
+ "Microsoft.ApiManagement/service/workspaces/apis/*",
+ "Microsoft.ApiManagement/service/workspaces/apiVersionSets/*",
+ "Microsoft.ApiManagement/service/workspaces/policies/*",
+ "Microsoft.ApiManagement/service/workspaces/schemas/*",
+ "Microsoft.ApiManagement/service/workspaces/products/*",
+ "Microsoft.ApiManagement/service/workspaces/policyFragments/*",
+ "Microsoft.ApiManagement/service/workspaces/namedValues/*",
+ "Microsoft.ApiManagement/service/workspaces/tags/*",
+ "Microsoft.Authorization/*/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "API Management Workspace API Developer",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
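
The workspace-scoped roles differ from the service workspace roles above mainly in where they are assigned. As a minimal sketch (again, only the role definition GUID comes from this article), the same role assignment body works when the scope points at the workspace child resource, for example `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.ApiManagement/service/{serviceName}/workspaces/{workspaceName}`:

```json
{
  "properties": {
    "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/56328988-075d-4c6a-8766-d93edd6725b6",
    "principalId": "{principalObjectId}",
    "principalType": "Group"
  }
}
```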
+
+### API Management Workspace API Product Manager
+
+Has read access to entities in the workspace and read and write access to entities for publishing APIs. This role should be assigned on the workspace scope. [Learn more](../api-management/api-management-role-based-access-control.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/*/read | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/products/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/subscriptions/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/groups/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/tags/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/notifications/* | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Has read access to entities in the workspace and read and write access to entities for publishing APIs. This role should be assigned on the workspace scope.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/73c2c328-d004-4c5e-938c-35c6f5679a1f",
+ "name": "73c2c328-d004-4c5e-938c-35c6f5679a1f",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.ApiManagement/service/workspaces/*/read",
+ "Microsoft.ApiManagement/service/workspaces/products/*",
+ "Microsoft.ApiManagement/service/workspaces/subscriptions/*",
+ "Microsoft.ApiManagement/service/workspaces/groups/*",
+ "Microsoft.ApiManagement/service/workspaces/tags/*",
+ "Microsoft.ApiManagement/service/workspaces/notifications/*",
+ "Microsoft.Authorization/*/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "API Management Workspace API Product Manager",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### API Management Workspace Contributor
+
+Can manage the workspace and view, but not modify its members. This role should be assigned on the workspace scope. [Learn more](../api-management/api-management-role-based-access-control.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/* | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can manage the workspace and view, but not modify its members. This role should be assigned on the workspace scope.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/0c34c906-8d99-4cb7-8bb7-33f5b0a1a799",
+ "name": "0c34c906-8d99-4cb7-8bb7-33f5b0a1a799",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.ApiManagement/service/workspaces/*",
+ "Microsoft.Authorization/*/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "API Management Workspace Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### API Management Workspace Reader
+
+Has read-only access to entities in the workspace. This role should be assigned on the workspace scope. [Learn more](../api-management/api-management-role-based-access-control.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | --- | --- |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/workspaces/*/read | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Has read-only access to entities in the workspace. This role should be assigned on the workspace scope.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/ef1c2c96-4a77-49e8-b9a4-6179fe1d2fd2",
+ "name": "ef1c2c96-4a77-49e8-b9a4-6179fe1d2fd2",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.ApiManagement/service/workspaces/*/read",
+ "Microsoft.Authorization/*/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "API Management Workspace Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
### App Configuration Data Owner

Allows full access to App Configuration data. [Learn more](../azure-app-configuration/concept-enable-rbac.md)
Microsoft Sentinel Contributor [Learn more](../sentinel/roles.md)
> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get existing OMS solution |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | |
-> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get datasources under a workspace. |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get data source under a workspace. |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/querypacks/*/read | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/workbooks/* | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/myworkbooks/read | Read a private Workbook |
Microsoft Sentinel Reader [Learn more](../sentinel/roles.md)
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/analytics/query/action | Search using new engine. |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/*/read | View log analytics data |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/LinkedServices/read | Get linked services under given workspace. |
-> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/savedSearches/read | Gets a saved search query |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/savedSearches/read | Gets a saved search query. |
> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get existing OMS solution |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/querypacks/*/read | |
-> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get datasources under a workspace. |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get data source under a workspace. |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/workbooks/read | Read a workbook |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/myworkbooks/read | Read a private Workbook |
> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
Microsoft Sentinel Responder [Learn more](../sentinel/roles.md)
> | [Microsoft.SecurityInsights](resource-provider-operations.md#microsoftsecurityinsights)/threatIntelligence/queryIndicators/action | Query Threat Intelligence Indicators |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/analytics/query/action | Search using new engine. |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/*/read | View log analytics data |
-> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get datasources under a workspace. |
-> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/savedSearches/read | Gets a saved search query |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get data source under a workspace. |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/savedSearches/read | Gets a saved search query. |
> | [Microsoft.OperationsManagement](resource-provider-operations.md#microsoftoperationsmanagement)/solutions/read | Get existing OMS solution |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/read | Run queries over the data in the workspace |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/query/*/read | |
-> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get datasources under a workspace. |
+> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/workspaces/dataSources/read | Get data source under a workspace. |
> | [Microsoft.OperationalInsights](resource-provider-operations.md#microsoftoperationalinsights)/querypacks/*/read | |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/workbooks/read | Read a workbook |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/myworkbooks/read | Read a private Workbook |
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 02/24/2023 Last updated : 03/31/2023
Click the resource provider name in the following table to see the list of opera
| [Microsoft.Cdn](#microsoftcdn) |
| [Microsoft.ClassicNetwork](#microsoftclassicnetwork) |
| [Microsoft.HybridConnectivity](#microsofthybridconnectivity) |
+| [Microsoft.MobileNetwork](#microsoftmobilenetwork) |
| [Microsoft.Network](#microsoftnetwork) |
| **Storage** |
| [Microsoft.ClassicStorage](#microsoftclassicstorage) |
| [Microsoft.DataBox](#microsoftdatabox) |
| [Microsoft.DataShare](#microsoftdatashare) |
| [Microsoft.ElasticSan](#microsoftelasticsan) |
-| [Microsoft.ImportExport](#microsoftimportexport) |
| [Microsoft.NetApp](#microsoftnetapp) |
| [Microsoft.Storage](#microsoftstorage) |
| [Microsoft.StorageCache](#microsoftstoragecache) |
Click the resource provider name in the following table to see the list of opera
| [Microsoft.Dashboard](#microsoftdashboard) |
| [Microsoft.DigitalTwins](#microsoftdigitaltwins) |
| [Microsoft.LoadTestService](#microsoftloadtestservice) |
-| [Microsoft.MobileNetwork](#microsoftmobilenetwork) |
| [Microsoft.ServicesHub](#microsoftserviceshub) |
Azure service: [Azure Container Apps](../container-apps/index.yml)
> | microsoft.app/containerapps/sourcecontrols/read | Get Container App Source Control Configuration |
> | microsoft.app/containerapps/sourcecontrols/delete | Delete Container App Source Control Configuration |
> | microsoft.app/containerapps/sourcecontrols/operationresults/read | Get Container App Source Control Long Running Operation Result |
-> | microsoft.app/jobs/write | Create or update a Container Apps Job |
-> | microsoft.app/jobs/delete | Delete a Container Apps Job |
-> | microsoft.app/jobs/read | Get a Container Apps Job |
-> | microsoft.app/jobs/listsecrets/action | List secrets of a container apps job |
> | microsoft.app/locations/availablemanagedenvironmentsworkloadprofiletypes/read | Get Available Workload Profile Types in a Region |
> | microsoft.app/locations/billingmeters/read | Get Billing Meters in a Region |
> | microsoft.app/locations/containerappoperationresults/read | Get a Container App Long Running Operation Result |
> | microsoft.app/locations/containerappoperationstatuses/read | Get a Container App Long Running Operation Status |
-> | microsoft.app/locations/containerappsjoboperationresults/read | Get a Container Apps Job Long Running Operation Result |
-> | microsoft.app/locations/containerappsjoboperationstatuses/read | Get a Container Apps Job Long Running Operation Status |
> | microsoft.app/locations/managedenvironmentoperationresults/read | Get a Managed Environment Long Running Operation Result |
> | microsoft.app/locations/managedenvironmentoperationstatuses/read | Get a Managed Environment Long Running Operation Status |
> | microsoft.app/managedenvironments/join/action | Allows to create a Container App in a Managed Environment |
Azure service: Microsoft.HybridConnectivity
> | Microsoft.HybridConnectivity/endpoints/write | Create or update the endpoint to the target resource. |
> | Microsoft.HybridConnectivity/endpoints/delete | Deletes the endpoint access to the target resource. |
> | Microsoft.HybridConnectivity/endpoints/listCredentials/action | List the endpoint access credentials to the resource. |
+> | Microsoft.HybridConnectivity/endpoints/listIngressGatewayCredentials/action | List the ingress gateway access credentials to the resource. |
+> | Microsoft.HybridConnectivity/endpoints/listManagedProxyDetails/action | List the managed proxy details to the resource. |
+> | Microsoft.HybridConnectivity/endpoints/serviceConfigurations/read | Get or list of serviceConfigurations to the target resource. |
+> | Microsoft.HybridConnectivity/endpoints/serviceConfigurations/write | Create or update the serviceConfigurations to the target resource. |
+> | Microsoft.HybridConnectivity/endpoints/serviceConfigurations/delete | Deletes the serviceConfigurations access to the target resource. |
> | Microsoft.HybridConnectivity/Locations/OperationStatuses/read | read OperationStatuses |
> | Microsoft.HybridConnectivity/operations/read | Get the list of Operations |
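
The new serviceConfigurations operations can be granted on their own through a custom role if the broader endpoint management permissions aren't wanted. A minimal sketch in the JSON format accepted by `az role definition create --role-definition @role.json` follows; the role name, description, and assignable scope are illustrative, not part of this article.

```json
{
  "Name": "Hybrid Connectivity Service Configuration Operator (example)",
  "IsCustom": true,
  "Description": "Example custom role that manages serviceConfigurations on endpoints without managing the endpoints themselves.",
  "Actions": [
    "Microsoft.HybridConnectivity/endpoints/serviceConfigurations/read",
    "Microsoft.HybridConnectivity/endpoints/serviceConfigurations/write",
    "Microsoft.HybridConnectivity/endpoints/serviceConfigurations/delete"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscriptionId}"
  ]
}
```
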
+### Microsoft.MobileNetwork
+
+Azure service: [Mobile networks](../private-5g-core/index.yml)
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | --- | --- |
+> | Microsoft.MobileNetwork/register/action | Register the subscription for Microsoft.MobileNetwork |
+> | Microsoft.MobileNetwork/unregister/action | Unregister the subscription for Microsoft.MobileNetwork |
+> | Microsoft.MobileNetwork/Locations/OperationStatuses/read | read OperationStatuses |
+> | Microsoft.MobileNetwork/Locations/OperationStatuses/write | write OperationStatuses |
+> | Microsoft.MobileNetwork/mobileNetworks/read | Gets information about the specified mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/write | Creates or updates a mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/delete | Deletes the specified mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/write | Updates mobile network tags. |
+> | Microsoft.MobileNetwork/mobileNetworks/read | Lists all the mobile networks in a subscription. |
+> | Microsoft.MobileNetwork/mobileNetworks/read | Lists all the mobile networks in a resource group. |
+> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/read | Gets information about the specified data network. |
+> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/write | Creates or updates a data network. Must be created in the same location as its parent mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/delete | Deletes the specified data network. |
+> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/write | Updates data network tags. |
+> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/read | Lists all data networks in the mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/services/read | Gets information about the specified service. |
+> | Microsoft.MobileNetwork/mobileNetworks/services/write | Creates or updates a service. Must be created in the same location as its parent mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/services/delete | Deletes the specified service. |
+> | Microsoft.MobileNetwork/mobileNetworks/services/write | Updates service tags. |
+> | Microsoft.MobileNetwork/mobileNetworks/services/read | Gets all the services in a mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/read | Gets information about the specified SIM policy. |
+> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/write | Creates or updates a SIM policy. Must be created in the same location as its parent mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/delete | Deletes the specified SIM policy. |
+> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/write | Updates SIM policy tags. |
+> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/read | Gets all the SIM policies in a mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/sites/read | Gets information about the specified mobile network site. |
+> | Microsoft.MobileNetwork/mobileNetworks/sites/write | Creates or updates a mobile network site. Must be created in the same location as its parent mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/sites/delete | Deletes the specified mobile network site. This will also delete any network functions that are a part of this site. |
+> | Microsoft.MobileNetwork/mobileNetworks/sites/write | Updates site tags. |
+> | Microsoft.MobileNetwork/mobileNetworks/sites/read | Lists all sites in the mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/slices/read | Gets information about the specified network slice. |
+> | Microsoft.MobileNetwork/mobileNetworks/slices/write | Creates or updates a network slice. Must be created in the same location as its parent mobile network. |
+> | Microsoft.MobileNetwork/mobileNetworks/slices/delete | Deletes the specified network slice. |
+> | Microsoft.MobileNetwork/mobileNetworks/slices/write | Updates slice tags. |
+> | Microsoft.MobileNetwork/mobileNetworks/slices/read | Lists all slices in the mobile network. |
+> | Microsoft.MobileNetwork/Operations/read | read Operations |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/read | Gets information about the specified packet core control plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/write | Creates or updates a packet core control plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/delete | Deletes the specified packet core control plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/write | Updates packet core control planes tags. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/read | Lists all the packet core control planes in a subscription. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/read | Lists all the packet core control planes in a resource group. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/rollback/action | Roll back the specified packet core control plane to the previous version, "rollbackVersion". Multiple consecutive rollbacks are not possible. This action may cause a service outage. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/reinstall/action | Reinstall the specified packet core control plane. This action will remove any transaction state from the packet core to return it to a known state. This action will cause a service outage. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/collectDiagnosticsPackage/action | Collect a diagnostics package for the specified packet core control plane. This action will upload the diagnostics to a storage account. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/read | Gets information about the specified packet core data plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/write | Creates or updates a packet core data plane. Must be created in the same location as its parent packet core control plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/delete | Deletes the specified packet core data plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/write | Updates packet core data planes tags. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/read | Lists all the packet core data planes associated with a packet core control plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/read | Gets information about the specified attached data network. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/write | Creates or updates an attached data network. Must be created in the same location as its parent packet core data plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/delete | Deletes the specified attached data network. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/write | Updates an attached data network tags. |
+> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/read | Gets all the attached data networks associated with a packet core data plane. |
+> | Microsoft.MobileNetwork/packetCoreControlPlaneVersions/read | Gets information about the specified packet core control plane version. |
+> | Microsoft.MobileNetwork/packetCoreControlPlaneVersions/read | Lists all supported packet core control planes versions. |
+> | Microsoft.MobileNetwork/radioAccessNetworks/read | Gets information about the specified RAN. |
+> | Microsoft.MobileNetwork/radioAccessNetworks/write | Creates or updates a RAN. |
+> | Microsoft.MobileNetwork/radioAccessNetworks/delete | Deletes the specified RAN. |
+> | Microsoft.MobileNetwork/radioAccessNetworks/write | Updates RAN tags. |
+> | Microsoft.MobileNetwork/radioAccessNetworks/read | Gets all the RANs in a subscription. |
+> | Microsoft.MobileNetwork/radioAccessNetworks/read | Gets all the RANs in a resource group. |
+> | Microsoft.MobileNetwork/simGroups/uploadSims/action | Bulk upload SIMs to a SIM group. |
+> | Microsoft.MobileNetwork/simGroups/deleteSims/action | Bulk delete SIMs from a SIM group. |
+> | Microsoft.MobileNetwork/simGroups/uploadEncryptedSims/action | Bulk upload SIMs in encrypted form to a SIM group. The SIM credentials must be encrypted. |
+> | Microsoft.MobileNetwork/simGroups/read | Gets information about the specified SIM group. |
+> | Microsoft.MobileNetwork/simGroups/write | Creates or updates a SIM group. |
+> | Microsoft.MobileNetwork/simGroups/delete | Deletes the specified SIM group. |
+> | Microsoft.MobileNetwork/simGroups/write | Updates SIM group tags. |
+> | Microsoft.MobileNetwork/simGroups/read | Gets all the SIM groups in a subscription. |
+> | Microsoft.MobileNetwork/simGroups/read | Gets all the SIM groups in a resource group. |
+> | Microsoft.MobileNetwork/simGroups/sims/read | Gets information about the specified SIM. |
+> | Microsoft.MobileNetwork/simGroups/sims/write | Creates or updates a SIM. |
+> | Microsoft.MobileNetwork/simGroups/sims/delete | Deletes the specified SIM. |
+> | Microsoft.MobileNetwork/simGroups/sims/read | Gets all the SIMs in a SIM group. |
+> | Microsoft.MobileNetwork/sims/read | Gets information about the specified SIM. |
+> | Microsoft.MobileNetwork/sims/write | Creates or updates a SIM. |
+> | Microsoft.MobileNetwork/sims/delete | Deletes the specified SIM. |
+> | Microsoft.MobileNetwork/sims/write | Updates SIM tags. |
+> | Microsoft.MobileNetwork/sims/read | Gets all the SIMs in a subscription. |
+> | Microsoft.MobileNetwork/sims/read | Gets all the SIMs in a resource group. |
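
The Microsoft.MobileNetwork operations above combine into custom roles like any other provider's. As a hedged sketch (role name, description, and scope are illustrative), a read-only role over mobile networks, sites, and SIM inventory might look like this, created with `az role definition create --role-definition @role.json`:

```json
{
  "Name": "Mobile Network Reader (example)",
  "IsCustom": true,
  "Description": "Example custom role with read-only access to mobile networks, sites, SIM groups, and SIMs.",
  "Actions": [
    "Microsoft.MobileNetwork/mobileNetworks/read",
    "Microsoft.MobileNetwork/mobileNetworks/sites/read",
    "Microsoft.MobileNetwork/simGroups/read",
    "Microsoft.MobileNetwork/simGroups/sims/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/{subscriptionId}"
  ]
}
```
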
+
### Microsoft.Network

Azure service: [Application Gateway](../application-gateway/index.yml), [Azure Bastion](../bastion/index.yml), [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md), [Azure DNS](../dns/index.yml), [Azure ExpressRoute](../expressroute/index.yml), [Azure Firewall](../firewall/index.yml), [Azure Front Door Service](../frontdoor/index.yml), [Azure Private Link](../private-link/index.yml), [Load Balancer](../load-balancer/index.yml), [Network Watcher](../network-watcher/index.yml), [Traffic Manager](../traffic-manager/index.yml), [Virtual Network](../virtual-network/index.yml), [Virtual WAN](../virtual-wan/index.yml), [VPN Gateway](../vpn-gateway/index.yml)
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/locations/setLoadBalancerFrontendPublicIpAddresses/action | SetLoadBalancerFrontendPublicIpAddresses targets frontend IP configurations of 2 load balancers. Azure Resource Manager IDs of the IP configurations are provided in the body of the request. |
> | Microsoft.Network/locations/queryNetworkSecurityPerimeter/action | Queries Network Security Perimeter by the perimeter GUID |
> | Microsoft.Network/locations/applicationGatewayWafDynamicManifests/read | Get the application gateway waf dynamic manifest |
-> | Microsoft.Network/locations/applicationgatewaywafdynamicmanifests/default/read | Get Application Gateway Waf Dynamic Manifest Default entry |
+> | Microsoft.Network/locations/applicationGatewayWafDynamicManifests/default/read | Get Application Gateway Waf Dynamic Manifest Default entry |
> | Microsoft.Network/locations/autoApprovedPrivateLinkServices/read | Gets Auto Approved Private Link Services |
> | Microsoft.Network/locations/availableDelegations/read | Gets Available Delegations |
> | Microsoft.Network/locations/availablePrivateEndpointTypes/read | Gets available Private Endpoint resources |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/networkExperimentProfiles/experiments/timeseries/action | Get an Internet Analyzer test's time series |
> | Microsoft.Network/networkExperimentProfiles/experiments/latencyScorecard/action | Get an Internet Analyzer test's latency scorecard |
> | Microsoft.Network/networkExperimentProfiles/preconfiguredEndpoints/read | Get an Internet Analyzer profile's pre-configured endpoints |
+> | Microsoft.Network/networkGroupMemberships/read | List Network Group Memberships |
> | Microsoft.Network/networkIntentPolicies/read | Gets a Network Intent Policy Description |
> | Microsoft.Network/networkIntentPolicies/write | Creates a Network Intent Policy or updates an existing Network Intent Policy |
> | Microsoft.Network/networkIntentPolicies/delete | Deletes a Network Intent Policy |
Azure service: [Azure Elastic SAN](../storage/elastic-san/index.yml)
> | Microsoft.ElasticSan/operations/read | List the operations supported by Microsoft.ElasticSan |
> | Microsoft.ElasticSan/skus/read | Get Sku |
-### Microsoft.ImportExport
-
-Azure service: [Azure Import/Export](../import-export/storage-import-export-service.md)
-
-> [!div class="mx-tableFixed"]
-> | Action | Description |
-> | --- | --- |
-> | Microsoft.ImportExport/register/action | Registers the subscription for the import/export resource provider and enables the creation of import/export jobs. |
-> | Microsoft.ImportExport/jobs/write | Creates a job with the specified parameters or update the properties or tags for the specified job. |
-> | Microsoft.ImportExport/jobs/read | Gets the properties for the specified job or returns the list of jobs. |
-> | Microsoft.ImportExport/jobs/listBitLockerKeys/action | Gets the BitLocker keys for the specified job. |
-> | Microsoft.ImportExport/jobs/delete | Deletes an existing job. |
-> | Microsoft.ImportExport/locations/read | Gets the properties for the specified location or returns the list of locations. |
-> | Microsoft.ImportExport/operations/read | Gets the operations supported by the Resource Provider. |
-
### Microsoft.NetApp

Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
Azure service: [StorSimple](../storsimple/index.yml)
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
-> | Microsoft.StorSimple/register/action | Register Provider Microsoft.StorSimple |
-> | Microsoft.StorSimple/managers/clearAlerts/action | Clear all the alerts associated with the device manager. |
-> | Microsoft.StorSimple/managers/getEncryptionKey/action | Get encryption key for the device manager. |
-> | Microsoft.StorSimple/managers/read | Lists or gets the Device Managers |
-> | Microsoft.StorSimple/managers/delete | Deletes the Device Managers |
-> | Microsoft.StorSimple/managers/write | Create or update the Device Managers |
-> | Microsoft.StorSimple/managers/configureDevice/action | Configures a device |
-> | Microsoft.StorSimple/managers/migrateClassicToResourceManager/action | Migrate from Classic to Resource Manager |
-> | Microsoft.StorSimple/managers/listActivationKey/action | Gets the activation key of the StorSimple Device Manager. |
-> | Microsoft.StorSimple/managers/regenerateActivationKey/action | Regenerate the Activation key for an existing StorSimple Device Manager. |
-> | Microsoft.StorSimple/managers/listPublicEncryptionKey/action | List public encryption keys of a StorSimple Device Manager. |
-> | Microsoft.StorSimple/managers/provisionCloudAppliance/action | Create a new cloud appliance. |
> | Microsoft.StorSimple/Managers/write | Create Vault operation creates an Azure resource of type 'vault' |
> | Microsoft.StorSimple/Managers/read | The Get Vault operation gets an object representing the Azure resource of type 'vault' |
> | Microsoft.StorSimple/Managers/delete | The Delete Vault operation deletes the specified Azure resource of type 'vault' |
-> | Microsoft.StorSimple/managers/accessControlRecords/read | Lists or gets the Access Control Records |
-> | Microsoft.StorSimple/managers/accessControlRecords/write | Create or update the Access Control Records |
-> | Microsoft.StorSimple/managers/accessControlRecords/delete | Deletes the Access Control Records |
-> | Microsoft.StorSimple/managers/accessControlRecords/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/alerts/read | Lists or gets the Alerts |
-> | Microsoft.StorSimple/managers/backups/read | Lists or gets the Backup Set |
-> | Microsoft.StorSimple/managers/bandwidthSettings/read | List the Bandwidth Settings (8000 Series Only) |
-> | Microsoft.StorSimple/managers/bandwidthSettings/write | Creates a new or updates Bandwidth Settings (8000 Series Only) |
-> | Microsoft.StorSimple/managers/bandwidthSettings/delete | Deletes an existing Bandwidth Settings (8000 Series Only) |
-> | Microsoft.StorSimple/managers/bandwidthSettings/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/certificates/write | Create or update the Certificates |
> | Microsoft.StorSimple/Managers/certificates/write | The Update Resource Certificate operation updates the resource/vault credential certificate. |
-> | Microsoft.StorSimple/managers/cloudApplianceConfigurations/read | List the Cloud Appliance Supported Configurations |
-> | Microsoft.StorSimple/managers/devices/sendTestAlertEmail/action | Send test alert email to configured email recipients. |
-> | Microsoft.StorSimple/managers/devices/scanForUpdates/action | Scan for updates in a device. |
-> | Microsoft.StorSimple/managers/devices/download/action | Download updates for a device. |
-> | Microsoft.StorSimple/managers/devices/install/action | Install updates on a device. |
-> | Microsoft.StorSimple/managers/devices/read | Lists or gets the Devices |
-> | Microsoft.StorSimple/managers/devices/write | Create or update the Devices |
-> | Microsoft.StorSimple/managers/devices/delete | Deletes the Devices |
-> | Microsoft.StorSimple/managers/devices/deactivate/action | Deactivates a device. |
-> | Microsoft.StorSimple/managers/devices/failover/action | Failover of the device. |
-> | Microsoft.StorSimple/managers/devices/publishSupportPackage/action | Publish the support package for an existing device. A StorSimple support package is an easy-to-use mechanism that collects all relevant logs to assist Microsoft Support with troubleshooting any StorSimple device issues. |
-> | Microsoft.StorSimple/managers/devices/authorizeForServiceEncryptionKeyRollover/action | Authorize for Service Encryption Key Rollover of Devices |
-> | Microsoft.StorSimple/managers/devices/installUpdates/action | Installs updates on the devices (8000 Series Only). |
-> | Microsoft.StorSimple/managers/devices/listFailoverSets/action | List the failover sets for an existing device (8000 Series Only). |
-> | Microsoft.StorSimple/managers/devices/listFailoverTargets/action | List failover targets of the devices (8000 Series Only). |
-> | Microsoft.StorSimple/managers/devices/publicEncryptionKey/action | List public encryption key of the device manager |
-> | Microsoft.StorSimple/managers/devices/alertSettings/read | Lists or gets the Alert Settings |
-> | Microsoft.StorSimple/managers/devices/alertSettings/write | Create or update the Alert Settings |
-> | Microsoft.StorSimple/managers/devices/alertSettings/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/write | Creates a new or updates Backup Polices (8000 Series Only) |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/read | List the Backup Polices (8000 Series Only) |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/delete | Deletes an existing Backup Polices (8000 Series Only) |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/backup/action | Take a manual backup to create an on-demand backup of all the volumes protected by the policy. |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/schedules/write | Creates a new or updates Schedules |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/schedules/read | List the Schedules |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/schedules/delete | Deletes an existing Schedules |
-> | Microsoft.StorSimple/managers/devices/backupPolicies/schedules/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/backups/read | Lists or gets the Backup Set |
-> | Microsoft.StorSimple/managers/devices/backups/delete | Deletes the Backup Set |
-> | Microsoft.StorSimple/managers/devices/backups/restore/action | Restore all the volumes from a backup set. |
-> | Microsoft.StorSimple/managers/devices/backups/elements/clone/action | Clone a share or volume using a backup element. |
-> | Microsoft.StorSimple/managers/devices/backups/elements/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/backups/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/backupScheduleGroups/read | Lists or gets the Backup Schedule Groups |
-> | Microsoft.StorSimple/managers/devices/backupScheduleGroups/write | Create or update the Backup Schedule Groups |
-> | Microsoft.StorSimple/managers/devices/backupScheduleGroups/delete | Deletes the Backup Schedule Groups |
-> | Microsoft.StorSimple/managers/devices/backupScheduleGroups/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/chapSettings/write | Create or update the Chap Settings |
-> | Microsoft.StorSimple/managers/devices/chapSettings/read | Lists or gets the Chap Settings |
-> | Microsoft.StorSimple/managers/devices/chapSettings/delete | Deletes the Chap Settings |
-> | Microsoft.StorSimple/managers/devices/chapSettings/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/disks/read | Lists or gets the Disks |
-> | Microsoft.StorSimple/managers/devices/failover/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/failoverTargets/read | Lists or gets the Failover targets of the devices |
-> | Microsoft.StorSimple/managers/devices/fileservers/read | Lists or gets the File Servers |
-> | Microsoft.StorSimple/managers/devices/fileservers/write | Create or update the File Servers |
-> | Microsoft.StorSimple/managers/devices/fileservers/delete | Deletes the File Servers |
-> | Microsoft.StorSimple/managers/devices/fileservers/backup/action | Take backup of an File Server. |
-> | Microsoft.StorSimple/managers/devices/fileservers/metrics/read | Lists or gets the Metrics |
-> | Microsoft.StorSimple/managers/devices/fileservers/metricsDefinitions/read | Lists or gets the Metrics Definitions |
-> | Microsoft.StorSimple/managers/devices/fileservers/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/fileservers/shares/write | Create or update the Shares |
-> | Microsoft.StorSimple/managers/devices/fileservers/shares/read | Lists or gets the Shares |
-> | Microsoft.StorSimple/managers/devices/fileservers/shares/delete | Deletes the Shares |
-> | Microsoft.StorSimple/managers/devices/fileservers/shares/metrics/read | Lists or gets the Metrics |
-> | Microsoft.StorSimple/managers/devices/fileservers/shares/metricsDefinitions/read | Lists or gets the Metrics Definitions |
-> | Microsoft.StorSimple/managers/devices/fileservers/shares/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/hardwareComponentGroups/read | List the Hardware Component Groups |
-> | Microsoft.StorSimple/managers/devices/hardwareComponentGroups/changeControllerPowerState/action | Change controller power state of hardware component groups |
-> | Microsoft.StorSimple/managers/devices/hardwareComponentGroups/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/read | Lists or gets the iSCSI Servers |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/write | Create or update the iSCSI Servers |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/delete | Deletes the iSCSI Servers |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/backup/action | Take backup of an iSCSI server. |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/disks/read | Lists or gets the Disks |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/disks/write | Create or update the Disks |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/disks/delete | Deletes the Disks |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/disks/metrics/read | Lists or gets the Metrics |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/disks/metricsDefinitions/read | Lists or gets the Metrics Definitions |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/disks/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/metrics/read | Lists or gets the Metrics |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/metricsDefinitions/read | Lists or gets the Metrics Definitions |
-> | Microsoft.StorSimple/managers/devices/iscsiservers/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/jobs/read | Lists or gets the Jobs |
-> | Microsoft.StorSimple/managers/devices/jobs/cancel/action | Cancel a running job |
-> | Microsoft.StorSimple/managers/devices/jobs/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/metrics/read | Lists or gets the Metrics |
-> | Microsoft.StorSimple/managers/devices/metricsDefinitions/read | Lists or gets the Metrics Definitions |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/import/action | Import source configurations for migration |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/startMigrationEstimate/action | Start a job to estimate the duration of the migration process. |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/startMigration/action | Start migration using source configurations |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/confirmMigration/action | Confirms a successful migration and commits it. |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/fetchMigrationEstimate/action | Fetch the status for the migration estimation job. |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/fetchMigrationStatus/action | Fetch the status for the migration. |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/fetchConfirmMigrationStatus/action | Fetch the confirm status of migration. |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/confirmMigrationStatus/read | List the Confirm Migration Status |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/migrationEstimate/read | List the Migration Estimate |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/migrationStatus/read | List the Migration Status |
-> | Microsoft.StorSimple/managers/devices/migrationSourceConfigurations/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/networkSettings/read | Lists or gets the Network Settings |
-> | Microsoft.StorSimple/managers/devices/networkSettings/write | Creates a new or updates Network Settings |
-> | Microsoft.StorSimple/managers/devices/networkSettings/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/securitySettings/update/action | Update the security settings. |
-> | Microsoft.StorSimple/managers/devices/securitySettings/read | List the Security Settings |
-> | Microsoft.StorSimple/managers/devices/securitySettings/syncRemoteManagementCertificate/action | Synchronize the remote management certificate for a device. |
-> | Microsoft.StorSimple/managers/devices/securitySettings/write | Creates a new or updates Security Settings |
-> | Microsoft.StorSimple/managers/devices/securitySettings/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/shares/read | Lists or gets the Shares |
-> | Microsoft.StorSimple/managers/devices/timeSettings/read | Lists or gets the Time Settings |
-> | Microsoft.StorSimple/managers/devices/timeSettings/write | Creates a new or updates Time Settings |
-> | Microsoft.StorSimple/managers/devices/timeSettings/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/updates/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/devices/updateSummary/read | Lists or gets the Update Summary |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/write | Creates a new or updates Volume Containers (8000 Series Only) |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/read | List the Volume Containers (8000 Series Only) |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/delete | Deletes existing Volume Containers (8000 Series Only) |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/metrics/read | List the Metrics |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/metricsDefinitions/read | List the Metrics Definitions |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/volumes/read | List the Volumes |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/volumes/write | Creates a new or updates Volumes |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/volumes/delete | Deletes existing Volumes |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/volumes/metrics/read | List the Metrics |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/volumes/metricsDefinitions/read | List the Metrics Definitions |
-> | Microsoft.StorSimple/managers/devices/volumeContainers/volumes/operationResults/read | List the Operation Results |
-> | Microsoft.StorSimple/managers/devices/volumes/read | List the Volumes |
-> | Microsoft.StorSimple/managers/encryptionSettings/read | Lists or gets the Encryption Settings |
-> | Microsoft.StorSimple/managers/extendedInformation/read | Lists or gets the Extended Vault Information |
-> | Microsoft.StorSimple/managers/extendedInformation/write | Create or update the Extended Vault Information |
-> | Microsoft.StorSimple/managers/extendedInformation/delete | Deletes the Extended Vault Information |
> | Microsoft.StorSimple/Managers/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
> | Microsoft.StorSimple/Managers/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
> | Microsoft.StorSimple/Managers/extendedInformation/delete | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
-> | Microsoft.StorSimple/managers/features/read | List the Features |
-> | Microsoft.StorSimple/managers/fileservers/read | Lists or gets the File Servers |
-> | Microsoft.StorSimple/managers/iscsiservers/read | Lists or gets the iSCSI Servers |
-> | Microsoft.StorSimple/managers/jobs/read | Lists or gets the Jobs |
-> | Microsoft.StorSimple/managers/metrics/read | Lists or gets the Metrics |
-> | Microsoft.StorSimple/managers/metricsDefinitions/read | Lists or gets the Metrics Definitions |
-> | Microsoft.StorSimple/managers/migrationSourceConfigurations/read | List the Migration Source Configurations (8000 Series Only) |
-> | Microsoft.StorSimple/managers/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/storageAccountCredentials/write | Create or update the Storage Account Credentials |
-> | Microsoft.StorSimple/managers/storageAccountCredentials/read | Lists or gets the Storage Account Credentials |
-> | Microsoft.StorSimple/managers/storageAccountCredentials/delete | Deletes the Storage Account Credentials |
-> | Microsoft.StorSimple/managers/storageAccountCredentials/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/managers/storageDomains/read | Lists or gets the Storage Domains |
-> | Microsoft.StorSimple/managers/storageDomains/write | Create or update the Storage Domains |
-> | Microsoft.StorSimple/managers/storageDomains/delete | Deletes the Storage Domains |
-> | Microsoft.StorSimple/managers/storageDomains/operationResults/read | Lists or gets the Operation Results |
-> | Microsoft.StorSimple/operations/read | Lists or gets the Operations |
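Operation strings like these are what goes into the `Actions` list of an Azure custom role definition. As a minimal sketch, assuming a hypothetical role name and a placeholder subscription scope (neither comes from this article), the following Python assembles a read-only monitoring role from a few of the StorSimple operations listed above:

```python
import json

# Minimal sketch of a custom role built from operation strings in the
# table above. The role name and subscription ID are placeholders.
role_definition = {
    "Name": "StorSimple Monitoring Reader (example)",
    "Description": "Read StorSimple device metrics, jobs, and update status.",
    "Actions": [
        "Microsoft.StorSimple/managers/devices/metrics/read",
        "Microsoft.StorSimple/managers/devices/metricsDefinitions/read",
        "Microsoft.StorSimple/managers/devices/jobs/read",
        "Microsoft.StorSimple/managers/devices/updateSummary/read",
        "Microsoft.StorSimple/operations/read",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"],
}

# Saved to a file, this payload can be handed to
# `az role definition create --role-definition @storsimple-reader.json`.
with open("storsimple-reader.json", "w") as f:
    json.dump(role_definition, f, indent=2)
```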
## Web
Azure service: [Azure Spring Apps](../spring-apps/index.yml)
> | Microsoft.AppPlatform/locations/operationResults/Spring/read | Read resource operation result |
> | Microsoft.AppPlatform/locations/operationStatus/operationId/read | Read resource operation status |
> | Microsoft.AppPlatform/operations/read | List available operations of Microsoft Azure Spring Apps |
+> | Microsoft.AppPlatform/runtimeVersions/read | Get runtime versions of Microsoft Azure Spring Apps |
> | Microsoft.AppPlatform/skus/read | List available skus of Microsoft Azure Spring Apps |
> | Microsoft.AppPlatform/Spring/write | Create or Update a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/delete | Delete a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/apps/deployments/connectorProps/read | Get the service connectors for a specific application |
> | Microsoft.AppPlatform/Spring/apps/deployments/connectorProps/write | Create or update the service connector for a specific application |
> | Microsoft.AppPlatform/Spring/apps/deployments/connectorProps/delete | Delete the service connector for a specific application |
+> | Microsoft.AppPlatform/Spring/apps/deployments/operationResults/read | Read resource operation result |
+> | Microsoft.AppPlatform/Spring/apps/deployments/operationStatuses/read | Read resource operation Status |
> | Microsoft.AppPlatform/Spring/apps/deployments/skus/read | List available skus of an application deployment |
> | Microsoft.AppPlatform/Spring/apps/domains/write | Create or update the custom domain for a specific application |
> | Microsoft.AppPlatform/Spring/apps/domains/delete | Delete the custom domain for a specific application |
> | Microsoft.AppPlatform/Spring/apps/domains/read | Get the custom domains for a specific application |
+> | Microsoft.AppPlatform/Spring/apps/operationResults/read | Read resource operation result |
+> | Microsoft.AppPlatform/Spring/apps/operationStatuses/read | Read resource operation Status |
> | Microsoft.AppPlatform/Spring/buildpackBindings/read | Get the BuildpackBinding for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/read | Get the Build Services for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/getResourceUploadUrl/action | Get the Upload URL of a specific Microsoft Azure Spring Apps build |
+> | Microsoft.AppPlatform/Spring/buildServices/write | Create or Update the Build Services for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/agentPools/read | Get the Agent Pools for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/agentPools/write | Create or update the Agent Pools for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/builders/read | Get the Builders for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/builders/write | Create or update the Builders for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/builders/delete | Delete the Builders for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/listUsingDeployments/action | List deployments using the Builders for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/read | Get the BuildpackBinding for a specific Azure Spring Apps service instance Builder |
> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/write | Create or update the BuildpackBinding for a specific Azure Spring Apps service instance Builder |
> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/delete | Delete the BuildpackBinding for a specific Azure Spring Apps service instance Builder |
> | Microsoft.AppPlatform/Spring/buildServices/builds/read | Get the Builds for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/builds/write | Create or update the Builds for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/delete | Delete the Builds for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/builds/results/read | Get the Build Results for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/buildServices/builds/results/getLogFileUrl/action | Get the Log File URL of a specific Microsoft Azure Spring Apps build result |
> | Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks/read | Get the Supported Buildpacks for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/certificates/read | Get the certificates for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/configServers/read | Get the config server for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/configServers/write | Create or update the config server for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configServers/operationResults/read | Read resource operation result |
+> | Microsoft.AppPlatform/Spring/configServers/operationStatuses/read | Read resource operation Status |
> | Microsoft.AppPlatform/Spring/configurationServices/read | Get the Application Configuration Services for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/configurationServices/write | Create or update the Application Configuration Service for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/configurationServices/delete | Delete the Application Configuration Service for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/configurationServices/validate/action | Validate the settings for a specific Application Configuration Service |
+> | Microsoft.AppPlatform/Spring/containerRegistries/read | Get the Container Registry for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/containerRegistries/write | Create or update the Container Registry for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/deployments/read | Get the deployments for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/detectors/read | Get the detectors for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/devToolPortals/read | Get the Dev Tool Portal for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/gateways/delete | Delete the Spring Cloud Gateway for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/gateways/validateDomain/action | Validate the Spring Cloud Gateway domain for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/gateways/listEnvSecrets/action | List environment variables secret of the Spring Cloud Gateway for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/restart/action | Restart the Spring Cloud Gateway for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/gateways/domains/read | Get the Spring Cloud Gateways domain for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/gateways/domains/write | Create or update the Spring Cloud Gateway domain for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/gateways/domains/delete | Delete the Spring Cloud Gateway domain for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/delete | Delete the Spring Cloud Gateway route config for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/monitoringSettings/read | Get the monitoring setting for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/monitoringSettings/write | Create or update the monitoring setting for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/operationResults/read | Read resource operation result |
+> | Microsoft.AppPlatform/Spring/operationStatuses/read | Read resource operation Status |
> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/diagnosticSettings/write | Create or update the diagnostic settings for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/logDefinitions/read | Get definitions of logs from Azure Spring Apps service instance |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/customhostnameSites/Read | Get info about custom hostnames under subscription. |
> | Microsoft.Web/deletedSites/Read | Get the properties of a Deleted Web App |
> | microsoft.web/deploymentlocations/read | Get Deployment Locations. |
+> | Microsoft.Web/freeTrialStaticWebApps/write | Creates or updates a free trial static web app. |
+> | Microsoft.Web/freeTrialStaticWebApps/upgrade/action | Upgrades a free trial static web app. |
+> | Microsoft.Web/freeTrialStaticWebApps/read | Lists free trial static web apps. |
+> | Microsoft.Web/freeTrialStaticWebApps/delete | Deletes a free trial static web app. |
> | microsoft.web/functionappstacks/read | Get Function App Stacks. |
> | Microsoft.Web/geoRegions/Read | Get the list of Geo regions. |
> | Microsoft.Web/hostingEnvironments/Read | Get the properties of an App Service Environment |
Azure service: [Azure Kubernetes Service (AKS)](../aks/index.yml)
> | Microsoft.ContainerService/managedClusters/delete | Deletes a managed cluster |
> | Microsoft.ContainerService/managedClusters/start/action | Starts a managed cluster |
> | Microsoft.ContainerService/managedClusters/stop/action | Stops a managed cluster |
-> | Microsoft.ContainerService/managedClusters/abort/action | Abort latest operation in managed cluster |
+> | Microsoft.ContainerService/managedClusters/abort/action | Latest ongoing operation on managed cluster gets aborted |
> | Microsoft.ContainerService/managedClusters/listClusterAdminCredential/action | List the clusterAdmin credential of a managed cluster |
> | Microsoft.ContainerService/managedClusters/listClusterUserCredential/action | List the clusterUser credential of a managed cluster |
> | Microsoft.ContainerService/managedClusters/listClusterMonitoringUserCredential/action | List the clusterMonitoringUser credential of a managed cluster |
> | Microsoft.ContainerService/managedClusters/agentPools/read | Gets an agent pool |
> | Microsoft.ContainerService/managedClusters/agentPools/write | Creates a new agent pool or updates an existing one |
> | Microsoft.ContainerService/managedClusters/agentPools/delete | Deletes an agent pool |
-> | Microsoft.ContainerService/managedClusters/agentPools/abort/action | Abort latest operation in agent pool |
+> | Microsoft.ContainerService/managedClusters/agentPools/abort/action | Latest ongoing operation on agent pool gets aborted |
> | Microsoft.ContainerService/managedClusters/agentPools/upgradeNodeImageVersion/write | Upgrade the node image version of agent pool |
> | Microsoft.ContainerService/managedClusters/agentPools/upgradeProfiles/read | Gets the upgrade profile of the Agent Pool |
> | Microsoft.ContainerService/managedClusters/availableAgentPoolVersions/read | Gets the available agent pool versions of the cluster |
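Rather than enumerating rows like these one by one, a custom role can use a wildcard action that covers a whole subtree of operations. The sketch below only approximates Azure RBAC's wildcard matching with `fnmatch`, and the granted pattern is a hypothetical example, not something from this article:

```python
from fnmatch import fnmatch

# Hypothetical wildcard grant covering every agent pool sub-operation.
granted = "Microsoft.ContainerService/managedClusters/agentPools/*"

# Concrete operations taken from the table above.
operations = [
    "Microsoft.ContainerService/managedClusters/start/action",
    "Microsoft.ContainerService/managedClusters/agentPools/read",
    "Microsoft.ContainerService/managedClusters/agentPools/write",
    "Microsoft.ContainerService/managedClusters/agentPools/abort/action",
]

for op in operations:
    # fnmatch's `*` crosses `/` boundaries, which mirrors the way a broad
    # wildcard action covers every operation nested beneath it.
    print(f"{op}: {'granted' if fnmatch(op, granted) else 'not granted'}")
```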
Azure service: [Azure Cache for Redis](../azure-cache-for-redis/index.yml)
> | Microsoft.Cache/redis/stop/action | Stop an Azure Cache for Redis, potentially with data loss. |
> | Microsoft.Cache/redis/start/action | Start an Azure Cache for Redis |
> | Microsoft.Cache/redis/PrivateEndpointConnectionsApproval/action | Approve Private Endpoint Connections |
+> | Microsoft.Cache/redis/accessPolicies/read | Operation Read Redis Access Policies |
+> | Microsoft.Cache/redis/accessPolicies/write | Operation Write Redis Access Policies |
+> | Microsoft.Cache/redis/accessPolicies/delete | Operation Delete Redis Access Policies Long |
+> | Microsoft.Cache/redis/accessPolicyAssignments/read | Operation Read Redis Access Policy Assignments Long |
+> | Microsoft.Cache/redis/accessPolicyAssignments/write | Operation Write Redis Access Policy Assignments Long |
+> | Microsoft.Cache/redis/accessPolicyAssignments/delete | Operation Delete Access Policy Assignments Long |
> | Microsoft.Cache/redis/detectors/read | Get the properties of one or all detectors for an Azure Cache for Redis cache |
> | Microsoft.Cache/redis/eventGridFilters/read | Get Redis Cache Event Grid Filter |
> | Microsoft.Cache/redis/eventGridFilters/write | Update Redis Cache Event Grid Filters |
Azure service: [Azure Database for MySQL](../mysql/index.yml)
> | Microsoft.DBforMySQL/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection |
> | Microsoft.DBforMySQL/register/action | Register MySQL Resource Provider |
> | Microsoft.DBforMySQL/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. |
-> | Microsoft.DBforMySQL/flexibleServers/backupAndExport/action | Creates a server backup for long term with specific backup name and export it. |
-> | Microsoft.DBforMySQL/flexibleServers/validateBackup/action | Validate that the server is ready for backup. |
+> | Microsoft.DBforMySQL/flexibleServers/resetGtid/action | |
> | Microsoft.DBforMySQL/flexibleServers/read | Returns the list of servers or gets the properties for the specified server. |
> | Microsoft.DBforMySQL/flexibleServers/write | Creates a server with the specified parameters or updates the properties or tags for the specified server. |
> | Microsoft.DBforMySQL/flexibleServers/delete | Deletes an existing server. |
+> | Microsoft.DBforMySQL/flexibleServers/checkServerVersionUpgradeAvailability/action | |
+> | Microsoft.DBforMySQL/flexibleServers/backupAndExport/action | Creates a server backup for long term with specific backup name and export it. |
+> | Microsoft.DBforMySQL/flexibleServers/validateBackup/action | Validate that the server is ready for backup. |
> | Microsoft.DBforMySQL/flexibleServers/checkHaReplica/action | |
> | Microsoft.DBforMySQL/flexibleServers/updateConfigurations/action | Updates configurations for the specified server. |
> | Microsoft.DBforMySQL/flexibleServers/cutoverMigration/action | Performs a migration cutover with the specified parameters. |
> | Microsoft.DBforMySQL/flexibleServers/privateEndpointConnections/read | Returns the list of private endpoint connections or gets the properties for the specified private endpoint connection. |
> | Microsoft.DBforMySQL/flexibleServers/privateEndpointConnections/read | |
> | Microsoft.DBforMySQL/flexibleServers/privateEndpointConnections/delete | Deletes an existing private endpoint connection |
+> | Microsoft.DBforMySQL/flexibleServers/privateEndpointConnections/write | Approves or rejects an existing private endpoint connection |
> | Microsoft.DBforMySQL/flexibleServers/privateLinkResources/read | |
> | Microsoft.DBforMySQL/flexibleServers/privateLinkResources/read | Get the private link resources for the corresponding MySQL Server |
> | Microsoft.DBforMySQL/flexibleServers/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the resource |
> | Microsoft.DBforMySQL/flexibleServers/replicas/read | Returns the list of read replicas for a MySQL server |
> | Microsoft.DBforMySQL/locations/checkVirtualNetworkSubnetUsage/action | Checks the subnet usage for specified delegated virtual network. |
> | Microsoft.DBforMySQL/locations/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. |
+> | Microsoft.DBforMySQL/locations/listMigrations/action | Return the List of MySQL scheduled auto migrations |
> | Microsoft.DBforMySQL/locations/assessForMigration/action | Performs a migration assessment with the specified parameters. |
+> | Microsoft.DBforMySQL/locations/updateMigration/action | Updates the scheduled migration for MySQL Server |
> | Microsoft.DBforMySQL/locations/administratorAzureAsyncOperation/read | Gets in-progress operations on MySQL server administrators |
> | Microsoft.DBforMySQL/locations/administratorOperationResults/read | Return MySQL Server administrator operation results |
> | Microsoft.DBforMySQL/locations/azureAsyncOperation/read | Return MySQL Server Operation Results |
Azure service: [Azure Database for PostgreSQL](../postgresql/index.yml)
> | Microsoft.DBforPostgreSQL/flexibleServers/migrations/read | List of migration workflows for the specified database server. |
> | Microsoft.DBforPostgreSQL/flexibleServers/migrations/write | Update the properties for the specified migration. |
> | Microsoft.DBforPostgreSQL/flexibleServers/migrations/delete | Deletes an existing migration workflow. |
+> | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionProxies/read | Returns the list of private endpoint connection proxies or gets the properties for the specified private endpoint connection proxy. |
+> | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionProxies/delete | Deletes an existing private endpoint connection proxy resource. |
+> | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionProxies/write | Creates a private endpoint connection proxy with the specified parameters or updates the properties or tags for the specified private endpoint connection proxy |
+> | Microsoft.DBforPostgreSQL/flexibleServers/privateEndpointConnectionProxies/validate/action | Validates a private endpoint connection create call from NRP side |
+> | Microsoft.DBforPostgreSQL/flexibleServers/privateLinkResources/read | Return a list containing private link resource or gets the specified private link resource. |
> | Microsoft.DBforPostgreSQL/flexibleServers/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the resource |
> | Microsoft.DBforPostgreSQL/flexibleServers/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource |
> | Microsoft.DBforPostgreSQL/flexibleServers/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for PostgreSQL servers |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/locations/managedInstancePrivateEndpointConnectionProxyOperationResults/read | Gets the result for a private endpoint connection proxy operation |
> | Microsoft.Sql/locations/managedLedgerDigestUploadsAzureAsyncOperation/read | Gets in-progress operations of ledger digest upload settings |
> | Microsoft.Sql/locations/managedLedgerDigestUploadsOperationResults/read | Gets in-progress operations of ledger digest upload settings |
+> | Microsoft.Sql/locations/managedShortTermRetentionPolicyAzureAsyncOperation/read | Gets the status of a short term retention policy operation |
> | Microsoft.Sql/locations/managedShortTermRetentionPolicyOperationResults/read | Gets the status of a short term retention policy operation |
> | Microsoft.Sql/locations/managedTransparentDataEncryptionAzureAsyncOperation/read | Gets in-progress operations on managed database transparent data encryption |
> | Microsoft.Sql/locations/managedTransparentDataEncryptionOperationResults/read | Gets in-progress operations on managed database transparent data encryption |
> | Microsoft.Sql/locations/stopManagedInstanceOperationResults/read | Gets Azure SQL Managed Instance Stop operation result. |
> | Microsoft.Sql/locations/syncAgentOperationResults/read | Retrieve result of the sync agent resource operation |
> | Microsoft.Sql/locations/syncDatabaseIds/read | Retrieve the sync database ids for a particular region and subscription |
+> | Microsoft.Sql/locations/syncGroupAzureAsyncOperation/read | Retrieve result of the sync group resource operation |
> | Microsoft.Sql/locations/syncGroupOperationResults/read | Retrieve result of the sync group resource operation |
> | Microsoft.Sql/locations/syncMemberOperationResults/read | Retrieve result of the sync member resource operation |
> | Microsoft.Sql/locations/timeZones/read | Return the list of managed instance time zones by location. |
Azure service: [Azure Bot Service](/azure/bot-service/)
> | Microsoft.BotService/botServices/write | Write a Bot Service |
> | Microsoft.BotService/botServices/delete | Delete a Bot Service |
> | Microsoft.BotService/botServices/createemailsigninurl/action | Create a sign in url for email channel modern auth |
+> | Microsoft.BotService/botServices/joinPerimeter/action | Description for action of Join Perimeter |
> | Microsoft.BotService/botServices/channels/read | Read a Bot Service Channel |
> | Microsoft.BotService/botServices/channels/write | Write a Bot Service Channel |
> | Microsoft.BotService/botServices/channels/delete | Delete a Bot Service Channel |
> | Microsoft.BotService/botServices/connections/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource |
> | Microsoft.BotService/botServices/connections/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for &lt;Name of the resource&gt; |
> | Microsoft.BotService/botServices/connections/providers/Microsoft.Insights/metricDefinitions/read | Creates or updates the diagnostic setting for the resource |
+> | Microsoft.BotService/botServices/networkSecurityPerimeterAssociationProxies/read | Read a Network Security Perimeter Association Proxies resource |
+> | Microsoft.BotService/botServices/networkSecurityPerimeterAssociationProxies/write | Write a Network Security Perimeter Association Proxies resource |
+> | Microsoft.BotService/botServices/networkSecurityPerimeterAssociationProxies/delete | Delete a Network Security Perimeter Association Proxies resource |
+> | Microsoft.BotService/botServices/networkSecurityPerimeterConfigurations/read | Read a Network Security Perimeter Configurations resource |
+> | Microsoft.BotService/botServices/networkSecurityPerimeterConfigurations/reconcile/action | Reconcile a Network Security Perimeter Configurations resource |
> | Microsoft.BotService/botServices/privateEndpointConnectionProxies/read | Read a connection proxy resource |
> | Microsoft.BotService/botServices/privateEndpointConnectionProxies/write | Write a connection proxy resource |
> | Microsoft.BotService/botServices/privateEndpointConnectionProxies/delete | Delete a connection proxy resource |
> | Microsoft.BotService/listqnamakerendpointkeys/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource |
> | Microsoft.BotService/listqnamakerendpointkeys/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for &lt;Name of the resource&gt; |
> | Microsoft.BotService/listqnamakerendpointkeys/providers/Microsoft.Insights/metricDefinitions/read | Creates or updates the diagnostic setting for the resource |
+> | Microsoft.BotService/locations/notifyNetworkSecurityPerimeterUpdatesAvailable/action | Notify Network Security Perimeter Updates Available |
> | Microsoft.BotService/locations/operationresults/read | Read the status of an asynchronous operation |
> | Microsoft.BotService/operationresults/read | Read the status of an asynchronous operation |
> | Microsoft.BotService/Operations/read | Read the operations for all resource types |
Azure service: [Machine Learning](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/featurestores/write | Creates or Updates the Machine Learning Services FeatureStore(s) |
> | Microsoft.MachineLearningServices/featurestores/delete | Deletes the Machine Learning Services FeatureStore(s) |
> | Microsoft.MachineLearningServices/featurestores/checkNameAvailability/read | Checks the Machine Learning Services FeatureStore name availability |
-> | Microsoft.MachineLearningServices/featurestores/featureentities/read | Gets the Machine Learning Services FeatureEntity(s) |
-> | Microsoft.MachineLearningServices/featurestores/featureentities/write | Creates or Updates the Machine Learning Services FeatureEntity(s) |
-> | Microsoft.MachineLearningServices/featurestores/featureentities/delete | Delete the Machine Learning Services FeatureEntity(s) |
-> | Microsoft.MachineLearningServices/featurestores/featuresets/read | Gets the Machine Learning Services FeatureSet(s) |
-> | Microsoft.MachineLearningServices/featurestores/featuresets/write | Creates or Updates the Machine Learning Services FeatureSet(s) |
-> | Microsoft.MachineLearningServices/featurestores/featuresets/delete | Delete the Machine Learning Services FeatureSet(s) |
> | Microsoft.MachineLearningServices/locations/deleteVirtualNetworkOrSubnets/action | Deleted the references to virtual networks/subnets associated with Machine Learning Service Workspaces. |
> | Microsoft.MachineLearningServices/locations/updateQuotas/action | Update quota for each VM family at a subscription or a workspace level. |
> | Microsoft.MachineLearningServices/locations/computeoperationsstatus/read | Gets the status of a particular compute operation |
> | Microsoft.MachineLearningServices/workspaces/resynckeys/action | Resync secrets for a Machine Learning Services Workspace |
> | Microsoft.MachineLearningServices/workspaces/listStorageAccountKeys/action | List Storage Account keys for a Machine Learning Services Workspace |
> | Microsoft.MachineLearningServices/workspaces/privateEndpointConnectionsApproval/action | Approve or reject a connection to a Private Endpoint resource of Microsoft.Network provider |
+> | Microsoft.MachineLearningServices/workspaces/featuresets/action | Allows action on the Machine Learning Services FeatureSet(s) |
+> | Microsoft.MachineLearningServices/workspaces/featurestoreentities/action | Allows action on the Machine Learning Services FeatureEntity(s) |
> | Microsoft.MachineLearningServices/workspaces/assets/stage/write | Updates the stage on a Machine Learning Services workspace asset |
> | Microsoft.MachineLearningServices/workspaces/batchEndpoints/read | Gets batch inference endpoints in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/batchEndpoints/write | Creates or updates batch inference endpoint in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/experiments/runs/write | Creates or updates runs in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/experiments/runs/delete | Deletes runs in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/features/read | Gets all enabled features for a Machine Learning Services Workspace |
+> | Microsoft.MachineLearningServices/workspaces/featuresets/read | Gets the Machine Learning Services FeatureSet(s) |
+> | Microsoft.MachineLearningServices/workspaces/featuresets/write | Creates or Updates the Machine Learning Services FeatureSet(s) |
+> | Microsoft.MachineLearningServices/workspaces/featuresets/delete | Delete the Machine Learning Services FeatureSet(s) |
+> | Microsoft.MachineLearningServices/workspaces/featurestoreentities/read | Gets the Machine Learning Services FeatureEntity(s) |
+> | Microsoft.MachineLearningServices/workspaces/featurestoreentities/write | Creates or Updates the Machine Learning Services FeatureEntity(s) |
+> | Microsoft.MachineLearningServices/workspaces/featurestoreentities/delete | Delete the Machine Learning Services FeatureEntity(s) |
> | Microsoft.MachineLearningServices/workspaces/jobs/read | Reads Jobs in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/jobs/write | Create or Update Jobs in Machine Learning Services Workspace(s) |
> | Microsoft.MachineLearningServices/workspaces/jobs/delete | Deletes Jobs in Machine Learning Services Workspace(s) |
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/applynetworkconfigurationupdates/action | Updates the Microsoft.ApiManagement resources running in Virtual Network to pick updated Network Settings. |
> | Microsoft.ApiManagement/service/users/action | Register a new user |
> | Microsoft.ApiManagement/service/notifications/action | Sends notification to a specified user |
+> | Microsoft.ApiManagement/service/validatePolicies/action | Validates Tenant Policy Restrictions |
> | Microsoft.ApiManagement/service/apis/read | Lists all APIs of the API Management service instance. or Gets the details of the API specified by its identifier. |
> | Microsoft.ApiManagement/service/apis/write | Creates new or updates existing specified API of the API Management service instance. or Updates the specified API of the API Management service instance. |
> | Microsoft.ApiManagement/service/apis/delete | Deletes the specified API of the API Management service instance. |
> | Microsoft.ApiManagement/service/diagnostics/read | Lists all diagnostics of the API Management service instance. or Gets the details of the Diagnostic specified by its identifier. |
> | Microsoft.ApiManagement/service/diagnostics/write | Creates a new Diagnostic or updates an existing one. or Updates the details of the Diagnostic specified by its identifier. |
> | Microsoft.ApiManagement/service/diagnostics/delete | Deletes the specified Diagnostic. |
+> | Microsoft.ApiManagement/service/documentations/read | Lists all Documentations of the API Management service instance. or Gets the details of the documentation specified by its identifier. |
+> | Microsoft.ApiManagement/service/documentations/write | Creates or Updates a documentation. or Updates the specified documentation of the API Management service instance. |
+> | Microsoft.ApiManagement/service/documentations/delete | Delete documentation. |
> | Microsoft.ApiManagement/service/eventGridFilters/write | Set Event Grid Filters |
> | Microsoft.ApiManagement/service/eventGridFilters/delete | Delete Event Grid Filters |
> | Microsoft.ApiManagement/service/eventGridFilters/read | Get Event Grid Filter |
> | Microsoft.ApiManagement/service/gateways/regenerateKey/action | Regenerates specified gateway key invalidating any tokens created with it. |
> | Microsoft.ApiManagement/service/gateways/generateToken/action | Gets the Shared Access Authorization Token for the gateway. |
> | Microsoft.ApiManagement/service/gateways/token/action | Gets the Shared Access Authorization Token for the gateway. |
-> | Microsoft.ApiManagement/service/gateways/resetDebugCredentials/action | Forces gateway to reset all issued debug credentials |
+> | Microsoft.ApiManagement/service/gateways/invalidateDebugCredentials/action | Forces gateway to reset all issued debug credentials |
> | Microsoft.ApiManagement/service/gateways/getDebugCredentials/action | Issue debug credentials for requests |
> | Microsoft.ApiManagement/service/gateways/apis/read | Lists a collection of the APIs associated with a gateway. |
> | Microsoft.ApiManagement/service/gateways/apis/write | Adds an API to the specified Gateway. |
> | Microsoft.ApiManagement/service/policyFragments/write | Creates or updates a policy fragment. |
> | Microsoft.ApiManagement/service/policyFragments/delete | Deletes a policy fragment. |
> | Microsoft.ApiManagement/service/policyFragments/listReferences/action | Lists policy resources that reference the policy fragment. |
+> | Microsoft.ApiManagement/service/policyRestrictions/read | Lists all the Global Policy Restrictions of the Api Management service. or Get the Global policy restriction of the Api Management service. |
+> | Microsoft.ApiManagement/service/policyRestrictions/write | Creates or updates the global policy restriction of the Api Management service. or Updates the global policy restriction of the Api Management service. |
+> | Microsoft.ApiManagement/service/policyRestrictions/delete | Deletes the global policy restriction of the Api Management Service. |
> | Microsoft.ApiManagement/service/policySnippets/read | Lists all policy snippets. |
> | Microsoft.ApiManagement/service/portalConfigs/read | Lists a collection of developer portal config entities. or Gets developer portal config specified by its identifier. |
> | Microsoft.ApiManagement/service/portalConfigs/write | Creates a new developer portal config. or Updates the description of specified portal config or makes it current. |
> | Microsoft.ApiManagement/service/workspaces/apiVersionSets/write | Creates or Updates an Api Version Set. or Updates the details of the Api VersionSet specified by its identifier. |
> | Microsoft.ApiManagement/service/workspaces/apiVersionSets/delete | Deletes specific Api Version Set. |
> | Microsoft.ApiManagement/service/workspaces/apiVersionSets/versions/read | Get list of version entities |
+> | Microsoft.ApiManagement/service/workspaces/documentations/read | Lists all Documentations of the API Management service instance. or Gets the details of the documentation specified by its identifier. |
+> | Microsoft.ApiManagement/service/workspaces/documentations/write | Creates or Updates a documentation. or Updates the specified documentation of the API Management service instance. |
+> | Microsoft.ApiManagement/service/workspaces/documentations/delete | Delete documentation. |
> | Microsoft.ApiManagement/service/workspaces/groups/read | Lists a collection of groups defined within a service instance. or Gets the details of the group specified by its identifier. |
> | Microsoft.ApiManagement/service/workspaces/groups/write | Creates or Updates a group. or Updates the details of the group specified by its identifier. |
> | Microsoft.ApiManagement/service/workspaces/groups/delete | Deletes specific group of the API Management service instance. |
Azure service: [Azure Stack HCI](/azure-stack/hci/)
> | Microsoft.AzureStackHCI/VirtualNetworks/Read | Gets/Lists virtual networks resource |
> | **DataAction** | **Description** |
> | Microsoft.AzureStackHCI/Clusters/WACloginAsAdmin/Action | Manage OS of HCI resource via Windows Admin Center as an administrator |
-> | Microsoft.AzureStackHCI/virtualMachines/WACloginAsAdmin/Action | Manage ARC enabled VM resources on HCI via Windows Admin Center as an administrator |
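Note the **DataAction** marker in the table above: the `WACloginAsAdmin` operations are data-plane permissions, so in a custom role they belong under `DataActions` rather than `Actions`. A minimal sketch, assuming a hypothetical role name and a placeholder scope:

```python
import json

# Control-plane operations go under "Actions"; rows after the
# **DataAction** marker above go under "DataActions" instead.
role_definition = {
    "Name": "HCI Windows Admin Center Operator (example)",  # placeholder name
    "Actions": ["Microsoft.AzureStackHCI/VirtualNetworks/Read"],
    "DataActions": ["Microsoft.AzureStackHCI/Clusters/WACloginAsAdmin/Action"],
    "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"],
}
print(json.dumps(role_definition, indent=2))
```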
### Microsoft.DataBoxEdge
Azure service: [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/addons/write | Creates or updates the addons |
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/addons/delete | Deletes the addons |
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/addons/operationResults/read | Lists or gets the operation result |
+> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/migrate/operationResults/read | Lists or gets the operation result |
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/monitoringConfig/write | Creates or updates the monitoring configuration |
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/monitoringConfig/delete | Deletes the monitoring configuration |
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/roles/monitoringConfig/read | Lists or gets the monitoring configuration |
Azure service: [Security Center](../security-center/index.yml)
> | Microsoft.Security/automations/write | Creates or updates the automation for the scope |
> | Microsoft.Security/automations/delete | Deletes the automation for the scope |
> | Microsoft.Security/automations/validate/action | Validates the automation model for the scope |
-> | Microsoft.Security/automations/read | Gets the defenderforstoragesettings for the scope |
-> | Microsoft.Security/automations/write | Creates or updates the defenderforstoragesettings for the scope |
-> | Microsoft.Security/automations/delete | Deletes the defenderforstoragesettings for the scope |
-> | Microsoft.Security/automations/read | Gets the datascanners for the scope |
-> | Microsoft.Security/automations/write | Creates or updates the datascanners for the scope |
-> | Microsoft.Security/automations/delete | Deletes the datascanners for the scope |
> | Microsoft.Security/autoProvisioningSettings/read | Get security auto provisioning setting for the subscription |
> | Microsoft.Security/autoProvisioningSettings/write | Create or update security auto provisioning setting for the subscription |
> | Microsoft.Security/complianceResults/read | Gets the compliance results for the resource |
+> | Microsoft.Security/datascanners/read | Gets the datascanners for the scope |
+> | Microsoft.Security/datascanners/write | Creates or updates the datascanners for the scope |
+> | Microsoft.Security/datascanners/delete | Deletes the datascanners for the scope |
+> | Microsoft.Security/defenderforstoragesettings/read | Gets the defenderforstoragesettings for the scope |
+> | Microsoft.Security/defenderforstoragesettings/write | Creates or updates the defenderforstoragesettings for the scope |
+> | Microsoft.Security/defenderforstoragesettings/delete | Deletes the defenderforstoragesettings for the scope |
> | Microsoft.Security/deviceSecurityGroups/write | Creates or updates IoT device security groups |
> | Microsoft.Security/deviceSecurityGroups/delete | Deletes IoT device security groups |
> | Microsoft.Security/deviceSecurityGroups/read | Gets IoT device security groups |
> | Microsoft.Security/pricings/read | Gets the pricing settings for the scope |
> | Microsoft.Security/pricings/write | Updates the pricing settings for the scope |
> | Microsoft.Security/pricings/delete | Deletes the pricing settings for the scope |
+> | Microsoft.Security/pricings/securityoperators/read | Gets the security operators for the scope |
+> | Microsoft.Security/pricings/securityoperators/write | Updates the security operators for the scope |
+> | Microsoft.Security/pricings/securityoperators/delete | Deletes the security operators for the scope |
> | Microsoft.Security/secureScoreControlDefinitions/read | Get secure score control definition |
> | Microsoft.Security/secureScoreControls/read | Get calculated secure score control for your subscription |
> | Microsoft.Security/secureScores/read | Get calculated secure score for your subscription |
> | Microsoft.Security/securitySolutionsReferenceData/read | Gets the security solutions reference data |
> | Microsoft.Security/securityStatuses/read | Gets the security health statuses for Azure resources |
> | Microsoft.Security/securityStatusesSummaries/read | Gets the security statuses summaries for the scope |
+> | Microsoft.Security/sensitivitySettings/read | Gets tenant level sensitivity settings |
+> | Microsoft.Security/sensitivitySettings/write | Updates tenant level sensitivity settings |
> | Microsoft.Security/serverVulnerabilityAssessments/read | Get server vulnerability assessments onboarding status on a given resource |
> | Microsoft.Security/serverVulnerabilityAssessments/write | Create or update a server vulnerability assessments solution on resource |
> | Microsoft.Security/serverVulnerabilityAssessments/delete | Remove a server vulnerability assessments solution from a resource |
Azure service: [Microsoft Sentinel](../sentinel/index.yml)
> | Microsoft.SecurityInsights/fileimports/read | Reads File Import objects |
> | Microsoft.SecurityInsights/fileimports/write | Creates or updates a File Import |
> | Microsoft.SecurityInsights/fileimports/delete | Deletes a File Import |
+> | Microsoft.SecurityInsights/hunts/read | Get Hunts |
+> | Microsoft.SecurityInsights/hunts/write | Create Hunts |
+> | Microsoft.SecurityInsights/hunts/delete | Deletes Hunts |
+> | Microsoft.SecurityInsights/hunts/comments/read | Get Hunt Comments |
+> | Microsoft.SecurityInsights/hunts/comments/write | Create Hunt Comments |
+> | Microsoft.SecurityInsights/hunts/comments/delete | Deletes Hunt Comments |
+> | Microsoft.SecurityInsights/hunts/relations/read | Get Hunt Relations |
+> | Microsoft.SecurityInsights/hunts/relations/write | Create Hunt Relations |
+> | Microsoft.SecurityInsights/hunts/relations/delete | Deletes Hunt Relations |
> | Microsoft.SecurityInsights/incidents/read | Gets an incident |
> | Microsoft.SecurityInsights/incidents/write | Updates an incident |
> | Microsoft.SecurityInsights/incidents/delete | Deletes an incident |
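A custom role can also pair a broad grant with `NotActions` to withhold specific operations, such as the hunt delete permissions listed above. A minimal sketch with a hypothetical role name and placeholder scope:

```python
import json

# NotActions subtract from the wildcard grant: hunts, comments, and
# relations can be created and read, but none of them can be deleted.
role_definition = {
    "Name": "Sentinel Hunt Editor (example)",  # placeholder name
    "Actions": ["Microsoft.SecurityInsights/hunts/*"],
    "NotActions": [
        "Microsoft.SecurityInsights/hunts/delete",
        "Microsoft.SecurityInsights/hunts/comments/delete",
        "Microsoft.SecurityInsights/hunts/relations/delete",
    ],
    "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"],
}
print(json.dumps(role_definition, indent=2))
```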
Azure service: [Microsoft Defender for Cloud](../defender-for-cloud/index.yml)
> | Microsoft.SecurityDevOps/register/action | Register the subscription for Microsoft.SecurityDevOps |
> | Microsoft.SecurityDevOps/unregister/action | Unregister the subscription for Microsoft.SecurityDevOps |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/read | read azureDevOpsConnectors |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/write | write azureDevOpsConnectors |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/delete | delete azureDevOpsConnectors |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/orgs/read | read orgs |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/orgs/write | write orgs |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/orgs/projects/read | read projects |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/orgs/projects/write | write projects |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/orgs/projects/repos/read | read repos |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/orgs/projects/repos/write | write repos |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/repos/read | read repos |
> | Microsoft.SecurityDevOps/azureDevOpsConnectors/stats/read | read stats |
> | Microsoft.SecurityDevOps/gitHubConnectors/read | read gitHubConnectors |
> | Microsoft.SecurityDevOps/gitHubConnectors/write | write gitHubConnectors |
> | Microsoft.SecurityDevOps/gitHubConnectors/delete | delete gitHubConnectors |
> | Microsoft.SecurityDevOps/gitHubConnectors/gitHubRepos/read | Returns a list of monitored GitHub repositories. |
+> | Microsoft.SecurityDevOps/gitHubConnectors/gitHubRepos/read | Returns a monitored GitHub repository resource for a given ID. |
> | Microsoft.SecurityDevOps/gitHubConnectors/owners/read | read owners |
> | Microsoft.SecurityDevOps/gitHubConnectors/owners/write | write owners |
> | Microsoft.SecurityDevOps/gitHubConnectors/owners/repos/read | read repos |
> | Microsoft.SecurityDevOps/gitHubConnectors/owners/repos/write | write repos |
> | Microsoft.SecurityDevOps/gitHubConnectors/repos/read | read repos |
> | Microsoft.SecurityDevOps/gitHubConnectors/stats/read | read stats |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.Insights/DataCollectionEndpoints/Read | Read a data collection endpoint |
> | Microsoft.Insights/DataCollectionEndpoints/Write | Create or update a data collection endpoint |
> | Microsoft.Insights/DataCollectionEndpoints/Delete | Delete a data collection endpoint |
+> | Microsoft.Insights/DataCollectionEndpoints/TriggerFailover/Action | Trigger failover on a data collection endpoint |
> | Microsoft.Insights/DataCollectionEndpoints/NetworkSecurityPerimeterAssociationProxies/Read | Read a data collection endpoint NSP association proxy |
> | Microsoft.Insights/DataCollectionEndpoints/NetworkSecurityPerimeterAssociationProxies/Write | Create or update a data collection endpoint NSP association proxy |
> | Microsoft.Insights/DataCollectionEndpoints/NetworkSecurityPerimeterAssociationProxies/Delete | Delete a data collection endpoint NSP association proxy |
> | Microsoft.Insights/ScheduledQueryRules/networkSecurityPerimeterConfigurations/Read | Reading a network security perimeter configuration for scheduled query rules |
> | Microsoft.Insights/ScheduledQueryRules/networkSecurityPerimeterConfigurations/Write | Writing a network security perimeter configuration for scheduled query rules |
> | Microsoft.Insights/ScheduledQueryRules/networkSecurityPerimeterConfigurations/Delete | Deleting a network security perimeter configuration for scheduled query rules |
+> | Microsoft.Insights/TenantActionGroups/Write | Create or update a tenant action group |
+> | Microsoft.Insights/TenantActionGroups/Delete | Delete a tenant action group |
+> | Microsoft.Insights/TenantActionGroups/Read | Read a tenant action group |
> | Microsoft.Insights/Tenants/Register/Action | Initializes the Microsoft Insights provider |
> | Microsoft.Insights/topology/Read | Read Topology |
> | Microsoft.Insights/transactions/Read | Read Transactions |
> | Action | Description |
> | --- | --- |
> | Microsoft.OperationalInsights/register/action | Register a subscription to a resource provider. |
+> | Microsoft.OperationalInsights/unregister/action | UnRegister a subscription to a resource provider. |
+> | Microsoft.OperationalInsights/querypacks/action | Perform Query Pack Action. |
> | microsoft.operationalinsights/unregister/action | Unregisters the subscription. |
> | microsoft.operationalinsights/querypacks/action | Perform Query Packs Actions. |
> | microsoft.operationalinsights/availableservicetiers/read | Get the available service tiers. |
> | Microsoft.OperationalInsights/clusters/read | Get Cluster |
> | Microsoft.OperationalInsights/clusters/write | Create or updates a Cluster |
> | Microsoft.OperationalInsights/clusters/delete | Delete Cluster |
-> | Microsoft.OperationalInsights/deletedWorkspaces/read | Lists workspaces in soft deleted period. |
-> | Microsoft.OperationalInsights/linkTargets/read | Lists workspaces in soft deleted period. |
+> | Microsoft.OperationalInsights/deletedworkspaces/read | Lists workspaces in soft deleted period. |
+> | Microsoft.OperationalInsights/linktargets/read | Lists workspaces in soft deleted period. |
+> | Microsoft.OperationalInsights/locations/operationstatuses/read | Get Log Analytics Azure Async Operation Status |
> | microsoft.operationalinsights/locations/operationStatuses/read | Get Log Analytics Azure Async Operation Status. |
+> | Microsoft.OperationalInsights/operations/read | Lists all of the available OperationalInsights REST API operations. |
> | microsoft.operationalinsights/operations/read | Lists all of the available OperationalInsights REST API operations. |
+> | Microsoft.OperationalInsights/querypacks/read | Get Query Pack. |
+> | Microsoft.OperationalInsights/querypacks/write | Create or update Query Pack. |
+> | Microsoft.OperationalInsights/querypacks/delete | Delete Query Pack. |
> | microsoft.operationalinsights/querypacks/write | Create or Update Query Packs. |
> | microsoft.operationalinsights/querypacks/read | Get Query Packs. |
> | microsoft.operationalinsights/querypacks/delete | Delete Query Packs. |
> | Microsoft.OperationalInsights/workspaces/write | Creates a new workspace or links to an existing workspace by providing the customer id from the existing workspace. |
> | Microsoft.OperationalInsights/workspaces/read | Gets an existing workspace |
> | Microsoft.OperationalInsights/workspaces/delete | Deletes a workspace. If the workspace was linked to an existing workspace at creation time then the workspace it was linked to is not deleted. |
-> | Microsoft.OperationalInsights/workspaces/generateregistrationcertificate/action | Generates Registration Certificate for the workspace. This Certificate is used to connect Microsoft System Center Operation Manager to the workspace. |
-> | Microsoft.OperationalInsights/workspaces/sharedKeys/action | Retrieves the shared keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. |
+> | Microsoft.OperationalInsights/workspaces/generateRegistrationCertificate/action | Generates Registration Certificate for the workspace. This Certificate is used to connect Microsoft System Center Operation Manager to the workspace. |
+> | Microsoft.OperationalInsights/workspaces/sharedkeys/action | Retrieves the shared keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. |
> | Microsoft.OperationalInsights/workspaces/listKeys/action | Retrieves the list keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. |
+> | Microsoft.OperationalInsights/workspaces/regenerateSharedKey/action | Regenerates the specified workspace shared key |
> | Microsoft.OperationalInsights/workspaces/search/action | Executes a search query |
-> | Microsoft.OperationalInsights/workspaces/purge/action | Delete specified data from workspace |
-> | Microsoft.OperationalInsights/workspaces/regeneratesharedkey/action | Regenerates the specified workspace shared key |
+> | Microsoft.OperationalInsights/workspaces/purge/action | Delete specified data by query from workspace. |
> | microsoft.operationalinsights/workspaces/customfields/action | Extract custom fields. |
> | Microsoft.OperationalInsights/workspaces/analytics/query/action | Search using new engine. |
> | Microsoft.OperationalInsights/workspaces/analytics/query/schema/read | Get search schema V2. |
> | Microsoft.OperationalInsights/workspaces/api/query/action | Search using new engine. |
> | Microsoft.OperationalInsights/workspaces/api/query/schema/read | Get search schema V2. |
> | Microsoft.OperationalInsights/workspaces/availableservicetiers/read | List of all the available service tiers for workspace. |
-> | Microsoft.OperationalInsights/workspaces/configurationScopes/read | Get Configuration Scope |
-> | Microsoft.OperationalInsights/workspaces/configurationScopes/write | Set Configuration Scope |
-> | Microsoft.OperationalInsights/workspaces/configurationScopes/delete | Delete Configuration Scope |
+> | Microsoft.OperationalInsights/workspaces/configurationscopes/read | Get configuration scope in a workspace. |
+> | Microsoft.OperationalInsights/workspaces/configurationscopes/write | Create configuration scope in a workspace. |
+> | Microsoft.OperationalInsights/workspaces/configurationscopes/delete | Delete configuration scope in a workspace. |
> | microsoft.operationalinsights/workspaces/customfields/read | Get a custom field. |
> | microsoft.operationalinsights/workspaces/customfields/write | Create or update a custom field. |
> | microsoft.operationalinsights/workspaces/customfields/delete | Delete a custom field. |
+> | Microsoft.OperationalInsights/workspaces/dataexports/read | Get data export. |
+> | Microsoft.OperationalInsights/workspaces/dataexports/write | Create or update specific data export. |
+> | Microsoft.OperationalInsights/workspaces/dataexports/delete | Delete specific data export. |
> | microsoft.operationalinsights/workspaces/dataExports/read | Get specific data export. |
> | microsoft.operationalinsights/workspaces/dataExports/write | Create or update data export. |
> | microsoft.operationalinsights/workspaces/dataExports/delete | Delete specific data export. |
-> | Microsoft.OperationalInsights/workspaces/datasources/read | Get datasources under a workspace. |
-> | Microsoft.OperationalInsights/workspaces/datasources/write | Create/Update datasources under a workspace. |
-> | Microsoft.OperationalInsights/workspaces/datasources/delete | Delete datasources under a workspace. |
+> | Microsoft.OperationalInsights/workspaces/datasources/read | Get data source under a workspace. |
+> | Microsoft.OperationalInsights/workspaces/datasources/write | Upsert Data Source |
+> | Microsoft.OperationalInsights/workspaces/datasources/delete | Delete data source under a workspace. |
+> | Microsoft.OperationalInsights/workspaces/features/clientGroups/members/read | Get the Client Groups Members of a resource. |
> | microsoft.operationalinsights/workspaces/features/clientgroups/memebers/read | Get Client Group Members of a resource. |
+> | Microsoft.OperationalInsights/workspaces/features/generateMap/read | Get the Service Map of a resource. |
> | microsoft.operationalinsights/workspaces/features/generateMap/read | Get the Service Map of a resource. |
+> | Microsoft.OperationalInsights/workspaces/features/machineGroups/read | Get the Service Map Machine Groups of a resource. |
> | microsoft.operationalinsights/workspaces/features/machineGroups/read | Get the Service Map Machine Groups. |
+> | Microsoft.OperationalInsights/workspaces/features/serverGroups/members/read | Get the Server Groups Members of a resource. |
> | microsoft.operationalinsights/workspaces/features/servergroups/members/read | Get Server Group Members of a resource. |
> | Microsoft.OperationalInsights/workspaces/gateways/delete | Removes a gateway configured for the workspace. |
> | Microsoft.OperationalInsights/workspaces/intelligencepacks/read | Lists all intelligence packs that are visible for a given workspace and also lists whether the pack is enabled or disabled for that workspace. |
> | Microsoft.OperationalInsights/workspaces/intelligencepacks/enable/action | Enables an intelligence pack for a given workspace. |
> | Microsoft.OperationalInsights/workspaces/intelligencepacks/disable/action | Disables an intelligence pack for a given workspace. |
-> | Microsoft.OperationalInsights/workspaces/linkedServices/read | Get linked services under given workspace. |
-> | Microsoft.OperationalInsights/workspaces/linkedServices/write | Create/Update linked services under given workspace. |
-> | Microsoft.OperationalInsights/workspaces/linkedServices/delete | Delete linked services under given workspace. |
+> | Microsoft.OperationalInsights/workspaces/linkedservices/read | Get linked services under given workspace. |
+> | Microsoft.OperationalInsights/workspaces/linkedservices/write | Create or update linked services under given workspace. |
+> | Microsoft.OperationalInsights/workspaces/linkedservices/delete | Delete linked services under given workspace. |
> | Microsoft.OperationalInsights/workspaces/listKeys/read | Retrieves the list keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. |
-> | Microsoft.OperationalInsights/workspaces/managementGroups/read | Gets the names and metadata for System Center Operations Manager management groups connected to this workspace. |
+> | Microsoft.OperationalInsights/workspaces/managementgroups/read | Gets the names and metadata for System Center Operations Manager management groups connected to this workspace. |
> | Microsoft.OperationalInsights/workspaces/metricDefinitions/read | Get Metric Definitions under workspace |
> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterAssociationProxies/read | Read Network Security Perimeter Association Proxies |
> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterAssociationProxies/write | Write Network Security Perimeter Association Proxies |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterConfigurations/read | Read Network Security Perimeter Configurations |
> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterConfigurations/write | Write Network Security Perimeter Configurations |
> | microsoft.operationalinsights/workspaces/networkSecurityPerimeterConfigurations/delete | Delete Network Security Perimeter Configurations |
-> | Microsoft.OperationalInsights/workspaces/notificationSettings/read | Get the user's notification settings for the workspace. |
-> | Microsoft.OperationalInsights/workspaces/notificationSettings/write | Set the user's notification settings for the workspace. |
-> | Microsoft.OperationalInsights/workspaces/notificationSettings/delete | Delete the user's notification settings for the workspace. |
+> | Microsoft.OperationalInsights/workspaces/notificationsettings/read | Get the user's notification settings for the workspace. |
+> | Microsoft.OperationalInsights/workspaces/notificationsettings/write | Set the user's notification settings for the workspace. |
+> | Microsoft.OperationalInsights/workspaces/notificationsettings/delete | Delete the user's notification settings for the workspace. |
+> | Microsoft.OperationalInsights/workspaces/operations/read | Gets the status of an OperationalInsights workspace operation. |
> | microsoft.operationalinsights/workspaces/operations/read | Gets the status of an OperationalInsights workspace operation. |
> | Microsoft.OperationalInsights/workspaces/providers/Microsoft.Insights/diagnosticSettings/Read | Gets the diagnostic setting for the resource |
> | Microsoft.OperationalInsights/workspaces/providers/Microsoft.Insights/diagnosticSettings/Write | Creates or updates the diagnostic setting for the resource |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AGSGrafanaLoginEvents/read | Read data from the AGSGrafanaLoginEvents table |
> | Microsoft.OperationalInsights/workspaces/query/AHDSMedTechDiagnosticLogs/read | Read data from the AHDSMedTechDiagnosticLogs table |
> | Microsoft.OperationalInsights/workspaces/query/AirflowDagProcessingLogs/read | Read data from the AirflowDagProcessingLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/AKSAudit/read | Read data from the AKSAudit table |
+> | Microsoft.OperationalInsights/workspaces/query/AKSAuditAdmin/read | Read data from the AKSAuditAdmin table |
+> | Microsoft.OperationalInsights/workspaces/query/AKSControlPlane/read | Read data from the AKSControlPlane table |
> | Microsoft.OperationalInsights/workspaces/query/Alert/read | Read data from the Alert table |
> | Microsoft.OperationalInsights/workspaces/query/AlertEvidence/read | Read data from the AlertEvidence table |
> | Microsoft.OperationalInsights/workspaces/query/AlertHistory/read | Read data from the AlertHistory table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AmlOnlineEndpointEventLog/read | Read data from the AmlOnlineEndpointEventLog table |
> | Microsoft.OperationalInsights/workspaces/query/AmlOnlineEndpointTrafficLog/read | Read data from the AmlOnlineEndpointTrafficLog table |
> | Microsoft.OperationalInsights/workspaces/query/AmlPipelineEvent/read | Read data from the AmlPipelineEvent table |
+> | Microsoft.OperationalInsights/workspaces/query/AmlRegistryReadEventsLog/read | Read data from the AmlRegistryReadEventsLog table |
+> | Microsoft.OperationalInsights/workspaces/query/AmlRegistryWriteEventsLog/read | Read data from the AmlRegistryWriteEventsLog table |
> | Microsoft.OperationalInsights/workspaces/query/AmlRunEvent/read | Read data from the AmlRunEvent table |
> | Microsoft.OperationalInsights/workspaces/query/AmlRunStatusChangedEvent/read | Read data from the AmlRunStatusChangedEvent table |
> | Microsoft.OperationalInsights/workspaces/query/AMSKeyDeliveryRequests/read | Read data from the AMSKeyDeliveryRequests table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AppBrowserTimings/read | Read data from the AppBrowserTimings table |
> | Microsoft.OperationalInsights/workspaces/query/AppCenterError/read | Read data from the AppCenterError table |
> | Microsoft.OperationalInsights/workspaces/query/AppDependencies/read | Read data from the AppDependencies table |
+> | Microsoft.OperationalInsights/workspaces/query/AppEnvSpringAppConsoleLogs/read | Read data from the AppEnvSpringAppConsoleLogs table |
> | Microsoft.OperationalInsights/workspaces/query/AppEvents/read | Read data from the AppEvents table |
> | Microsoft.OperationalInsights/workspaces/query/AppExceptions/read | Read data from the AppExceptions table |
> | Microsoft.OperationalInsights/workspaces/query/ApplicationInsights/read | Read data from the ApplicationInsights table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/AppTraces/read | Read data from the AppTraces table |
> | Microsoft.OperationalInsights/workspaces/query/ASCAuditLogs/read | Read data from the ASCAuditLogs table |
> | Microsoft.OperationalInsights/workspaces/query/ASCDeviceEvents/read | Read data from the ASCDeviceEvents table |
+> | Microsoft.OperationalInsights/workspaces/query/ASimAuditEventLogs/read | Read data from the ASimAuditEventLogs table |
> | Microsoft.OperationalInsights/workspaces/query/ASimDnsActivityLogs/read | Read data from the ASimDnsActivityLogs table |
> | Microsoft.OperationalInsights/workspaces/query/ASimNetworkSessionLogs/read | Read data from the ASimNetworkSessionLogs table |
> | Microsoft.OperationalInsights/workspaces/query/ASimWebSessionLogs/read | Read data from the ASimWebSessionLogs table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/CDBPartitionKeyRUConsumption/read | Read data from the CDBPartitionKeyRUConsumption table |
> | Microsoft.OperationalInsights/workspaces/query/CDBPartitionKeyStatistics/read | Read data from the CDBPartitionKeyStatistics table |
> | Microsoft.OperationalInsights/workspaces/query/CDBQueryRuntimeStatistics/read | Read data from the CDBQueryRuntimeStatistics table |
+> | Microsoft.OperationalInsights/workspaces/query/ChaosStudioExperimentEventLogs/read | Read data from the ChaosStudioExperimentEventLogs table |
> | Microsoft.OperationalInsights/workspaces/query/CIEventsAudit/read | Read data from the CIEventsAudit table |
> | Microsoft.OperationalInsights/workspaces/query/CIEventsOperational/read | Read data from the CIEventsOperational table |
> | Microsoft.OperationalInsights/workspaces/query/CloudAppEvents/read | Read data from the CloudAppEvents table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/NetworkMonitoring/read | Read data from the NetworkMonitoring table |
> | Microsoft.OperationalInsights/workspaces/query/NetworkSessions/read | Read data from the NetworkSessions table |
> | Microsoft.OperationalInsights/workspaces/query/NSPAccessLogs/read | Read data from the NSPAccessLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/NTAIpDetails/read | Read data from the NTAIpDetails table |
+> | Microsoft.OperationalInsights/workspaces/query/NTANetAnalytics/read | Read data from the NTANetAnalytics table |
+> | Microsoft.OperationalInsights/workspaces/query/NTATopologyDetails/read | Read data from the NTATopologyDetails table |
> | Microsoft.OperationalInsights/workspaces/query/NWConnectionMonitorDestinationListenerResult/read | Read data from the NWConnectionMonitorDestinationListenerResult table |
> | Microsoft.OperationalInsights/workspaces/query/NWConnectionMonitorDNSResult/read | Read data from the NWConnectionMonitorDNSResult table |
> | Microsoft.OperationalInsights/workspaces/query/NWConnectionMonitorPathResult/read | Read data from the NWConnectionMonitorPathResult table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/PurviewDataSensitivityLogs/read | Read data from the PurviewDataSensitivityLogs table |
> | Microsoft.OperationalInsights/workspaces/query/PurviewScanStatusLogs/read | Read data from the PurviewScanStatusLogs table |
> | Microsoft.OperationalInsights/workspaces/query/PurviewSecurityLogs/read | Read data from the PurviewSecurityLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/REDConnectionEvents/read | Read data from the REDConnectionEvents table |
> | Microsoft.OperationalInsights/workspaces/query/requests/read | Read data from the requests table |
> | Microsoft.OperationalInsights/workspaces/query/ResourceManagementPublicAccessLogs/read | Read data from the ResourceManagementPublicAccessLogs table |
> | Microsoft.OperationalInsights/workspaces/query/SCCMAssessmentRecommendation/read | Read data from the SCCMAssessmentRecommendation table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/SQLSecurityAuditEvents/read | Read data from the SQLSecurityAuditEvents table |
> | Microsoft.OperationalInsights/workspaces/query/SqlVulnerabilityAssessmentResult/read | Read data from the SqlVulnerabilityAssessmentResult table |
> | Microsoft.OperationalInsights/workspaces/query/SqlVulnerabilityAssessmentScanStatus/read | Read data from the SqlVulnerabilityAssessmentScanStatus table |
-> | Microsoft.OperationalInsights/workspaces/query/StorageAntimalwareScanResults/read | Read data from the StorageAntimalwareScanResults table |
> | Microsoft.OperationalInsights/workspaces/query/StorageBlobLogs/read | Read data from the StorageBlobLogs table |
> | Microsoft.OperationalInsights/workspaces/query/StorageCacheOperationEvents/read | Read data from the StorageCacheOperationEvents table |
> | Microsoft.OperationalInsights/workspaces/query/StorageCacheUpgradeEvents/read | Read data from the StorageCacheUpgradeEvents table |
> | Microsoft.OperationalInsights/workspaces/query/StorageCacheWarningEvents/read | Read data from the StorageCacheWarningEvents table |
> | Microsoft.OperationalInsights/workspaces/query/StorageFileLogs/read | Read data from the StorageFileLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/StorageMalwareScanningResults/read | Read data from the StorageMalwareScanningResults table |
> | Microsoft.OperationalInsights/workspaces/query/StorageMoverCopyLogsFailed/read | Read data from the StorageMoverCopyLogsFailed table |
> | Microsoft.OperationalInsights/workspaces/query/StorageMoverCopyLogsTransferred/read | Read data from the StorageMoverCopyLogsTransferred table |
> | Microsoft.OperationalInsights/workspaces/query/StorageMoverJobRunLogs/read | Read data from the StorageMoverJobRunLogs table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/WUDOAggregatedStatus/read | Read data from the WUDOAggregatedStatus table |
> | Microsoft.OperationalInsights/workspaces/query/WUDOStatus/read | Read data from the WUDOStatus table |
> | Microsoft.OperationalInsights/workspaces/query/WVDAgentHealthStatus/read | Read data from the WVDAgentHealthStatus table |
+> | Microsoft.OperationalInsights/workspaces/query/WVDAutoscaleEvaluationPooled/read | Read data from the WVDAutoscaleEvaluationPooled table |
> | Microsoft.OperationalInsights/workspaces/query/WVDCheckpoints/read | Read data from the WVDCheckpoints table |
> | Microsoft.OperationalInsights/workspaces/query/WVDConnectionGraphicsDataPreview/read | Read data from the WVDConnectionGraphicsDataPreview table |
> | Microsoft.OperationalInsights/workspaces/query/WVDConnectionNetworkData/read | Read data from the WVDConnectionNetworkData table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/WVDHostRegistrations/read | Read data from the WVDHostRegistrations table |
> | Microsoft.OperationalInsights/workspaces/query/WVDManagement/read | Read data from the WVDManagement table |
> | Microsoft.OperationalInsights/workspaces/query/WVDSessionHostManagement/read | Read data from the WVDSessionHostManagement table |
+> | Microsoft.OperationalInsights/workspaces/restoreLogs/write | Restore data from a table. |
> | microsoft.operationalinsights/workspaces/restoreLogs/write | Restore data from a table. |
+> | Microsoft.OperationalInsights/workspaces/rules/read | Get alert rule. |
> | microsoft.operationalinsights/workspaces/rules/read | Get all alert rules. |
-> | Microsoft.OperationalInsights/workspaces/savedSearches/read | Gets a saved search query |
+> | Microsoft.OperationalInsights/workspaces/savedSearches/read | Gets a saved search query. |
> | Microsoft.OperationalInsights/workspaces/savedSearches/write | Creates a saved search query |
> | Microsoft.OperationalInsights/workspaces/savedSearches/delete | Deletes a saved search query |
+> | Microsoft.OperationalInsights/workspaces/savedSearches/results/read | Get saved searches results. Deprecated. |
> | microsoft.operationalinsights/workspaces/savedsearches/results/read | Get saved searches results. Deprecated |
> | microsoft.operationalinsights/workspaces/savedsearches/schedules/read | Get scheduled searches. |
> | microsoft.operationalinsights/workspaces/savedsearches/schedules/delete | Delete scheduled searches. |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | microsoft.operationalinsights/workspaces/savedsearches/schedules/actions/read | Get scheduled search actions. |
> | microsoft.operationalinsights/workspaces/savedsearches/schedules/actions/delete | Delete scheduled search actions. |
> | microsoft.operationalinsights/workspaces/savedsearches/schedules/actions/write | Create or update scheduled search actions. |
+> | Microsoft.OperationalInsights/workspaces/schedules/read | Get scheduled saved search. |
+> | Microsoft.OperationalInsights/workspaces/schedules/delete | Delete scheduled saved search. |
+> | Microsoft.OperationalInsights/workspaces/schedules/write | Create or update scheduled saved search. |
+> | Microsoft.OperationalInsights/workspaces/schedules/actions/read | Get Management Configuration action. |
> | Microsoft.OperationalInsights/workspaces/schema/read | Gets the search schema for the workspace. Search schema includes the exposed fields and their types. |
+> | Microsoft.OperationalInsights/workspaces/scopedprivatelinkproxies/read | Get Scoped Private Link Proxy |
+> | Microsoft.OperationalInsights/workspaces/scopedprivatelinkproxies/write | Put Scoped Private Link Proxy |
+> | Microsoft.OperationalInsights/workspaces/scopedprivatelinkproxies/delete | Delete Scoped Private Link Proxy |
> | microsoft.operationalinsights/workspaces/scopedPrivateLinkProxies/read | Get Scoped Private Link Proxy. |
> | microsoft.operationalinsights/workspaces/scopedPrivateLinkProxies/write | Put Scoped Private Link Proxy. |
> | microsoft.operationalinsights/workspaces/scopedPrivateLinkProxies/delete | Delete Scoped Private Link Proxy. |
+> | Microsoft.OperationalInsights/workspaces/search/read | Get search results. Deprecated. |
> | microsoft.operationalinsights/workspaces/search/read | Get search results. Deprecated. |
+> | Microsoft.OperationalInsights/workspaces/searchJobs/write | Run a search job. |
> | microsoft.operationalinsights/workspaces/searchJobs/write | Run a search job. |
-> | Microsoft.OperationalInsights/workspaces/sharedKeys/read | Retrieves the shared keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. |
+> | Microsoft.OperationalInsights/workspaces/sharedkeys/read | Retrieves the shared keys for the workspace. These keys are used to connect Microsoft Operational Insights agents to the workspace. |
> | Microsoft.OperationalInsights/workspaces/storageinsightconfigs/write | Creates a new storage configuration. These configurations are used to pull data from a location in an existing storage account. |
> | Microsoft.OperationalInsights/workspaces/storageinsightconfigs/read | Gets a storage configuration. |
> | Microsoft.OperationalInsights/workspaces/storageinsightconfigs/delete | Deletes a storage configuration. This will stop Microsoft Operational Insights from reading data from the storage account. |
+> | Microsoft.OperationalInsights/workspaces/tables/write | Create or update a log analytics table. |
+> | Microsoft.OperationalInsights/workspaces/tables/read | Get a log analytics table. |
+> | Microsoft.OperationalInsights/workspaces/tables/delete | Delete a log analytics table. |
> | microsoft.operationalinsights/workspaces/tables/write | Create or update a log analytics table. |
> | microsoft.operationalinsights/workspaces/tables/read | Get a log analytics table. |
> | microsoft.operationalinsights/workspaces/tables/delete | Delete a log analytics table. |
+> | Microsoft.OperationalInsights/workspaces/tables/query/read | Run queries over the data of a specific table in the workspace |
> | Microsoft.OperationalInsights/workspaces/upgradetranslationfailures/read | Get Search Upgrade Translation Failure log for the workspace | > | Microsoft.OperationalInsights/workspaces/usages/read | Gets usage data for a workspace including the amount of data read by the workspace. |
+> | Microsoft.OperationalInsights/workspaces/views/read | Get workspace view. |
+> | Microsoft.OperationalInsights/workspaces/views/delete | Delete workspace view. |
+> | Microsoft.OperationalInsights/workspaces/views/write | Create or update workspace view. |
> | microsoft.operationalinsights/workspaces/views/read | Get workspace views. |
> | microsoft.operationalinsights/workspaces/views/write | Create or update a workspace view. |
> | microsoft.operationalinsights/workspaces/views/delete | Delete a workspace view. |
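The per-table `Microsoft.OperationalInsights/workspaces/query/<TableName>/read` actions above make it possible to scope query access without granting any write operations. Below is a minimal sketch of a custom role built from those actions, assuming the Azure CLI; the role name is illustrative and the subscription ID in `AssignableScopes` is a placeholder:

```azurecli
# Hypothetical read-only query role assembled from the actions listed above.
# The wildcard expands to every per-table query action, including tables
# added to the workspace later.
az role definition create --role-definition '{
  "Name": "Log Analytics Query Reader (example)",
  "IsCustom": true,
  "Description": "Read workspace metadata and query tables, nothing else.",
  "Actions": [
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationalInsights/workspaces/query/*/read"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}'
```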
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Action | Description |
> | | |
> | Microsoft.OperationsManagement/register/action | Register a subscription to a resource provider. |
-> | Microsoft.OperationsManagement/managementAssociations/write | Create a new Management Association |
-> | Microsoft.OperationsManagement/managementAssociations/read | Get Existing Management Association |
-> | Microsoft.OperationsManagement/managementAssociations/delete | Delete existing Management Association |
-> | Microsoft.OperationsManagement/managementConfigurations/write | Create a new Management Configuration |
-> | Microsoft.OperationsManagement/managementConfigurations/read | Get Existing Management Configuration |
-> | Microsoft.OperationsManagement/managementConfigurations/delete | Delete existing Management Configuration |
+> | Microsoft.OperationsManagement/unregister/action | Unregister a subscription from a resource provider. |
+> | Microsoft.OperationsManagement/managementassociations/write | Create or update Management Association. |
+> | Microsoft.OperationsManagement/managementassociations/read | Get Management Association. |
+> | Microsoft.OperationsManagement/managementassociations/delete | Delete Management Association. |
+> | Microsoft.OperationsManagement/managementconfigurations/write | Create or update management configuration. |
+> | Microsoft.OperationsManagement/managementconfigurations/read | Get management configuration. |
+> | Microsoft.OperationsManagement/managementconfigurations/delete | Delete management configuration. |
> | Microsoft.OperationsManagement/solutions/write | Create new OMS solution |
> | Microsoft.OperationsManagement/solutions/read | Get existing OMS solution |
> | Microsoft.OperationsManagement/solutions/delete | Delete existing OMS solution |
Azure service: [Cost Management + Billing](../cost-management-billing/index.yml)
> [!div class="mx-tableFixed"]
> | Action | Description |
> | | |
+> | Microsoft.Billing/validateAddress/action | |
+> | Microsoft.Billing/register/action | |
+> | Microsoft.Billing/billingAccounts/read | |
+> | Microsoft.Billing/billingAccounts/write | |
+> | Microsoft.Billing/billingAccounts/listInvoiceSectionsWithCreateSubscriptionPermission/action | |
+> | Microsoft.Billing/billingAccounts/confirmTransition/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/action | |
+> | Microsoft.Billing/billingAccounts/listRecommendations/action | |
+> | Microsoft.Billing/billingAccounts/addDepartment/write | |
+> | Microsoft.Billing/billingAccounts/addEnrollmentAccount/write | |
+> | Microsoft.Billing/billingAccounts/agreements/read | |
+> | Microsoft.Billing/billingAccounts/associatedTenants/read | Lists the tenants that can collaborate with the billing account on commerce activities like viewing and downloading invoices, managing payments, making purchases, and managing licenses. |
+> | Microsoft.Billing/billingAccounts/associatedTenants/write | Create or update an associated tenant for the billing account. |
+> | Microsoft.Billing/billingAccounts/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/purchaseProduct/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/priceProduct/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/billingRoleDefinitions/read | Gets the definition for a role on a billing profile. The operation is supported for billing accounts with agreement type Microsoft Partner Agreement or Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/billingSubscriptions/read | Get a billing subscription by billing profile ID and billing subscription ID. This operation is supported only for billing accounts of type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/checkAccess/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/customers/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/customers/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/customers/checkAccess/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/customers/resolveBillingRoleAssignments/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/departments/read | Lists the departments that a user has access to. The operation is supported only for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/departments/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/departments/billingRoleDefinitions/read | Gets the definition for a role on a department. The operation is supported for billing profiles with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/departments/billingSubscriptions/read | List billing subscriptions by billing profile ID and department name. This operation is supported only for billing accounts of type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/departments/enrollmentAccounts/read | Get list of enrollment accounts using billing profile ID and department ID |
+> | Microsoft.Billing/billingAccounts/billingProfiles/enrollmentAccounts/read | Lists the enrollment accounts for a specific billing account and a billing profile belonging to it. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/enrollmentAccounts/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/enrollmentAccounts/billingSubscriptions/read | List billing subscriptions by billing profile ID and enrollment account name. This operation is supported only for billing accounts of type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoices/download/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoices/pricesheet/download/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoices/validateRefundEligibility/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/read | Lists the invoice sections that a user has access to. The operation is supported only for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/write | Creates or updates an invoice section. The operation is supported only for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/billingRoleDefinitions/read | Gets the definition for a role on an invoice section. The operation is supported only for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/billingSubscriptions/transfer/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/billingSubscriptions/move/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/billingSubscriptions/validateMoveEligibility/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/billingSubscriptions/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/billingSubscriptions/read | Lists the subscriptions that are billed to an invoice section. The operation is supported only for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/checkAccess/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/products/transfer/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/products/move/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/products/validateMoveEligibility/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/resolveBillingRoleAssignments/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/validateDeleteEligibility/write | Validates if the invoice section can be deleted. The operation is supported for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/invoiceSections/validateDeleteInvoiceSectionEligibility/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/notificationContacts/read | Lists the NotificationContacts for the given billing profile. The operation is supported only for billing profiles with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/policies/read | Lists the policies for a billing profile. This operation is supported only for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/policies/write | Updates the policies for a billing profile. This operation is supported only for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingProfiles/pricesheet/download/action | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/products/read | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/resolveBillingRoleAssignments/write | |
+> | Microsoft.Billing/billingAccounts/billingProfiles/validateDeleteBillingProfileEligibility/write | |
+> | Microsoft.Billing/billingAccounts/billingRoleAssignments/write | |
+> | Microsoft.Billing/billingAccounts/billingRoleDefinitions/read | Gets the definition for a role on a billing account. The operation is supported for billing accounts with agreement type Microsoft Partner Agreement, Microsoft Customer Agreement or Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/billingSubscriptionAliases/read | |
+> | Microsoft.Billing/billingAccounts/billingSubscriptionAliases/write | |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/read | Lists the subscriptions for a billing account. The operation is supported for billing accounts with agreement type Microsoft Customer Agreement, Microsoft Partner Agreement or Enterprise Agreement. |
> | Microsoft.Billing/billingAccounts/billingSubscriptions/downloadDocuments/action | Download invoice using download link from list |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/move/action | |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/validateMoveEligibility/action | |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/write | Updates the properties of a billing subscription. Cost center can only be updated for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/cancel/write | |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/merge/write | |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/move/write | Moves a subscription's charges to a new invoice section. The new invoice section must belong to the same billing profile as the existing invoice section. This operation is supported for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/split/write | |
+> | Microsoft.Billing/billingAccounts/billingSubscriptions/validateMoveEligibility/write | Validates if a subscription's charges can be moved to a new invoice section. This operation is supported for billing accounts with agreement type Microsoft Customer Agreement. |
+> | Microsoft.Billing/billingAccounts/checkAccess/write | |
+> | Microsoft.Billing/billingAccounts/customers/read | |
+> | Microsoft.Billing/billingAccounts/customers/initiateTransfer/action | |
+> | Microsoft.Billing/billingAccounts/customers/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/customers/billingRoleDefinitions/read | Gets the definition for a role on a customer. The operation is supported only for billing accounts with agreement type Microsoft Partner Agreement. |
+> | Microsoft.Billing/billingAccounts/customers/billingSubscriptions/read | Lists the subscriptions for a customer. The operation is supported only for billing accounts with agreement type Microsoft Partner Agreement. |
+> | Microsoft.Billing/billingAccounts/customers/checkAccess/write | |
+> | Microsoft.Billing/billingAccounts/customers/policies/read | Lists the policies for a customer. This operation is supported only for billing accounts with agreement type Microsoft Partner Agreement. |
+> | Microsoft.Billing/billingAccounts/customers/policies/write | Updates the policies for a customer. This operation is supported only for billing accounts with agreement type Microsoft Partner Agreement. |
+> | Microsoft.Billing/billingAccounts/customers/resolveBillingRoleAssignments/write | |
+> | Microsoft.Billing/billingAccounts/customers/transfers/write | |
+> | Microsoft.Billing/billingAccounts/customers/transfers/read | |
+> | Microsoft.Billing/billingAccounts/departments/read | Lists the departments that a user has access to. The operation is supported only for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/departments/write | |
+> | Microsoft.Billing/billingAccounts/departments/addEnrollmentAccount/write | |
+> | Microsoft.Billing/billingAccounts/departments/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/departments/billingRoleAssignments/write | |
+> | Microsoft.Billing/billingAccounts/departments/billingRoleDefinitions/read | Gets the definition for a role on a department. The operation is supported for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/departments/billingSubscriptions/read | Lists the subscriptions for a department. The operation is supported for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/departments/checkAccess/write | |
+> | Microsoft.Billing/billingAccounts/departments/enrollmentAccounts/read | Lists the enrollment accounts for a department. The operation is supported only for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/departments/enrollmentAccounts/write | |
+> | Microsoft.Billing/billingAccounts/departments/enrollmentAccounts/remove/write | |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/read | Lists the enrollment accounts for a billing account. The operation is supported only for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/write | |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/activate/write | |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/activationStatus/read | |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/billingPermissions/read | |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/billingRoleAssignments/write | |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/billingRoleDefinitions/read | Gets the definition for a role on an enrollment account. The operation is supported for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/billingSubscriptions/write | |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/billingSubscriptions/read | Lists the subscriptions for an enrollment account. The operation is supported for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/checkAccess/write | |
+> | Microsoft.Billing/billingAccounts/enrollmentAccounts/transferBillingSubscriptions/write | |
+> | Microsoft.Billing/billingAccounts/invoices/download/action | |
+> | Microsoft.Billing/billingAccounts/invoices/pricesheet/download/action | |
+> | Microsoft.Billing/billingAccounts/invoiceSections/write | |
+> | Microsoft.Billing/billingAccounts/invoiceSections/elevate/action | |
+> | Microsoft.Billing/billingAccounts/invoiceSections/read | |
+> | Microsoft.Billing/billingAccounts/listBillingProfilesWithViewPricesheetPermissions/read | |
+> | Microsoft.Billing/billingAccounts/notificationContacts/read | Lists the NotificationContacts for the given billing account. The operation is supported only for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/notificationContacts/write | Update a notification contact by ID. The operation is supported only for billing accounts with agreement type Enterprise Agreement. |
+> | Microsoft.Billing/billingAccounts/operationResults/read | |
+> | Microsoft.Billing/billingAccounts/policies/read | Get the policies for a billing account of Enterprise Agreement type. |
+> | Microsoft.Billing/billingAccounts/policies/write | Update the policies for a billing account of Enterprise Agreement type. |
+> | Microsoft.Billing/billingAccounts/products/read | |
+> | Microsoft.Billing/billingAccounts/products/move/action | |
+> | Microsoft.Billing/billingAccounts/products/validateMoveEligibility/action | |
+> | Microsoft.Billing/billingAccounts/purchaseProduct/write | |
+> | Microsoft.Billing/billingAccounts/resolveBillingRoleAssignments/write | |
> | Microsoft.Billing/billingPeriods/read | |
> | Microsoft.Billing/billingProperty/read | |
> | Microsoft.Billing/billingProperty/write | |
+> | Microsoft.Billing/departments/read | |
+> | Microsoft.Billing/enrollmentAccounts/read | |
> | Microsoft.Billing/invoices/read | |
+> | Microsoft.Billing/invoices/download/action | Download invoice using download link from list |
+> | Microsoft.Billing/operations/read | List of operations supported by provider. |
+> | Microsoft.Billing/validateAddress/write | |
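Several of the `Microsoft.Billing` actions above ship without descriptions, but their names compose the same way as any other RBAC action. As a hedged sketch only (many billing permissions are in practice granted through billing role assignments rather than Azure custom roles), a role grouping the invoice-related read and download actions from the table above might look like this; the role name and subscription ID are placeholders:

```azurecli
# Hypothetical role bundling the invoice actions from the table above.
az role definition create --role-definition '{
  "Name": "Invoice Downloader (example)",
  "IsCustom": true,
  "Description": "Read billing data and download invoices.",
  "Actions": [
    "Microsoft.Billing/billingAccounts/read",
    "Microsoft.Billing/invoices/read",
    "Microsoft.Billing/invoices/download/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}'
```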
### Microsoft.Blueprint
Azure service: Microsoft.DataProtection
> | Action | Description |
> | | |
> | Microsoft.DataProtection/backupVaults/write | Create BackupVault operation creates an Azure resource of type 'Backup Vault' |
+> | Microsoft.DataProtection/backupVaults/write | Update BackupVault operation updates an Azure resource of type 'Backup Vault' |
+> | Microsoft.DataProtection/backupVaults/read | The Get Backup Vault operation gets an object representing the Azure resource of type 'Backup Vault' |
> | Microsoft.DataProtection/backupVaults/read | Gets list of Backup Vaults in a Subscription |
> | Microsoft.DataProtection/backupVaults/read | Gets list of Backup Vaults in a Resource Group |
+> | Microsoft.DataProtection/backupVaults/delete | The Delete Vault operation deletes the specified Azure resource of type 'Backup Vault' |
> | Microsoft.DataProtection/backupVaults/validateForBackup/action | Validates for backup of Backup Instance |
> | Microsoft.DataProtection/backupVaults/backupInstances/write | Creates a Backup Instance |
> | Microsoft.DataProtection/backupVaults/backupInstances/delete | Deletes the Backup Instance |
Azure service: Microsoft.DataProtection
> | Microsoft.DataProtection/locations/checkFeatureSupport/action | Validates if a feature is supported |
> | Microsoft.DataProtection/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. |
> | Microsoft.DataProtection/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. |
-> | Microsoft.DataProtection/providers/operations/read | Operation returns the list of Operations for a Resource Provider |
+> | Microsoft.DataProtection/operations/read | Operation returns the list of Operations for a Resource Provider |
> | Microsoft.DataProtection/subscriptions/providers/resourceGuards/read | Gets list of ResourceGuards in a Subscription |
> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. |
> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/resourceGuards/write | Create ResourceGuard operation creates an Azure resource of type 'ResourceGuard' |
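Operation tables like this `Microsoft.DataProtection` listing can also be cross-checked on demand, since every resource provider publishes its operations through Azure Resource Manager. A quick sketch with the Azure CLI; the JMESPath query assumes the usual output shape, with per-resource-type operations nested under `resourceTypes`:

```azurecli
# List Microsoft.DataProtection operations with their descriptions.
az provider operation show --namespace Microsoft.DataProtection \
  --query "resourceTypes[].operations[].{action:name, description:display.description}" \
  --output table
```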
Azure service: [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.m
> | Microsoft.Kubernetes/connectedClusters/coordination.k8s.io/leases/read | Reads leases |
> | Microsoft.Kubernetes/connectedClusters/coordination.k8s.io/leases/write | Writes leases |
> | Microsoft.Kubernetes/connectedClusters/coordination.k8s.io/leases/delete | Deletes leases |
+> | Microsoft.Kubernetes/connectedClusters/discovery.k8s.io/endpointslices/read | Reads endpointslices |
+> | Microsoft.Kubernetes/connectedClusters/discovery.k8s.io/endpointslices/write | Writes endpointslices |
+> | Microsoft.Kubernetes/connectedClusters/discovery.k8s.io/endpointslices/delete | Deletes endpointslices |
> | Microsoft.Kubernetes/connectedClusters/endpoints/read | Reads endpoints |
> | Microsoft.Kubernetes/connectedClusters/endpoints/write | Writes endpoints |
> | Microsoft.Kubernetes/connectedClusters/endpoints/delete | Deletes endpoints |
Azure service: [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.m
> | Microsoft.Kubernetes/connectedClusters/extensions/replicasets/read | Reads replicasets |
> | Microsoft.Kubernetes/connectedClusters/extensions/replicasets/write | Writes replicasets |
> | Microsoft.Kubernetes/connectedClusters/extensions/replicasets/delete | Deletes replicasets |
+> | Microsoft.Kubernetes/connectedClusters/flowcontrol.apiserver.k8s.io/flowschemas/read | Reads flowschemas |
+> | Microsoft.Kubernetes/connectedClusters/flowcontrol.apiserver.k8s.io/flowschemas/write | Writes flowschemas |
+> | Microsoft.Kubernetes/connectedClusters/flowcontrol.apiserver.k8s.io/flowschemas/delete | Deletes flowschemas |
+> | Microsoft.Kubernetes/connectedClusters/flowcontrol.apiserver.k8s.io/prioritylevelconfigurations/read | Reads prioritylevelconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/flowcontrol.apiserver.k8s.io/prioritylevelconfigurations/write | Writes prioritylevelconfigurations |
+> | Microsoft.Kubernetes/connectedClusters/flowcontrol.apiserver.k8s.io/prioritylevelconfigurations/delete | Deletes prioritylevelconfigurations |
> | Microsoft.Kubernetes/connectedClusters/groups/impersonate/action | Impersonate groups |
> | Microsoft.Kubernetes/connectedClusters/healthz/read | Reads healthz |
> | Microsoft.Kubernetes/connectedClusters/healthz/autoregister-completion/read | Reads autoregister-completion |
Azure service: [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.m
> | Microsoft.Kubernetes/connectedClusters/namespaces/read | Reads namespaces |
> | Microsoft.Kubernetes/connectedClusters/namespaces/write | Writes namespaces |
> | Microsoft.Kubernetes/connectedClusters/namespaces/delete | Deletes namespaces |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingressclasses/read | Reads ingressclasses |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingressclasses/write | Writes ingressclasses |
+> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingressclasses/delete | Deletes ingressclasses |
> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingresses/read | Reads ingresses |
> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingresses/write | Writes ingresses |
> | Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingresses/delete | Deletes ingresses |
Azure service: [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.m
> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csinodes/read | Reads csinodes |
> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csinodes/write | Writes csinodes |
> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csinodes/delete | Deletes csinodes |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csistoragecapacities/read | Reads csistoragecapacities |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csistoragecapacities/write | Writes csistoragecapacities |
+> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/csistoragecapacities/delete | Deletes csistoragecapacities |
> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/storageclasses/read | Reads storageclasses |
> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/storageclasses/write | Writes storageclasses |
> | Microsoft.Kubernetes/connectedClusters/storage.k8s.io/storageclasses/delete | Deletes storageclasses |
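The `Microsoft.Kubernetes/connectedClusters/...` entries above target objects inside the cluster, so they're data-plane permissions: in a custom role they belong under `DataActions`, not `Actions`. A minimal read-only sketch using two actions from the tables above; the role name and subscription ID are placeholders:

```azurecli
# Hypothetical data-plane role for Arc-enabled clusters: read storage
# classes and ingress classes only.
az role definition create --role-definition '{
  "Name": "Arc Cluster Storage Viewer (example)",
  "IsCustom": true,
  "Description": "Read storageclasses and ingressclasses on connected clusters.",
  "Actions": [],
  "DataActions": [
    "Microsoft.Kubernetes/connectedClusters/storage.k8s.io/storageclasses/read",
    "Microsoft.Kubernetes/connectedClusters/networking.k8s.io/ingressclasses/read"
  ],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}'
```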
Azure service: [Azure Virtual Desktop](../virtual-desktop/index.yml)
> [!div class="mx-tableFixed"]
> | Action | Description |
> | | |
+> | Microsoft.DesktopVirtualization/unregister/action | Action on unregister |
> | Microsoft.DesktopVirtualization/register/action | Register subscription |
> | Microsoft.DesktopVirtualization/applicationgroups/read | Read applicationgroups |
> | Microsoft.DesktopVirtualization/applicationgroups/write | Write applicationgroups |
Azure service: [Azure Virtual Desktop](../virtual-desktop/index.yml)
> | Microsoft.DesktopVirtualization/applicationgroups/desktops/write | Write applicationgroups/desktops |
> | Microsoft.DesktopVirtualization/applicationgroups/desktops/delete | Delete applicationgroups/desktops |
> | Microsoft.DesktopVirtualization/applicationgroups/externaluserassignments/read | |
+> | Microsoft.DesktopVirtualization/applicationgroups/externaluserassignments/write | |
> | Microsoft.DesktopVirtualization/applicationgroups/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting |
> | Microsoft.DesktopVirtualization/applicationgroups/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting |
> | Microsoft.DesktopVirtualization/applicationgroups/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs |
Azure service: [Azure Virtual Desktop](../virtual-desktop/index.yml)
> | Microsoft.DesktopVirtualization/hostpools/read | Read hostpools |
> | Microsoft.DesktopVirtualization/hostpools/write | Write hostpools |
> | Microsoft.DesktopVirtualization/hostpools/delete | Delete hostpools |
+> | Microsoft.DesktopVirtualization/hostpools/controlUpdate/action | |
+> | Microsoft.DesktopVirtualization/hostpools/update/action | Action on update |
> | Microsoft.DesktopVirtualization/hostpools/retrieveRegistrationToken/action | Retrieve registration token for host pool |
> | Microsoft.DesktopVirtualization/hostpools/move/action | Move a hostpools to another resource group |
> | Microsoft.DesktopVirtualization/hostpools/expandmsiximage/action | Expand an expandmsiximage to see MSIX Packages present |
Azure service: [Azure Virtual Desktop](../virtual-desktop/index.yml)
> | Microsoft.DesktopVirtualization/hostpools/privateendpointconnectionproxies/delete | Delete hostpools/privateendpointconnectionproxies |
> | Microsoft.DesktopVirtualization/hostpools/privateendpointconnectionproxies/validate/action | Validates the private endpoint connection proxy |
> | Microsoft.DesktopVirtualization/hostpools/privateendpointconnectionproxies/operationresults/read | Gets operation result on private endpoint connection proxy |
+> | Microsoft.DesktopVirtualization/hostpools/privateendpointconnections/read | Read hostpools/privateendpointconnections |
+> | Microsoft.DesktopVirtualization/hostpools/privateendpointconnections/write | Write hostpools/privateendpointconnections |
+> | Microsoft.DesktopVirtualization/hostpools/privateendpointconnections/delete | Delete hostpools/privateendpointconnections |
+> | Microsoft.DesktopVirtualization/hostpools/privatelinkresources/read | Read privatelinkresources |
> | Microsoft.DesktopVirtualization/hostpools/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting |
> | Microsoft.DesktopVirtualization/hostpools/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting |
> | Microsoft.DesktopVirtualization/hostpools/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs |
+> | Microsoft.DesktopVirtualization/hostpools/scalingplans/read | Read scalingplans |
> | Microsoft.DesktopVirtualization/hostpools/sessionhostconfigurations/read | Read hostpools/sessionhostconfigurations |
> | Microsoft.DesktopVirtualization/hostpools/sessionhostconfigurations/write | Write hostpools/sessionhostconfigurations |
> | Microsoft.DesktopVirtualization/hostpools/sessionhostconfigurations/delete | Delete hostpools/sessionhostconfigurations |
Azure service: [Azure Virtual Desktop](../virtual-desktop/index.yml)
> | Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/delete | Delete hostpools/sessionhosts/usersessions |
> | Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/disconnect/action | Disconnects the user session from session host |
> | Microsoft.DesktopVirtualization/hostpools/sessionhosts/usersessions/sendMessage/action | Send message to user session |
+> | Microsoft.DesktopVirtualization/hostpools/updateDetails/read | Read updateDetails |
+> | Microsoft.DesktopVirtualization/hostpools/updateOperationResults/read | Read updateOperationResults |
+> | Microsoft.DesktopVirtualization/hostpools/usersessions/read | Read usersessions |
+> | Microsoft.DesktopVirtualization/operations/read | Read operations |
+> | Microsoft.DesktopVirtualization/resourceTypes/read | Read resourceTypes |
> | Microsoft.DesktopVirtualization/scalingplans/read | Read scalingplans |
> | Microsoft.DesktopVirtualization/scalingplans/write | Write scalingplans |
> | Microsoft.DesktopVirtualization/scalingplans/delete | Delete scalingplans |
> | Microsoft.DesktopVirtualization/scalingplans/move/action | Move scalingplans to another ResourceGroup or Subscription |
+> | Microsoft.DesktopVirtualization/scalingplans/personalSchedules/read | Read scalingplans/personalSchedules |
+> | Microsoft.DesktopVirtualization/scalingplans/personalSchedules/write | Write scalingplans/personalSchedules |
+> | Microsoft.DesktopVirtualization/scalingplans/personalSchedules/delete | Delete scalingplans/personalSchedules |
> | Microsoft.DesktopVirtualization/scalingplans/pooledSchedules/read | Read scalingplans/pooledSchedules |
> | Microsoft.DesktopVirtualization/scalingplans/pooledSchedules/write | Write scalingplans/pooledSchedules |
> | Microsoft.DesktopVirtualization/scalingplans/pooledSchedules/delete | Delete scalingplans/pooledSchedules |
Azure service: [Azure Virtual Desktop](../virtual-desktop/index.yml)
> | Microsoft.DesktopVirtualization/workspaces/privateendpointconnectionproxies/delete | Delete workspaces/privateendpointconnectionproxies |
> | Microsoft.DesktopVirtualization/workspaces/privateendpointconnectionproxies/validate/action | Validates the private endpoint connection proxy |
> | Microsoft.DesktopVirtualization/workspaces/privateendpointconnectionproxies/operationresults/read | Gets operation result on private endpoint connection proxy |
+> | Microsoft.DesktopVirtualization/workspaces/privateendpointconnections/read | Read workspaces/privateendpointconnections |
+> | Microsoft.DesktopVirtualization/workspaces/privateendpointconnections/write | Write workspaces/privateendpointconnections |
+> | Microsoft.DesktopVirtualization/workspaces/privateendpointconnections/delete | Delete workspaces/privateendpointconnections |
+> | Microsoft.DesktopVirtualization/workspaces/privatelinkresources/read | Read privatelinkresources |
> | Microsoft.DesktopVirtualization/workspaces/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting | > | Microsoft.DesktopVirtualization/workspaces/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting | > | Microsoft.DesktopVirtualization/workspaces/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs |
Azure service: [Azure Digital Twins](../digital-twins/index.yml)
> | Microsoft.DigitalTwins/digitalTwinsInstances/endpoints/delete | Delete any Endpoint of a Digital Twins resource | > | Microsoft.DigitalTwins/digitalTwinsInstances/endpoints/read | Read any Endpoint of a Digital Twins resource | > | Microsoft.DigitalTwins/digitalTwinsInstances/endpoints/write | Create or Update any Endpoint of a Digital Twins resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/ingressEndpoints/delete | Delete any Ingress Endpoint of a Digital Twins resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/ingressEndpoints/read | Read any Ingress Endpoint of a Digital Twins resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/ingressEndpoints/write | Create or Update any Ingress Endpoint of a Digital Twins resource |
> | Microsoft.DigitalTwins/digitalTwinsInstances/logDefinitions/read | Gets the log settings for the resource's Azure Monitor | > | Microsoft.DigitalTwins/digitalTwinsInstances/metricDefinitions/read | Gets the metric settings for the resource's Azure Monitor |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/networkSecurityPerimeterAssociationProxies/read | Read NetworkSecurityPerimeterAssociationProxies resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/networkSecurityPerimeterAssociationProxies/write | Write NetworkSecurityPerimeterAssociationProxies resource |
+> | Microsoft.DigitalTwins/digitalTwinsInstances/networkSecurityPerimeterAssociationProxies/delete | Delete NetworkSecurityPerimeterAssociationProxies resource |
> | Microsoft.DigitalTwins/digitalTwinsInstances/operationsResults/read | Read any Operation Result | > | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnectionProxies/validate/action | Validate PrivateEndpointConnectionProxies resource | > | Microsoft.DigitalTwins/digitalTwinsInstances/privateEndpointConnectionProxies/read | Read PrivateEndpointConnectionProxies resource |
Azure service: [Azure Load Testing](../load-testing/index.yml)
> | Microsoft.LoadTestService/loadtests/deleteTest/action | Delete Load Tests | > | Microsoft.LoadTestService/loadtests/readTest/action | Read Load Tests |
-### Microsoft.MobileNetwork
-
-Azure service: [Mobile networks](../private-5g-core/index.yml)
-
-> [!div class="mx-tableFixed"]
-> | Action | Description |
-> | | |
-> | Microsoft.MobileNetwork/register/action | Register the subscription for Microsoft.MobileNetwork |
-> | Microsoft.MobileNetwork/unregister/action | Unregister the subscription for Microsoft.MobileNetwork |
-> | Microsoft.MobileNetwork/Locations/OperationStatuses/read | read OperationStatuses |
-> | Microsoft.MobileNetwork/Locations/OperationStatuses/write | write OperationStatuses |
-> | Microsoft.MobileNetwork/mobileNetworks/read | Gets information about the specified mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/write | Creates or updates a mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/delete | Deletes the specified mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/write | Updates mobile network tags. |
-> | Microsoft.MobileNetwork/mobileNetworks/read | Lists all the mobile networks in a subscription. |
-> | Microsoft.MobileNetwork/mobileNetworks/read | Lists all the mobile networks in a resource group. |
-> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/read | Gets information about the specified data network. |
-> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/write | Creates or updates a data network. Must be created in the same location as its parent mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/delete | Deletes the specified data network. |
-> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/write | Updates data network tags. |
-> | Microsoft.MobileNetwork/mobileNetworks/dataNetworks/read | Lists all data networks in the mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/services/read | Gets information about the specified service. |
-> | Microsoft.MobileNetwork/mobileNetworks/services/write | Creates or updates a service. Must be created in the same location as its parent mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/services/delete | Deletes the specified service. |
-> | Microsoft.MobileNetwork/mobileNetworks/services/write | Updates service tags. |
-> | Microsoft.MobileNetwork/mobileNetworks/services/read | Gets all the services in a mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/read | Gets information about the specified SIM policy. |
-> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/write | Creates or updates a SIM policy. Must be created in the same location as its parent mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/delete | Deletes the specified SIM policy. |
-> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/write | Updates SIM policy tags. |
-> | Microsoft.MobileNetwork/mobileNetworks/simPolicies/read | Gets all the SIM policies in a mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/sites/read | Gets information about the specified mobile network site. |
-> | Microsoft.MobileNetwork/mobileNetworks/sites/write | Creates or updates a mobile network site. Must be created in the same location as its parent mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/sites/delete | Deletes the specified mobile network site. This will also delete any network functions that are a part of this site. |
-> | Microsoft.MobileNetwork/mobileNetworks/sites/write | Updates site tags. |
-> | Microsoft.MobileNetwork/mobileNetworks/sites/read | Lists all sites in the mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/slices/read | Gets information about the specified network slice. |
-> | Microsoft.MobileNetwork/mobileNetworks/slices/write | Creates or updates a network slice. Must be created in the same location as its parent mobile network. |
-> | Microsoft.MobileNetwork/mobileNetworks/slices/delete | Deletes the specified network slice. |
-> | Microsoft.MobileNetwork/mobileNetworks/slices/write | Updates slice tags. |
-> | Microsoft.MobileNetwork/mobileNetworks/slices/read | Lists all slices in the mobile network. |
-> | Microsoft.MobileNetwork/Operations/read | read Operations |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/read | Gets information about the specified packet core control plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/write | Creates or updates a packet core control plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/delete | Deletes the specified packet core control plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/write | Updates packet core control planes tags. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/read | Lists all the packet core control planes in a subscription. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/read | Lists all the packet core control planes in a resource group. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/rollback/action | Roll back the specified packet core control plane to the previous version, "rollbackVersion". Multiple consecutive rollbacks are not possible. This action may cause a service outage. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/reinstall/action | Reinstall the specified packet core control plane. This action will remove any transaction state from the packet core to return it to a known state. This action will cause a service outage. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/collectDiagnosticsPackage/action | Collect a diagnostics package for the specified packet core control plane. This action will upload the diagnostics to a storage account. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/read | Gets information about the specified packet core data plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/write | Creates or updates a packet core data plane. Must be created in the same location as its parent packet core control plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/delete | Deletes the specified packet core data plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/write | Updates packet core data planes tags. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/read | Lists all the packet core data planes associated with a packet core control plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/read | Gets information about the specified attached data network. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/write | Creates or updates an attached data network. Must be created in the same location as its parent packet core data plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/delete | Deletes the specified attached data network. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/write | Updates an attached data network tags. |
-> | Microsoft.MobileNetwork/packetCoreControlPlanes/packetCoreDataPlanes/attachedDataNetworks/read | Gets all the attached data networks associated with a packet core data plane. |
-> | Microsoft.MobileNetwork/packetCoreControlPlaneVersions/read | Gets information about the specified packet core control plane version. |
-> | Microsoft.MobileNetwork/packetCoreControlPlaneVersions/read | Lists all supported packet core control planes versions. |
-> | Microsoft.MobileNetwork/radioAccessNetworks/read | Gets information about the specified RAN. |
-> | Microsoft.MobileNetwork/radioAccessNetworks/write | Creates or updates a RAN. |
-> | Microsoft.MobileNetwork/radioAccessNetworks/delete | Deletes the specified RAN. |
-> | Microsoft.MobileNetwork/radioAccessNetworks/write | Updates RAN tags. |
-> | Microsoft.MobileNetwork/radioAccessNetworks/read | Gets all the RANs in a subscription. |
-> | Microsoft.MobileNetwork/radioAccessNetworks/read | Gets all the RANs in a resource group. |
-> | Microsoft.MobileNetwork/simGroups/uploadSims/action | Bulk upload SIMs to a SIM group. |
-> | Microsoft.MobileNetwork/simGroups/deleteSims/action | Bulk delete SIMs from a SIM group. |
-> | Microsoft.MobileNetwork/simGroups/uploadEncryptedSims/action | Bulk upload SIMs in encrypted form to a SIM group. The SIM credentials must be encrypted. |
-> | Microsoft.MobileNetwork/simGroups/read | Gets information about the specified SIM group. |
-> | Microsoft.MobileNetwork/simGroups/write | Creates or updates a SIM group. |
-> | Microsoft.MobileNetwork/simGroups/delete | Deletes the specified SIM group. |
-> | Microsoft.MobileNetwork/simGroups/write | Updates SIM group tags. |
-> | Microsoft.MobileNetwork/simGroups/read | Gets all the SIM groups in a subscription. |
-> | Microsoft.MobileNetwork/simGroups/read | Gets all the SIM groups in a resource group. |
-> | Microsoft.MobileNetwork/simGroups/sims/read | Gets information about the specified SIM. |
-> | Microsoft.MobileNetwork/simGroups/sims/write | Creates or updates a SIM. |
-> | Microsoft.MobileNetwork/simGroups/sims/delete | Deletes the specified SIM. |
-> | Microsoft.MobileNetwork/simGroups/sims/read | Gets all the SIMs in a SIM group. |
-> | Microsoft.MobileNetwork/sims/read | Gets information about the specified SIM. |
-> | Microsoft.MobileNetwork/sims/write | Creates or updates a SIM. |
-> | Microsoft.MobileNetwork/sims/delete | Deletes the specified SIM. |
-> | Microsoft.MobileNetwork/sims/write | Updates SIM tags. |
-> | Microsoft.MobileNetwork/sims/read | Gets all the SIMs in a subscription. |
-> | Microsoft.MobileNetwork/sims/read | Gets all the SIMs in a resource group. |
- ### Microsoft.ServicesHub Azure service: [Services Hub](/services-hub/)
route-server Multiregion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/multiregion.md
Title: Multi-region designs with Azure Route Server
-description: Learn about how Azure Route Server enables multi-region designs.
+description: Learn how Azure Route Server enables multi-region designs.
Previously updated : 02/03/2022 Last updated : 03/31/2023 # Multi-region networking with Azure Route Server
-Applications that have demanding requirements around high availability or disaster recovery often need to be deployed in more than one Azure region, where spoke VNets in multiple regions need to communicate between each other. A possibility to achieve this communication pattern is peering to each other all spokes that need to communicate, but those flows would bypass any central NVAs in the hubs, such as firewalls. Another possibility is using User Defined Routes (UDRs) in the subnets where the hub NVAs are deployed, but that can be difficult to maintain. Azure Route Server offers an alternative which is very dynamic and adapts to topology changes without manual intervention.
+Applications with demanding high availability or disaster recovery requirements often need to be deployed in more than one Azure region. In such cases, spoke virtual networks (VNets) in different regions need to communicate with each other. One way to enable this communication is by peering all the required spoke VNets to each other. However, this approach would bypass any central network virtual appliances (NVAs), such as firewalls, in the hubs. An alternative is to use user-defined routes (UDRs) in the subnets where hub NVAs are deployed, but maintaining UDRs can be challenging. Azure Route Server offers a dynamic alternative that adapts to topology changes automatically, without requiring manual intervention.
## Topology The following diagram shows a dual-region architecture, where a hub and spoke topology exists in each region, and the hub VNets are peered to each other via VNet global peering:
-Each NVA learns the prefixes from the local hub and spokes from its Azure Route Server, and will communicate it to the NVA in the other region via BGP. This communication between the NVAs should be established over an encapsulation technology such as IPsec or Virtual eXtensible LAN (VXLAN), since otherwise routing loops can occur in the network.
+The NVA in each region learns the prefixes of the local hub and spoke VNets through the Azure Route Server and shares them with the NVA in the other region using BGP. To avoid routing loops, it's crucial to establish this communication between the NVAs using an encapsulation technology such as IPsec or Virtual eXtensible LAN (VXLAN).
-The spokes need to be peered with the hub VNet with the setting "Use Remote Gateways", so that Azure Route Server advertises their prefixes to the local NVAs, and it injects learnt routes back into the spokes.
+To enable Azure Route Server to advertise the prefixes of the spoke VNets to the local NVAs and inject the learned routes back into the spoke VNets, it's essential to enable the *Use remote virtual network's gateway or Route Server* setting on the peerings between the spoke VNets and the hub VNet.
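As an illustration, the spoke side of such a peering can be created with the Azure CLI roughly as follows; all resource names and IDs are placeholders, not values from this article:

```bash
# Spoke-to-hub peering that uses the hub's gateway or Route Server
# (placeholder names). The hub side of the peering is created separately
# with --allow-gateway-transit.
az network vnet peering create \
  --resource-group rg-spoke1 \
  --vnet-name vnet-spoke1 \
  --name spoke1-to-hub1 \
  --remote-vnet "/subscriptions/<sub-id>/resourceGroups/rg-hub1/providers/Microsoft.Network/virtualNetworks/vnet-hub1" \
  --allow-vnet-access \
  --allow-forwarded-traffic \
  --use-remote-gateways
```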
-The NVAs will advertise to their local Route Server the routes that they learn from the remote region, and Route Server will configure these routes in the local spokes, hence attracting traffic. If there are multiple NVAs in the same region (Route Server supports up to 8 BGP adjacencies), AS path prepending can be used to make one of the NVAs preferred to the others, hence defining an active/standby NVA topology.
+The NVAs advertise the routes they learn from the remote region to their local Route Server, which then configures these routes in the local spoke VNets, attracting traffic accordingly. In cases where multiple NVAs exist in the same region (Route Server supports up to eight BGP peers), AS path prepending can be used to make one of the NVAs preferred over the others, effectively establishing an active/standby NVA topology.
-Note that when an NVA advertises routes coming from a Route Server in a remote region to its local Route Server, it should remove the Autonomous System Number (ASN) 65515 from the AS path of the routes. This is known in certain BGP platforms as "AS override" or "AS-path rewrite". Otherwise, the local Route Server will not learn those routes, as the BGP loop prevention mechanism forbids learning routes that already contain the local ASN.
+> [!IMPORTANT]
+> To ensure that the local Route Server can learn the routes advertised by the NVA from the remote region, the NVA must remove the autonomous system number (ASN) 65515 from the AS path of the routes. This technique is sometimes referred to as "AS override" or "AS-path rewrite" in certain BGP platforms. Otherwise, the BGP loop prevention mechanism will prevent the local Route Server from learning those routes since it prohibits the learning of routes that already contain the local ASN.
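For illustration only, the following sketch shows what such an NVA configuration might look like on an FRR-based NVA. FRR is an assumption here, not a stack this article prescribes, and all ASNs, IP addresses, and names are hypothetical:

```bash
# Hypothetical FRR configuration for the NVA in region 1 (local ASN 65001).
# 10.1.0.68/10.1.0.69 stand in for the local Route Server peer IPs, and
# 192.168.100.2 for the remote NVA reached over the IPsec/VXLAN tunnel.
vtysh -c 'configure terminal' \
      -c 'route-map TO-ROUTE-SERVER permit 10' \
      -c ' set as-path exclude 65515' \
      -c 'exit' \
      -c 'router bgp 65001' \
      -c ' neighbor 10.1.0.68 remote-as 65515' \
      -c ' neighbor 10.1.0.69 remote-as 65515' \
      -c ' neighbor 192.168.100.2 remote-as 65002' \
      -c ' address-family ipv4 unicast' \
      -c '  neighbor 10.1.0.68 route-map TO-ROUTE-SERVER out' \
      -c '  neighbor 10.1.0.69 route-map TO-ROUTE-SERVER out'
```

The `set as-path exclude 65515` line implements the "AS override" behavior described in the note above. On a standby NVA, adding `set as-path prepend 65001 65001` to the same route-map would make the active NVA's routes preferred, implementing the active/standby pattern described earlier.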
## ExpressRoute
-This design can be combined with ExpressRoute or VPN gateways. The following diagram shows a topology including an ExpressRoute gateway connected to an on-premises network in one of the Azure regions. In this case, an overlay network over the ExpressRoute circuit will help to simplify the network, so that on-premises prefixes will only appear in Azure as advertised by the NVA (and not from the ExpressRoute gateway).
+The multi-region design can be combined with ExpressRoute or VPN gateways. The following diagram shows a topology including an ExpressRoute gateway connected to an on-premises network in one of the Azure regions. In this case, an overlay network over the ExpressRoute circuit helps to simplify the network, so that on-premises prefixes only appear in Azure as advertised by the NVA (and not from the ExpressRoute gateway).
## Design without overlays The cross-region tunnels between the NVAs are required because otherwise a routing loop is formed. For example, looking at the NVA in region 1: -- The NVA in region 1 learns the prefixes from region 2, and advertises them to the Route Server in region 1-- The Route Server in region 1 will inject routes for those prefixes in all subnets in the local region, with the NVA in region 1 as the next hop
+- The NVA in region 1 learns the prefixes from region 2, and advertises them to the Route Server in region 1.
+- The Route Server in region 1 will inject routes for those prefixes in all subnets in region 1, with the NVA in region 1 as the next hop.
- For traffic from region 1 to region 2, when the NVA in region 1 sends traffic to the other NVA, its own subnet also inherits the routes programmed by the Route Server, which point to itself (the NVA). So the packet is returned to the NVA, and a routing loop appears.
-If UDRs are an option, you could disable BGP route propagation in the NVAs' subnets, and configure static UDRs instead of an overlay, so that Azure can route traffic to the remote spokes.
+If UDRs are an option, you could disable BGP route propagation in the NVAs' subnets, and configure static UDRs instead of an overlay, so that Azure can route traffic to the remote spoke VNets.
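A rough Azure CLI sketch of this UDR alternative, with placeholder names; `10.2.0.0/16` stands in for the remote region's spoke range and `10.2.0.4` for the remote NVA:

```bash
# Route table for the NVA subnet with BGP route propagation disabled.
az network route-table create \
  --resource-group rg-hub1 \
  --name rt-nva-region1 \
  --disable-bgp-route-propagation true

# Static route that sends remote-region spoke traffic to the remote NVA.
az network route-table route create \
  --resource-group rg-hub1 \
  --route-table-name rt-nva-region1 \
  --name to-region2-spokes \
  --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.2.0.4

# Associate the route table with the NVA subnet.
az network vnet subnet update \
  --resource-group rg-hub1 \
  --vnet-name vnet-hub1 \
  --name snet-nva \
  --route-table rt-nva-region1
```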
## Next steps
-* [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md)
-* [Learn how Azure Route Server works with a network virtual appliance](resource-manager-template-samples.md)
+* Learn more about [Azure Route Server support for ExpressRoute and Azure VPN](expressroute-vpn-support.md)
+* Learn how to [Configure peering between Azure Route Server and network virtual appliance](tutorial-configure-route-server-with-quagga.md)
sap High Availability Guide Windows Azure Files Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-windows-azure-files-smb.md
# High availability for SAP NetWeaver on Azure VMs on Windows with Azure Files Premium SMB for SAP applications ## Introduction
-Azure Files Premium SMB is now fully supported by Microsoft and SAP. **SWPM 1.0 SP32** and **SWPM 2.0 SP09** and above support Azure Files Premium SMB storage. There are special requirements for sizing Azure Files Premium SMB shares. This documentation contains specific recommendations on how to distribute workload on Azure Files Premium SMB, how to adequately size Azure Files Premium SMB and the minimum installation requirements for Azure Files Premium SMB.
+Azure Files Premium SMB is now fully supported by Microsoft and SAP. **SWPM 1.0 SP32** and **SWPM 2.0 SP09** and higher support Azure Files Premium SMB storage. There are special requirements for sizing Azure Files Premium SMB shares. This documentation contains specific recommendations on how to distribute workload on Azure Files Premium SMB, how to size Azure Files Premium SMB adequately, and the minimum installation requirements for Azure Files Premium SMB.
-High Availability SAP solutions need a highly available File share for hosting **sapmnt**, **trans** and **interface directories**. Azure Files Premium SMB is a simple Azure PaaS solution for Shared File Systems for SAP on Windows environments. Azure Files Premium SMB can be used in conjunction with Availability Sets and Availability Zones. Azure Files Premium SMB can also be used for Disaster Recovery scenarios to another region.
+High Availability SAP solutions need a highly available File share for hosting **sapmnt**, **trans** and **interface directories**. Azure Files Premium SMB is a simple Azure PaaS solution for Shared File Systems for SAP on Windows environments. Azure Files Premium SMB can be used with Availability Sets and Availability Zones. Azure Files Premium SMB can also be used for Disaster Recovery scenarios to another region.
> [!NOTE] > Clustering SAP ASCS/SCS instances by using a file share is supported for SAP systems with SAP Kernel 7.22 (and later). For details see SAP note [2698948](https://launchpad.support.sap.com/#/notes/2698948)
High Availability SAP solutions need a highly available File share for hosting *
## Sizing & Distribution of Azure Files Premium SMB for SAP Systems The following points should be evaluated when planning the deployment of Azure Files Premium SMB:
-* The File share name **sapmnt** can be created once per storage account. It is possible to create additional SIDs as directories on the same **/sapmnt** share such as - **/sapmnt/\<SID1\>** and **/sapmnt/\<SID2\>**
-* Choose an appropriate size, IOPS and throughput. A suggested size for the share is 256GB per SID. The maximum size for a Share is 5120 GB
-* Azure Files Premium SMB may not perform optimally for very large **sapmnt** shares with more than 1-2 million files per storage account.  Customers that have millions of batch jobs creating millions of job log files should regularly reorganize them as per [SAP Note 16083][16083] If needed, old job logs may be moved/archived to another Azure Files Premium SMB.  If **sapmnt** is expected to be very large then alternate options (such as Azure ANF) should be considered.
-* It is recommended to use a Private Network Endpoint
-* Avoid consolidating too many SIDs to a single storage account and its file share.
-* As general guidance no more than between 2 to 4 non-prod SIDs can be consolidated together.
-* Do not consolidate the entire Development, QAS + Production landscape to one storage account and/or file share.ΓÇ» Failure of the share will lead to downtime of the entire SAP landscape.
-* It is not advisable to consolidate the **sapmnt** and **transport directories** on the same storage account except for very small systems. During the installation of the SAP PAS Instance, SAPInst will request a Transport Hostname. The FQDN of a different storage account should be entered <storage_account>.file.core.windows.net.
-* Do not consolidate the file system used for Interfaces onto the same storage account as **/sapmnt/\<SID>**
+* The File share name **sapmnt** can be created once per storage account. It's possible to create additional SIDs as directories on the same **/sapmnt** share, such as **/sapmnt/\<SID1\>** and **/sapmnt/\<SID2\>**.
+* Choose an appropriate size, IOPS, and throughput. A suggested size for the share is 256 GB per SID. The maximum size for a share is 5,120 GB.
+* Azure Files Premium SMB may not perform well for very large **sapmnt** shares with more than 1-2 million files per storage account. Customers that have millions of batch jobs creating millions of job log files should reorganize them regularly, as described in [SAP Note 16083][16083]. If needed, old job logs may be moved or archived to another Azure Files Premium SMB share. If **sapmnt** is expected to be very large, then other options (such as Azure NetApp Files) should be considered.
+* It's recommended to use a private network endpoint.
+* Avoid putting too many SIDs into a single storage account and its file share.
+* As general guidance, no more than 2 to 4 nonprod SIDs should be put together.
+* Don't put the entire Development, QAS, and Production landscape in one storage account and/or file share. Failure of the share leads to downtime of the entire SAP landscape.
+* It's recommended to put the **sapmnt** and **transport directories** on different storage accounts, except for smaller systems. During the installation of the SAP PAS instance, SAPInst requests a Transport Hostname. Enter the FQDN of a different storage account: <storage_account>.file.core.windows.net.
+* Don't put the file system used for Interfaces onto the same storage account as **/sapmnt/\<SID>**.
* The SAP users/groups must be added to the 'sapmnt' share and should have this permission set in the Azure portal: **Storage File Data SMB Share Elevated Contributor**.
-There are important reasons for separating **Transport**, **Interface** and **sapmnt** onto separate storage accounts. Distributing these components onto separate storage accounts improves throughput, resiliency and simplifies the performance analysis. If many SIDs and other file systems are consolidated onto a single Azure Files Storage account and the storage account performance is poor due to hitting the throughput limits, it is extremely difficult to identify which SID or application is causing the problem.
+There are important reasons for splitting **Transport**, **Interface**, and **sapmnt** among separate storage accounts. Distributing these components among separate storage accounts improves throughput and resiliency, and simplifies performance analysis. If many SIDs and other file systems are placed in a single Azure Files storage account and the storage account performance is poor because the throughput limits are reached, it's very difficult to identify which SID or application is causing the problem.
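The following Azure CLI sketch illustrates this guidance with hypothetical names: separate premium FileStorage accounts for **sapmnt** and **transport**, and a 256 GB **sapmnt** share:

```bash
# All names are placeholders. Premium SMB file shares require a
# FileStorage account with a premium SKU.
az storage account create --resource-group rg-sap --name sapmntsid1 \
  --location westeurope --sku Premium_LRS --kind FileStorage

# A separate account for the transport directory keeps its throughput
# isolated from sapmnt.
az storage account create --resource-group rg-sap --name saptrans01 \
  --location westeurope --sku Premium_LRS --kind FileStorage

# 256 GB provisioned size per SID, as suggested above.
az storage share-rm create --resource-group rg-sap \
  --storage-account sapmntsid1 --name sapmnt --quota 256 \
  --enabled-protocols SMB
```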
## Planning > [!IMPORTANT]
-> Installation of SAP High Availability Systems on Azure Files Premium SMB with Active Directory Integration requires cross team collaboration. It is highly recommended that the Basis Team, the Active Directory Team and the Azure Team work together to complete these tasks:
+> The installation of SAP High Availability Systems on Azure Files Premium SMB with Active Directory Integration requires cross-team collaboration. It is highly recommended that the Basis Team, the Active Directory Team, and the Azure Team work together to achieve these tasks:
> * Azure Team – setup and configuration of Storage Account, Script Execution and AD Directory Synchronization. * Active Directory Team – Creation of User Accounts and Groups.
-* Basis Team ΓÇô Run SWPM and set ACLs (if required).
+* Basis Team – Run SWPM and set ACLs (if necessary).
Prerequisites for the installation of SAP NetWeaver High Availability Systems on Azure Files Premium SMB with Active Directory Integration. * The SAP servers must be joined to an Active Directory Domain. * The Active Directory Domain containing the SAP servers must be replicated to Azure Active Directory using Azure AD Connect.
-* It is highly recommended that there is at least one Active Directory Domain controller in the Azure landscape to avoid traversing the Express Route to contact Domain Controllers on-premises.
-* The Azure support team should review the Azure Files SMB with [Active Directory Integration](../../storage/files/storage-files-identity-auth-active-directory-enable.md#videos) documentation. *The video shows additional configuration options which were modified (DNS) and skipped (DFS-N) for simplification reasons.* Nevertheless these are valid configuration options.
+* It is highly recommended that there is at least one Active Directory Domain Controller in the Azure landscape, to avoid traversing ExpressRoute to contact Domain Controllers on-premises.
+* The Azure support team should review the Azure Files SMB with [Active Directory Integration](../../storage/files/storage-files-identity-auth-active-directory-enable.md#videos) documentation. *The video shows extra configuration options, which were modified (DNS) and skipped (DFS-N) for simplification reasons.* Nevertheless, these are valid configuration options.
* The user executing the Azure Files PowerShell script must have permission to create objects in Active Directory. * **SWPM version 1.0 SP32 and SWPM 2.0 SP09 or higher are required. SAPInst patch must be 749.0.91 or higher.** * An up-to-date release of PowerShell should be installed on the Windows Server where the script is executed.
Prerequisites for the installation of SAP NetWeaver High Availability Systems on
> ![Azure portal Screenshot for create storage account - Step 2](media/virtual-machines-shared-sap-high-availability-guide/create-storage-account-2.png)Azure portal Screenshot for create storage account - Step 2
- In this screen the default settings should be ok.
+ In this screen, the default settings should be ok.
![Azure portal Screenshot for create storage account - Step 3](media/virtual-machines-shared-sap-high-availability-guide/create-sa-4.png)Azure portal Screenshot for create storage account - Step 3 In this step the decision to use a private endpoint is made. 1. **Select Private Network Endpoint** for the storage account.
- If required add a DNS A-Record into Windows DNS for the **<storage_account_name>.file.core.windows.net** (this may need to be in a new DNS Zone). Discuss this topic with the DNS administrator. The new zone should not update outside of an organization.
+ If necessary, add a DNS A-record in Windows DNS for **<storage_account_name>.file.core.windows.net** (this record may need to be in a new DNS zone). Discuss this topic with the DNS administrator. The new zone should not be updated from outside the organization.
![private-endpoint-creation](media/virtual-machines-shared-sap-high-availability-guide/create-sa-3.png)Azure portal screenshot for the private endpoint definition. ![private-endpoint-dns-1](media/virtual-machines-shared-sap-high-availability-guide/pe-dns-1.png)DNS server screenshot for private endpoint DNS definition.
- 1. Create the **sapmnt** File share with an appropriate size. The suggested size is 256GB which delivers 650 IOPS, 75 MB/sec Egress and 50 MB/sec Ingress.
+ 1. Create the **sapmnt** File share with an appropriate size. The suggested size is 256 GB, which delivers 650 IOPS, 75 MB/sec Egress and 50 MB/sec Ingress.
![create-storage-account-5](media/virtual-machines-shared-sap-high-availability-guide/create-sa-5.png)Azure portal screenshot for the SMB share definition. 1. Download the [Azure Files GitHub](../../storage/files/storage-files-identity-ad-ds-enable.md#download-azfileshybrid-module) content and execute the [script](../../storage/files/storage-files-identity-ad-ds-enable.md#run-join-azstorageaccount).
- This script will create either a Computer Account or Service Account in Active Directory. The user running the script must have the following properties:
+ This script creates either a Computer Account or Service Account in Active Directory. The user running the script must have the following properties:
* The user running the script must have permission to create objects in the Active Directory Domain containing the SAP servers. Typically, a domain administrator account is used such as **SAPCONT_ADMIN@SAPCONTOSO.local**
- * Before executing the script confirm that this Active Directory Domain user account is synchronized with Azure Active Directory (AAD). An example of this would be to open the Azure portal and navigate to AAD users and check that the user **SAPCONT_ADMIN@SAPCONTOSO.local** exists and verify the AAD user account **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com**.
- * Grant the **Contributor RBAC** role to this Azure Active Directory user account for the Resource Group containing the storage account holding the File Share. In this example the user **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com** is granted **Contributor Role** to the respective Resource Group
+ * Before executing the script confirm that this Active Directory Domain user account is synchronized with Azure Active Directory (Azure AD). An example of this would be to open the Azure portal and navigate to Azure AD users and check that the user **SAPCONT_ADMIN@SAPCONTOSO.local** exists and verify the Azure AD user account **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com**.
+ * Grant the **Contributor RBAC** role to this Azure Active Directory user account for the Resource Group containing the storage account holding the File Share. In this example, the user **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com** is granted the **Contributor** role on the respective Resource Group.
* The script should be executed while logged on to a Windows server using an Active Directory Domain user account with the permissions specified above; in this example, the account **SAPCONT_ADMIN@SAPCONTOSO.local** would be used. >[!IMPORTANT] > When executing the PowerShell script command **Connect-AzAccount**, it is highly recommended to enter the Azure Active Directory user account that corresponds and maps to the Active Directory Domain user account used to log on to a Windows Server; in this example, this is the user account **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com** >
- In this example scenario the Active Directory Administrator would logon to the Windows Server as **SAPCONT_ADMIN@SAPCONTOSO.local** and when using the **PS command Connect-AzAccount** connect as user **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com**. Ideally the Active Directory Administrator and the Azure Administrator should work together on this task.
- ![powershell-script-1](media/virtual-machines-shared-sap-high-availability-guide/ps-script-1.png)Screenshot of the PowerShell script creating local AD account.
+ In this example scenario, the Active Directory Administrator would log on to the Windows Server as **SAPCONT_ADMIN@SAPCONTOSO.local** and, when using the **PS command Connect-AzAccount**, connect as user **SAPCONT_ADMIN@SAPCONTOSO.onmicrosoft.com**. Ideally, the Active Directory Administrator and the Azure Administrator should work together on this task.
+ ![Screenshot of the PowerShell script creating local AD account.](media/virtual-machines-shared-sap-high-availability-guide/ps-script-1.png)
![smb-configured-screenshot](media/virtual-machines-shared-sap-high-availability-guide/smb-config-1.png)Azure portal screenshot after successful PowerShell script execution.
Prerequisites for the installation of SAP NetWeaver High Availability Systems on
> [!IMPORTANT] > This step must be completed before the SAPInst installation or it will be difficult or impossible to change ACLs after SAPInst has created directories and files on the File Share >
- ![ACL Properties](media/virtual-machines-shared-sap-high-availability-guide/smb-share-properties-1.png)Windows Explorer screenshot of the assigned user rights.
The following screenshots show how to add computer accounts by selecting the Object Types -> Computers ![Windows Server screenshot of adding the cluster name to the local AD](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-2.png)Windows Server screenshot of adding the cluster name to the local AD.
Prerequisites for the installation of SAP NetWeaver High Availability Systems on
![Screenshot of adding AD computer account - Step 3](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-4.png)Screenshot of adding AD computer account - Step 3 ![Screenshot of computer account access properties](media/virtual-machines-shared-sap-high-availability-guide/add-computer-account-5.png)Screenshot of computer account access properties.
- 8. If required move the Computer Account created for Azure Files to an Active Directory Container that does not have account expiry. The name of the Computer Account will be the short name of the storage account
+ 8. If necessary, move the Computer Account created for Azure Files to an Active Directory container that doesn't have account expiry. The name of the Computer Account is the short name of the storage account.
> [!IMPORTANT]
Prerequisites for the installation of SAP NetWeaver High Availability Systems on
4. Basis administrator should complete the tasks below: 1. [Install the Windows Cluster on ASCS/ERS Nodes and add the Cloud witness](sap-high-availability-infrastructure-wsfc-shared-disk.md#0d67f090-7928-43e0-8772-5ccbf8f59aab)
- 2. The first Cluster Node installation will ask for the Azure Files SMB storage account name. Enter the FQDN <storage_account_name>.file.core.windows.net. If SAPInst does not accept >13 characters then the SWPM version is too old.
+ 2. The first Cluster Node installation asks for the Azure Files SMB storage account name. Enter the FQDN <storage_account_name>.file.core.windows.net. If SAPInst doesn't accept more than 13 characters, the SWPM version is too old.
3. [Modify the SAP Profile of the ASCS/SCS Instance](sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761) 4. [Update the Probe Port for the SAP \<SID> role in WSFC](sap-high-availability-installation-wsfc-shared-disk.md#10822f4f-32e7-4871-b63a-9b86c76ce761) 5. Continue with SWPM Installation for the second ASCS/ERS Node. SWPM will only require path of profile directory. Enter the full UNC path to the profile directory. 6. Enter the UNC profile path for the DB and PAS/AAS Installation.
- 7. PAS Installation will ask for Transport hostname. Provide the FQDN of a separate storage account name for transport directory.
+ 7. The PAS installation asks for the Transport hostname. Provide the FQDN of a separate storage account for the transport directory.
8. Verify the ACLs on the SID and trans directory. ## Disaster Recovery setup
The PowerShell scripts downloaded in step 3.c contain a debug script to conduct
```powershell Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose ```
-![Powershell-script-output](media/virtual-machines-shared-sap-high-availability-guide/smb-share-validation-2.png)PowerShell screenshot of the debug script output.
+![Screenshot of PowerShell script to validate configuration.](media/virtual-machines-shared-sap-high-availability-guide/smb-share-validation-2.png)PowerShell screenshot of the debug script output.
-![Powershell-script-technical-info](media/virtual-machines-shared-sap-high-availability-guide/smb-share-validation-1.png)The following screen shows the technical information to validate a successful domain join.
+![Screenshot of PowerShell script to retrieve technical info.](media/virtual-machines-shared-sap-high-availability-guide/smb-share-validation-1.png)The following screen shows the technical information to validate a successful domain join.
## Useful links & resources * SAP Note [2273806][2273806] SAP support for storage or file system related solutions
Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGrou
* [Installation of an (A)SCS Instance on a Failover Cluster](https://www.sap.com/documents/2017/07/f453332f-c97c-0010-82c7-eda71af511fa.html) [16083]:https://launchpad.support.sap.com/#/notes/16083
-[2273806]:https://launchpad.support.sap.com/#/notes/2273806
+[2273806]:https://launchpad.support.sap.com/#/notes/2273806
+
+## Optional configurations
+
+The following diagrams show multiple SAP instances on Azure VMs running Microsoft Windows Failover Cluster to reduce the total number of VMs.
+
+This can either be local SAP Application Servers on an SAP ASCS/SCS cluster or an SAP ASCS/SCS Cluster Role on Microsoft SQL Server Always On nodes.
+
+> [!IMPORTANT]
+> Installing a local SAP Application Server on a SQL Server Always On node is not supported.
+>
+
+Both SAP ASCS/SCS and the Microsoft SQL Server database are single points of failure (SPOF). To protect these SPOFs in a Windows environment, Azure Files SMB is used.
+
+Although the resource consumption of the SAP ASCS/SCS is fairly small, reducing the memory configuration of either SQL Server or the SAP Application Server by 2 GB is recommended.
+
+### <a name="5121771a-7618-4f36-ae14-ccf9ee5f2031"></a>SAP Application Servers on WSFC nodes using Azure Files SMB
+
+![Screenshot of HA setup with additional application servers.](media/virtual-machines-shared-sap-high-availability-guide/ha-azure-files-smb-as.png)SAP Application Servers installed locally.
+
+> [!NOTE]
+> The picture shows the use of additional local disks. These disks are optional for customers who won't install application software on the OS drive (C:\).
+>
+
+### <a name="01541cf2-0a03-48e3-971e-e03575fa7b4f"></a> SAP ASCS/SCS on SQL Server Always On nodes using Azure Files SMB
+
+> [!IMPORTANT]
+> Using Azure Files SMB for any SQL Server volume is not supported.
+>
+
+![Diagram of SAP ASCS/SCS on SQL Server Always On nodes using Azure Files SMB.](media/virtual-machines-shared-sap-high-availability-guide/ha-sql-ascs-azure-files-smb.png)SAP ASCS/SCS on SQL Server Always On nodes using Azure Files SMB
+
+> [!NOTE]
+> The picture shows the use of additional local disks. These disks are optional for customers who won't install application software on the OS drive (C:\).
+>
sap Lama Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/lama-installation.md
Also read the [SAP Help Portal for SAP LaMa](https://help.sap.com/viewer/p/SAP_L
* Make sure to enable *Automatic Mountpoint Creation* in Setup -> Settings -> Engine. If SAP LaMa mounts volumes using the SAP Adaptive Extensions on a virtual machine, the mount point must exist if this setting is not enabled.
-* Use separate subnet and don't use dynamic IP addresses to prevent IP address "stealing" when deploying new VMs and SAP instances are unprepared
- If you use dynamic IP address allocation in the subnet, which is also used by SAP LaMa, preparing an SAP system with SAP LaMa might fail. If an SAP system is unprepared, the IP addresses are not reserved and might get allocated to other virtual machines.
+* Use a separate subnet and don't use dynamic IP addresses, to prevent IP address "stealing" when new VMs are deployed and SAP instances are unprepared
+ - If you use dynamic IP address allocation in the subnet, which is also used by SAP LaMa, preparing an SAP system with SAP LaMa might fail. If an SAP system is unprepared, the IP addresses are not reserved and might get allocated to other virtual machines.
* If you sign in to managed hosts, make sure to not block file systems from being unmounted
- If you sign in to a Linux virtual machines and change the working directory to a directory in a mount point, for example /usr/sap/AH1/ASCS00/exe, the volume cannot be unmounted and a relocate or unprepare fails.
+ - If you sign in to a Linux virtual machine and change the working directory to a directory in a mount point, for example /usr/sap/AH1/ASCS00/exe, the volume cannot be unmounted and a relocate or unprepare fails.
* Make sure to disable CLOUD_NETCONFIG_MANAGE on SUSE SLES Linux virtual machines. For more details, see [SUSE KB 7023633](https://www.suse.com/support/kb/doc/?id=7023633).
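Following SUSE KB 7023633, the setting can be disabled per network interface; a minimal sketch, assuming the interface name eth0:

```bash
# Disable cloud-netconfig management of eth0 so that secondary IP
# configurations added for SAP virtual hostnames aren't removed again.
# The interface name is an assumption; adjust it to your VM.
sudo sed -i 's/^CLOUD_NETCONFIG_MANAGE=.*/CLOUD_NETCONFIG_MANAGE="no"/' \
  /etc/sysconfig/network/ifcfg-eth0
```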
The Azure connector can use a Service Principal to authorize against Microsoft A
1. Write down the Value. It is used as the password for the Service Principal. 1. Write down the Application ID. It is used as the username of the Service Principal.
-The Service Principal does not have permissions to access your Azure resources by default.
+By default the Service Principal doesn't have permissions to access your Azure resources.
Assign the Contributor role to the Service Principal at resource group scope for all resource groups that contain SAP systems that should be managed by SAP LaMa. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
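A hedged Azure CLI example of such a role assignment, with placeholder IDs:

```bash
# Assign Contributor to the Service Principal (by application ID) on one
# resource group; repeat for every resource group that contains SAP
# systems managed by SAP LaMa.
az role assignment create \
  --assignee "<application-id>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```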
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-b
To be able to use a Managed Identity, your SAP LaMa instance has to run on an Azure VM that has a system or user assigned identity. For more information about Managed Identities, read [What is managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md) and [Configure managed identities for Azure resources on a VM using the Azure portal](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md).
-The Managed Identity does not have permissions to access your Azure resources by default.
+By default the Managed Identity doesn't have permissions to access your Azure resources.
Assign the Contributor role to the Virtual Machine identity at resource group scope for all resource groups that contain SAP systems that should be managed by SAP LaMa. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-In your SAP LaMa Azure connector configuration, select 'Use Managed Identity' to enable the usage of the Managed Identity. If you want to use a system assigned identity, make sure to leave the User Name field empty. If you want to use a user assigned identity, enter the user assigned identity Id into the User Name field.
+In your SAP LaMa Azure connector configuration, select 'Use Managed Identity' to enable the use of the Managed Identity. If you want to use a system assigned identity, make sure to leave the User Name field empty. If you want to use a user assigned identity, enter the user assigned identity ID into the User Name field.
### Create a new connector in SAP LaMa
Open the SAP LaMa website and navigate to Infrastructure. Go to tab Cloud Manage
* Azure Active Directory Tenant ID: ID of the Active Directory tenant * Proxy host: Hostname of the proxy if SAP LaMa needs a proxy to connect to the internet * Proxy port: TCP port of the proxy
-* Change Storage Type to save costs: Enable this setting if the Azure Adapter should change the storage type of the Managed Disks to save costs when the disks are not in use. For data disks that are referenced in an SAP instance configuration, the adapter will change the disk type to Standard Storage during an instance unprepare and back to the original storage type during an instance prepare. If you stop a virtual machine in SAP LaMa, the adapter will change the storage type of all attached disks, including the OS disk to Standard Storage. If you start a virtual machine in SAP LaMa, the adapter will change the storage type back to the original storage type.
+* Change Storage Type to save costs: Enable this setting if the Azure Adapter should change the storage type of the Managed Disks to save costs when the disks are not in use. For data disks that are referenced in an SAP instance configuration, the adapter changes the disk type to Standard Storage during an instance unprepare and back to the original storage type during an instance prepare. If you stop a virtual machine in SAP LaMa, the adapter changes the storage type of all attached disks, including the OS disk, to Standard Storage. If you start a virtual machine in SAP LaMa, the adapter changes the storage type back to the original storage type.
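For illustration, the equivalent manual operation looks roughly like the following Azure CLI sketch. The disk name is hypothetical, and a managed disk's SKU can only be changed while the disk isn't in active use, for example while the VM is deallocated:

```bash
# Downgrade a data disk to Standard storage while the instance is
# unprepared or the VM is stopped ...
az disk update --resource-group rg-sap --name ah1-ascs-data0 --sku Standard_LRS

# ... and change it back to Premium before the instance is used again.
az disk update --resource-group rg-sap --name ah1-ascs-data0 --sku Premium_LRS
```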
Click on Test Configuration to validate your input. You should see
SAP LaMa communicates with the virtual machine using the SAP Host Agent. If you
#### Manual deployment of a Linux Virtual Machine
-Create a new virtual machine with one of the supported operation systems listed in SAP Note [2343511]. Add additional IP configurations for the SAP instances. Each instance needs at least on IP address and must be installed using a virtual hostname.
+Create a new virtual machine with one of the supported operating systems listed in SAP Note [2343511]. Add more IP configurations for the SAP instances. Each instance needs at least one IP address and must be installed using a virtual hostname.
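For example, a secondary IP configuration for an instance's virtual hostname might be added as follows; all names and the address are placeholders:

```bash
# Add a secondary IP configuration to the VM's NIC; the SAP instance is
# then installed against the virtual hostname that resolves to this address.
az network nic ip-config create \
  --resource-group rg-sap \
  --nic-name ah1-ascs-vm-nic \
  --name ipconfig-ascs \
  --private-ip-address 10.0.0.10
```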
-The SAP NetWeaver ASCS instance needs disks for /sapmnt/\<SAPSID>, /usr/sap/\<SAPSID>, /usr/sap/trans, and /usr/sap/\<sapsid>adm. The SAP NetWeaver application servers do not need additional disks. Everything related to the SAP instance must be stored on the ASCS and exported via NFS. Otherwise, it is currently not possible to add additional application servers using SAP LaMa.
+The SAP NetWeaver ASCS instance needs disks for /sapmnt/\<SAPSID>, /usr/sap/\<SAPSID>, /usr/sap/trans, and /usr/sap/\<sapsid>adm. The SAP NetWeaver application servers do not need more disks. Everything related to the SAP instance must be stored on the ASCS and exported via NFS. Otherwise, it is currently not possible to add more application servers using SAP LaMa.
![SAP NetWeaver ASCS on Linux](media/lama/sap-lama-ascs-app-linux.png) #### Manual deployment for SAP HANA
-Create a new virtual machine with one of the supported operation systems for SAP HANA as listed in SAP Note [2343511]. Add one additional IP configuration for SAP HANA and one per HANA tenant.
+Create a new virtual machine with one of the supported operating systems for SAP HANA as listed in SAP Note [2343511]. Add one extra IP configuration for SAP HANA and one per HANA tenant.
SAP HANA needs disks for /hana/shared, /hana/backup, /hana/data, and /hana/log
SAP HANA needs disks for /hana/shared, /hana/backup, /hana/data, and /hana/log
#### Manual deployment for Oracle Database on Linux
-Create a new virtual machine with one of the supported operation systems for Oracle databases as listed in SAP Note [2343511]. Add one additional IP configuration for the Oracle database.
+Create a new virtual machine with one of the supported operating systems for Oracle databases as listed in SAP Note [2343511]. Add one extra IP configuration for the Oracle database.
The Oracle database needs disks for /oracle, /home/oraod1, and /home/oracle
The Oracle database needs disks for /oracle, /home/oraod1, and /home/oracle
#### Manual deployment for Microsoft SQL Server
-Create a new virtual machine with one of the supported operation systems for Microsoft SQL Server as listed in SAP Note [2343511]. Add one additional IP configuration for the SQL Server instance.
+Create a new virtual machine with one of the supported operating systems for Microsoft SQL Server as listed in SAP Note [2343511]. Add one extra IP configuration for the SQL Server instance.
The SQL Server database server needs disks for the database data and log files and disks for c:\usr\sap.
The templates have the following parameters:
* osType: The type of the operating system you want to deploy.
-* dbtype: The type of the database. This parameter is used to determine how many additional IP configurations need to be added and how the disk layout should look like.
+* dbtype: The type of the database. This parameter is used to determine how many extra IP configurations need to be added and what the disk layout should look like.
* sapSystemSize: The size of the SAP System you want to deploy. It is used to determine the virtual machine instance type and size.
The templates have the following parameters:
* sapsysGid: The Linux group ID of the sapsys group. Not required for Windows.
-* _artifactsLocation: The base URI, where artifacts required by this template are located. When the template is deployed using the accompanying scripts, a private location in the subscription will be used and this value will be automatically generated. Only needed if you do not deploy the template from GitHub.
+* _artifactsLocation: The base URI, where artifacts required by this template are located. When the template is deployed using the accompanying scripts, a private location in the subscription is used and this value is automatically generated. Only needed if you do not deploy the template from GitHub.
-* _artifactsLocationSasToken: The sasToken required to access _artifactsLocation. When the template is deployed using the accompanying scripts, a sasToken will be automatically generated. Only needed if you do not deploy the template from GitHub.
+* _artifactsLocationSasToken: The sasToken required to access _artifactsLocation. When the template is deployed using the accompanying scripts, a sasToken is automatically generated. Only needed if you do not deploy the template from GitHub.
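A hedged sketch of deploying such a template with these parameters through the Azure CLI; the template URI and all parameter values are placeholders:

```bash
# Deploy one of the templates into a resource group. See the template's
# documentation for the allowed values of each parameter.
az deployment group create \
  --resource-group rg-sap-ah1 \
  --template-uri "<template-uri>" \
  --parameters osType="<os-type>" dbtype="<db-type>" \
               sapSystemSize="<sap-system-size>" sapsysGid=<sapsys-gid>
```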
### SAP HANA
-In the examples below, we assume that you install SAP HANA with system ID HN1 and the SAP NetWeaver system with system ID AH1. The virtual hostnames are hn1-db for the HANA instance, ah1-db for the HANA tenant used by the SAP NetWeaver system, ah1-ascs for the SAP NetWeaver ASCS and ah1-di-0 for the first SAP NetWeaver application server.
+In the following examples, we assume that you install SAP HANA with system ID HN1 and the SAP NetWeaver system with system ID AH1. The virtual hostnames are hn1-db for the HANA instance, ah1-db for the HANA tenant used by the SAP NetWeaver system, ah1-ascs for the SAP NetWeaver ASCS and ah1-di-0 for the first SAP NetWeaver application server.
#### Install SAP NetWeaver ASCS for SAP HANA using Azure Managed Disks
Add the following profile parameter to the SAP Host Agent profile, which is loca
acosprep/nfs_paths=/home/ah1adm,/usr/sap/trans,/sapmnt/AH1,/usr/sap/AH1 ```
-#### Install SAP NetWeaver ASCS for SAP HANA on Azure NetAppFiles (ANF) BETA
-
-> [!NOTE]
-> This functionality is nor GA yet. For more information refer to SAP Note [2815988] (only visible to preview customers).
-Open an SAP incident on component BC-VCM-LVM-HYPERV and request to join the LaMa storage adapter for Azure NetApp Files preview
+#### Install SAP NetWeaver ASCS for SAP HANA on Azure NetAppFiles (ANF)
ANF provides NFS for Azure. In the context of SAP LaMa this simplifies the creation of the ABAP Central Services (ASCS) instances and the subsequent installation of application servers. Previously the ASCS instance had to act as NFS server as well and the parameter acosprep/nfs_paths had to be added to the host_profile of the SAP Hostagent.
-#### ANF is currently available in these regions:
-
-Australia East, Central US, East US, East US 2, North Europe, South Central US, West Europe and West US 2.
- #### Network Requirements
-ANF requires a delegated subnet which must be part of the same VNET as the SAP servers. Here's an example for such a configuration.
+ANF requires a delegated subnet, which must be part of the same VNET as the SAP servers. Here's an example of such a configuration.
This screen shows the creation of the VNET and the first subnet: ![SAP LaMa create virtual network for Azure ANF ](media/lama/sap-lama-createvn-50.png)
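The delegated subnet can also be created with the Azure CLI; a sketch with placeholder names and an illustrative address range:

```bash
# The ANF subnet must be delegated to Microsoft.NetApp/volumes.
az network vnet subnet create \
  --resource-group rg-sap \
  --vnet-name vnet-sap \
  --name snet-anf \
  --address-prefixes 10.0.2.0/28 \
  --delegations "Microsoft.NetApp/volumes"
```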
Now a NetApp account needs to be created within the Azure portal:
![SAP LaMa NetApp account created ](media/lama/sap-lama-netappaccount.png)
-Within the NetApp account the capacity pool specifies the size and type of disks for each pool:
+Within the NetApp account, the capacity pool specifies the size and type of disks for each pool:
![SAP LaMa create NetApp capacity pool ](media/lama/sap-lama-capacitypool-50.png) ![SAP LaMa NetApp capacity pool created ](media/lama/sap-lama-capacitypool-list.png)
-The NFS volumes can now be defined. Since there will be volumes for multiple systems in one pool, a self-explaining naming scheme should be chosen. Adding the SID helps to group related volumes together. For the ASCS and the AS instance the following mounts are needed: */sapmnt/\<SID\>*, */usr/sap/\<SID\>*, and */home/\<sid\>adm*. Optionally, */usr/sap/trans* is needed for the central transport directory, which is at least used by all systems of one landscape.
-
-> [!NOTE]
-> During the BETA phase the name of the volumes must be unique within the subscription.
+The NFS volumes can now be defined. Since there might be volumes for multiple systems in one pool, choose a self-explanatory naming scheme. Adding the SID helps to group related volumes together. For the ASCS and the AS instance, the following mounts are needed: */sapmnt/\<SID\>*, */usr/sap/\<SID\>*, and */home/\<sid\>adm*. Optionally, */usr/sap/trans* is needed for the central transport directory, which is shared by at least all systems of one landscape.
![SAP LaMa create a volume 1 ](media/lama/sap-lama-createvolume-80.png)
These steps need to be repeated for the other volumes as well.
![SAP LaMa list of created volumes ](media/lama/sap-lama-volumes.png)
-Now these volumes need to be mounted to the systems where the initial installation with the SAP SWPM will be performed.
+Now these volumes need to be mounted to the systems where the initial installation with the SAP SWPM is performed.
-First the mount points need to be created. In this case the SID is AN1 so the following commands need to be executed:
+First the mount points need to be created. In this case, the SID is AN1, so the following commands need to be executed:
```bash
mkdir -p /home/an1adm
mkdir -p /sapmnt/AN1
mkdir -p /usr/sap/AN1
mkdir -p /usr/sap/trans
```
-Next the ANF volumes will be mounted with the following commands:
+Next the ANF volumes are mounted with the following commands:
```bash
# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-home-sidadm /home/an1adm
# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/an1-usr-sap-sid /usr/sap/AN1
# sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 9.9.9.132:/global-usr-sap-trans /usr/sap/trans
```
-The mount commands can also be derived from the portal. The local mount points need to adjusted.
+The mount commands can also be looked up in the portal. The local mount points need to be adjusted.
Use the `df -h` command to verify that the volumes are mounted.
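For example, a quick check of the mounts from the AN1 example (a sketch; adjust the paths to your SID):
```bash
# Verify that the four ANF volumes from the AN1 example are mounted
df -h /home/an1adm /sapmnt/AN1 /usr/sap/AN1 /usr/sap/trans
```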
Now the installation with SWPM must be performed.
The same steps must be performed for at least one AS instance.
-After the successful installation the system must be discovered within SAP LaMa.
+After the successful installation, the system must be discovered within SAP LaMa.
The mount points should look like this for the ASCS and the AS instance:
Make sure to back up the SYSTEMDB and all tenant databases before you try to do
### Microsoft SQL Server
-In the examples below, we assume that you install the SAP NetWeaver system with system ID AS1. The virtual hostnames are as1-db for the SQL Server instance used by the SAP NetWeaver system, as1-ascs for the SAP NetWeaver ASCS and as1-di-0 for the first SAP NetWeaver application server.
+In the following examples, we assume that you install the SAP NetWeaver system with system ID AS1. The virtual hostnames are as1-db for the SQL Server instance used by the SAP NetWeaver system, as1-ascs for the SAP NetWeaver ASCS and as1-di-0 for the first SAP NetWeaver application server.
#### Install SAP NetWeaver ASCS for SQL Server
Use *as1-di-0* for the *PAS Instance Host Name* in dialog *Primary Application S
* Error when full copy is not enabled in Storage Step
* An error occurred when reporting a context attribute message for path IStorageCopyData.storageVolumeCopyList:1 and field targetStorageSystemId
* Solution
- Ignore Warnings in step and try again. This issue will be fixed in a new support package/patch of SAP LaMa.
+ Ignore the warnings in this step and try again. This issue is fixed in a new support package/patch of SAP LaMa.
### Errors and Warnings during Relocate
sap Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-one-region.md
In this scenario, data that's replicated to the HANA instance in the second VM i
### SAP HANA system replication with automatic failover
-In the standard and most common availability configuration within one Azure region, two Azure VMs running SLES Linux have a failover cluster defined. The SLES Linux cluster is based on the [Pacemaker](./high-availability-guide-suse-pacemaker.md) framework, in conjunction with a [fencing device](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device).
+In the standard and most common availability configuration within one Azure region, two Azure VMs running Linux with HA packages have a failover cluster defined. The HA Linux cluster is based on the `Pacemaker` framework ([SLES](./high-availability-guide-suse-pacemaker.md) or [RHEL](./high-availability-guide-rhel-pacemaker.md)), in conjunction with a fencing device ([SLES](./high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) or [RHEL](./high-availability-guide-rhel-pacemaker.md#create-fencing-device)).
From an SAP HANA perspective, the replication mode that's used is synced and an automatic failover is configured. In the second VM, the SAP HANA instance acts as a hot standby node. The standby node receives a synchronous stream of change records from the primary SAP HANA instance. As transactions are committed by the application at the HANA primary node, the primary HANA node waits to confirm the commit to the application until the secondary SAP HANA node confirms that it received the commit record. SAP HANA offers two synchronous replication modes. For details and for a description of differences between these two synchronous replication modes, see the SAP article [Replication modes for SAP HANA system replication](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c039a1a5b8824ecfa754b55e0caffc01.html).
For step-by-step guidance on setting up these configurations in Azure, see:
For more information about SAP HANA availability across Azure regions, see: -- [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md)
+- [SAP HANA availability across Azure regions](./sap-hana-availability-across-regions.md)
sentinel Fortinet Fortiweb Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fortinet-fortiweb-web-application-firewall.md
The [fortiweb](https://www.fortinet.com/products/web-application-firewall/fortiw
**Top 10 Threats**

```kusto
-Fortiweb
+CommonSecurityLog
| where isnotempty(EventType)
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 03/27/2023 Last updated : 03/31/2023
This article summarizes support and prerequisites for disaster recovery of Azure VMs from one Azure region to another, using the [Azure Site Recovery](site-recovery-overview.md) service.

## Deployment method support

**Deployment** | **Support**
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
16.04 LTS | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 16.04 LTS kernels supported in this release. |
16.04 LTS | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 16.04 LTS kernels supported in this release. |
|||
-18.04 LTS |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.4.0-137-generic <br> 5.4.0-1101-azure |
+18.04 LTS |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.4.0-137-generic <br> 5.4.0-1101-azure <br> 4.15.0-1161-azure <br> 4.15.0-204-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 4.15.0-206-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic |
18.04 LTS |[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)| 4.15.0-196-generic <br> 4.15.0-1157-azure <br> 5.4.0-1098-azure <br> 4.15.0-1158-azure <br> 4.15.0-1159-azure <br> 4.15.0-201-generic <br> 4.15.0-202-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic | 18.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic</br>4.15.0-1153-azure </br>4.15.0-194-generic </br>5.4.0-1094-azure </br>5.4.0-128-generic </br>5.4.0-131-generic | 18.04 LTS |[9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-191-generic </br> 4.15.0-192-generic </br>5.4.0-1089-azure </br>5.4.0-1090-azure </br>5.4.0-124-generic| 18.04 LTS |[9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-1146-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 4.15.0-189-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic | |||
-20.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.4.0-1101-azure |
+20.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.4.0-1101-azure <br> 5.15.0-1033-azure <br> 5.15.0-60-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 5.15.0-1034-azure <br> 5.15.0-67-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic |
20.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 5.4.0-1095-azure <br> 5.15.0-1023-azure <br> 5.4.0-1098-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic <br> 5.4.0-137-generic | 20.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br> 5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-22-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-40-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-51-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic </br> 5.15.0-1021-azure </br> 5.15.0-1022-azure </br> 5.15.0-50-generic </br> 5.15.0-52-generic </br> 5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic | 20.04 LTS |[9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic </br> 5.4.0-124-generic </br> 5.4.0-125-generic | 20.04 LTS |[9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 20.04 LTS kernels supported in this release. | |||
-22.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.15.0-1003-azure <br> 5.15.0-1005-azure <br> 5.15.0-1007-azure <br> 5.15.0-1008-azure <br> 5.15.0-1010-azure <br> 5.15.0-1012-azure <br> 5.15.0-1013-azure <br> 5.15.0-1014-azure <br> 5.15.0-1017-azure <br> 5.15.0-1019-azure <br> 5.15.0-1020-azure <br> 5.15.0-1021-azure <br> 5.15.0-1022-azure <br> 5.15.0-1023-azure <br> 5.15.0-1024-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-25-generic <br> 5.15.0-27-generic <br> 5.15.0-30-generic <br> 5.15.0-33-generic <br> 5.15.0-35-generic <br> 5.15.0-37-generic <br> 5.15.0-39-generic <br> 5.15.0-40-generic <br> 5.15.0-41-generic <br> 5.15.0-43-generic <br> 5.15.0-46-generic <br> 5.15.0-47-generic <br> 5.15.0-48-generic <br> 5.15.0-50-generic <br> 5.15.0-52-generic <br> 5.15.0-53-generic <br> 5.15.0-56-generic <br> 5.15.0-57-generic <br> 5.15.0-58-generic |
+22.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.15.0-1003-azure <br> 5.15.0-1005-azure <br> 5.15.0-1007-azure <br> 5.15.0-1008-azure <br> 5.15.0-1010-azure <br> 5.15.0-1012-azure <br> 5.15.0-1013-azure <br> 5.15.0-1014-azure <br> 5.15.0-1017-azure <br> 5.15.0-1019-azure <br> 5.15.0-1020-azure <br> 5.15.0-1021-azure <br> 5.15.0-1022-azure <br> 5.15.0-1023-azure <br> 5.15.0-1024-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-25-generic <br> 5.15.0-27-generic <br> 5.15.0-30-generic <br> 5.15.0-33-generic <br> 5.15.0-35-generic <br> 5.15.0-37-generic <br> 5.15.0-39-generic <br> 5.15.0-40-generic <br> 5.15.0-41-generic <br> 5.15.0-43-generic <br> 5.15.0-46-generic <br> 5.15.0-47-generic <br> 5.15.0-48-generic <br> 5.15.0-50-generic <br> 5.15.0-52-generic <br> 5.15.0-53-generic <br> 5.15.0-56-generic <br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.15.0-1033-azure <br> 5.15.0-60-generic <br> 5.15.0-1034-azure <br> 5.15.0-67-generic |
> [!NOTE] > To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch) follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario.
Debian 9.1 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azu
Debian 9.1 | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 9.1 kernels supported in this release. |
Debian 9.1 | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.9.0-19-amd64 |
|||
-Debian 10 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new Debian 10 kernels supported in this release. |
+Debian 10 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.10.0-0.deb10.21-amd64 <br> 5.10.0-0.deb10.21-cloud-amd64 |
Debian 10 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 4.19.0-23-amd64 <br> 4.19.0-23-cloud-amd64 <br> 5.10.0-0.deb10.20-amd64 <br> 5.10.0-0.deb10.20-cloud-amd64 |
Debian 10 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 4.19.0-22-amd64 </br> 4.19.0-22-cloud-amd64 </br> 5.10.0-0.deb10.19-amd64 </br> 5.10.0-0.deb10.19-cloud-amd64 |
Debian 10 | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 10 kernels supported in this release.
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://suppo
**Release** | **Mobility service version** | **Kernel version** |
| | | |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.31-azure:4 |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.31-azure:4 <br> 5.14.21-150400.14.34-azure:4 |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.12-azure:4 <br> 5.14.21-150400.14.10-azure:4 <br> 5.14.21-150400.14.13-azure:4 <br> 5.14.21-150400.14.16-azure:4 <br> 5.14.21-150400.14.7-azure:4 <br> 5.3.18-150300.38.83-azure:3 <br> 5.14.21-150400.14.21-azure:4 <br> 5.14.21-150400.14.28-azure:4 <br> 5.3.18-150300.38.88-azure:3 |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.80-azure |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.75-azure:3 |
site-recovery Vmware Azure Install Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-mobility-service.md
Previously updated : 12/20/2022 Last updated : 03/31/2023
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Previously updated : 02/07/2023 Last updated : 03/31/2023 # Manage the Mobility agent
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
Previously updated : 12/13/2022 Last updated : 03/31/2023
spring-apps Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/cost-management.md
+
+ Title: Manage costs for Azure Spring Apps
+description: Learn about how to manage costs in Azure Spring Apps.
+++ Last updated : 03/28/2023++++
+# Manage costs for Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard ✔️ Enterprise
+
+This article describes the cost-saving options and capabilities that Azure Spring Apps provides.
+
+## Monthly free grants
+
+The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog).
+
+## Start and stop instances
+
+If you have Azure Spring Apps instances that don't need to run continuously, you can save costs by reducing the number of running instances. For more information, see [Start or stop your Azure Spring Apps service instance](how-to-start-stop-service.md).
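+
+As a sketch, stopping and restarting an instance can also be done from the Azure CLI; the service and resource group names below are placeholders:
+
+```azurecli
+# Stop a service instance that doesn't need to run right now (hypothetical names)
+az spring stop --name my-spring-service --resource-group my-resource-group
+
+# Start it again when it's needed
+az spring start --name my-spring-service --resource-group my-resource-group
+```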
+
+## Standard consumption plan
+
+Unlike other pricing plans, the Standard consumption plan offers a pure consumption-based pricing model. You can dynamically add and remove resources based on the resource utilization, number of incoming HTTP requests, or by events. When running apps in a consumption plan, you're charged for active and idle usage of resources, and the number of requests. For more information, see the [Standard consumption plan](overview.md#standard-consumption-plan) section of [What is Azure Spring Apps?](overview.md)
+
+## Scale and autoscale
+
+You can manually scale computing capacities to accommodate a changing environment. For more information, see [Scale an application in Azure Spring Apps](how-to-scale-manual.md).
+
+Autoscale reduces operating costs by terminating redundant resources when they're no longer needed. For more information, see [Set up autoscale for applications](how-to-setup-autoscale.md).
+
+You can also set up autoscale rules for your applications in Azure Spring Apps Standard consumption plan. For more information, see [Quickstart: Set up autoscale for applications in Azure Spring Apps Standard consumption plan](quickstart-apps-autoscale-standard-consumption.md).
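+
+For example, a manual scale operation from the Azure CLI might look like the following sketch; the names are placeholders, and the allowed values depend on your pricing plan:
+
+```azurecli
+# Scale an app to 2 vCPUs, 4 Gi of memory, and 3 instances (hypothetical names)
+az spring app scale --name my-app --service my-spring-service --resource-group my-resource-group \
+    --cpu 2 --memory 4Gi --instance-count 3
+```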
+
+## Next steps
+
+[Quickstart: Provision an Azure Spring Apps Standard consumption plan service instance](quickstart-provision-standard-consumption-service-instance.md)
spring-apps Diagnostic Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/diagnostic-services.md
Using the diagnostics functionality of Azure Spring Apps, you can analyze logs a
Choose the log category and metric category you want to monitor.

> [!TIP]
-> Just want to stream your logs? Check out this [Azure CLI command](/cli/azure/spring/app#az-spring-cloud-app-logs)!
+> If you just want to stream your logs, you can use the Azure CLI command [az spring app logs](/cli/azure/spring/app#az-spring-app-logs).
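For example, a sketch with placeholder names:
```azurecli
# Stream an app's logs and follow new entries (hypothetical names)
az spring app logs --name my-app --service my-spring-service --resource-group my-resource-group --follow
```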
## Logs
spring-apps How To Outbound Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-outbound-public-ip.md
To find the outbound public IP addresses currently used by your service instance
You can find the same information by running the following command in the Cloud Shell:

```azurecli
-az spring show --resource-group <group_name> --name <service_name> --query properties.networkProfile.outboundIps.publicIps --output tsv
+az spring show --resource-group <group_name> --name <service_name> --query properties.networkProfile.outboundIPs.publicIPs --output tsv
```

## Next steps
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
The owning user can change the permissions of the file to give themselves any RW
### Why do I sometimes see GUIDs in ACLs?
-A GUID is shown if the entry represents a user and that user doesn't exist in Azure AD anymore. Usually this happens when the user has left the company or if their account has been deleted in Azure AD. Additionally, service principals and security groups do not have a User Principal Name (UPN) to identify them and so they are represented by their OID attribute (a guid).
+A GUID is shown if the entry represents a user and that user doesn't exist in Azure AD anymore. Usually this happens when the user has left the company or their account has been deleted in Azure AD. Additionally, service principals and security groups don't have a User Principal Name (UPN) to identify them, so they're represented by their OID attribute (a GUID). To clean up the ACLs, manually delete these GUID entries.
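As a sketch, an orphaned entry can be removed with the Azure CLI; the account, container, path, and object ID below are placeholders, and the `--acl` value for a removal omits the permissions field (an assumption about the removal syntax):
```azurecli
# Recursively remove an orphaned ACL entry for a deleted identity (hypothetical names)
az storage fs access remove-recursive \
    --account-name mystorageaccount \
    --file-system myfilesystem \
    --path my-directory \
    --acl "user:<object-id>" \
    --auth-mode login
```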
### How do I set ACLs correctly for a service principal?
storage Elastic San Batch Create Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-batch-create-sample.md
Last updated 10/12/2022 -+ # Create multiple elastic SAN Preview volumes in a batch
storage Elastic San Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md
Last updated 02/22/2023 -+ # Delete an Elastic SAN Preview
storage Elastic San Expand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-expand.md
Last updated 02/22/2023 -+ # Increase the size of an Elastic SAN Preview
storage Files Smb Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-smb-protocol.md
description: Learn about file shares hosted in Azure Files using the Server Mess
Previously updated : 05/09/2022 Last updated : 03/31/2023
Azure Files exposes settings that let you toggle the SMB protocol to be more com
Azure Files exposes the following settings:
- **SMB versions**: Which versions of SMB are allowed. Supported protocol versions are SMB 3.1.1, SMB 3.0, and SMB 2.1. By default, all SMB versions are allowed, although SMB 2.1 is disallowed if "require secure transfer" is enabled, because SMB 2.1 does not support encryption in transit.
-- **Authentication methods**: Which SMB authentication methods are allowed. Supported authentication methods are NTLMv2 and Kerberos. By default, all authentication methods are allowed. Removing NTLMv2 disallows using the storage account key to mount the Azure file share.
+- **Authentication methods**: Which SMB authentication methods are allowed. Supported authentication methods are NTLMv2 (storage account key only) and Kerberos. By default, all authentication methods are allowed. Removing NTLMv2 disallows using the storage account key to mount the Azure file share. Azure Files doesn't support using NTLM authentication for domain credentials.
- **Kerberos ticket encryption**: Which encryption algorithms are allowed. Supported encryption algorithms are AES-256 (recommended) and RC4-HMAC.
- **SMB channel encryption**: Which SMB channel encryption algorithms are allowed. Supported encryption algorithms are AES-256-GCM, AES-128-GCM, and AES-128-CCM.
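As a sketch, these settings can be changed with `az storage account file-service-properties update`; the names are placeholders, and the parameter names and value formats shown are assumptions:
```azurecli
# Restrict SMB to version 3.1.1 with Kerberos authentication and AES-256 encryption (hypothetical names)
az storage account file-service-properties update \
    --resource-group my-resource-group \
    --account-name mystorageaccount \
    --versions SMB3.1.1 \
    --auth-methods Kerberos \
    --kerb-ticket-encryption AES-256 \
    --channel-encryption AES-256-GCM
```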
storage Files Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-performance.md
An I/O depth of greater than 1 isn't supported on older versions of CentOS Linux
#### Workaround

-- Upgrade to CentOS Linux 8.2+ or RHEL 8.2+.
+- Upgrade to CentOS Linux 8.6+ or RHEL 8.6+.
- Change to Ubuntu.
- For other Linux VMs, upgrade the kernel to 5.0 or later.
storage Files Troubleshoot Smb Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-authentication.md
description: Troubleshoot problems using identity-based authentication to connec
Previously updated : 03/28/2023 Last updated : 03/31/2023
If you're connecting to a storage account via a private endpoint/private link us
#### Cause
-This is because the SMB client has tried to use Kerberos but failed, so it falls back to using NTLM authentication, which Azure Files doesn't support. The client can't get a Kerberos ticket to the storage account because the private link FQDN isn't registered to any existing Azure AD application.
+This is because the SMB client has tried to use Kerberos but failed, so it falls back to using NTLM authentication, and Azure Files doesn't support using NTLM authentication for domain credentials. The client can't get a Kerberos ticket to the storage account because the private link FQDN isn't registered to any existing Azure AD application.
#### Solution
storage Storage Files Active Directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md
The following diagram depicts on-premises AD DS authentication to Azure file sha
:::image type="content" source="media/storage-files-active-directory-overview/files-ad-ds-auth-diagram.png" alt-text="Diagram that depicts on-premises AD DS authentication to Azure file shares over SMB.":::
-To learn how to enable AD DS authentication, first read [Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md) and then see [Enable on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md).
+To learn how to enable AD DS authentication, first read [Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares](storage-files-identity-auth-active-directory-enable.md) and then see [Enable AD DS authentication for Azure file shares](storage-files-identity-ad-ds-enable.md).
### Azure AD DS
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
Azure file share scale targets apply at the file share level.
<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names. ### File scale targets
-File scale targets apply to individual files stored in Azure file shares. Soft limits and throttling can occur beyond these limits.
+File scale targets apply to individual files stored in Azure file shares.
| Attribute | Files in standard file shares | Files in premium file shares |
|-|-|-|
File scale targets apply to individual files stored in Azure file shares. Soft l
| Maximum egress for a file | 60 MiB/sec | 300 MiB/sec (Up to 1 GiB/s with SMB Multichannel)<sup>2</sup> |
| Maximum concurrent handles per file, directory, and share root<sup>3</sup> | 2,000 handles | 2,000 handles |
-<sup>1 Applies to read and write I/Os (typically smaller I/O sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower.</sup>
+<sup>1 Applies to read and write I/Os (typically smaller I/O sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower. These are soft limits, and throttling can occur beyond these limits.</sup>
<sup>2 Subject to machine network limits, available bandwidth, I/O sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./storage-files-smb-multichannel-performance.md).</sup>
storage Storage Blobs Container Calculate Billing Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-billing-size-powershell.md
ms.devlang: powershell+ Last updated 12/29/2020
Following is the breakdown:
- For more information about the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/). -- You can find additional Storage PowerShell script samples in [PowerShell samples for Azure Storage](../blobs/storage-samples-blobs-powershell.md).
+- You can find additional Storage PowerShell script samples in [PowerShell samples for Azure Storage](../blobs/storage-samples-blobs-powershell.md).
storsimple Storsimple 8000 Automation Azurerm Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-automation-azurerm-scripts.md
Title: Use AzureRM PowerShell scripts to manage StorSimple devices
description: Learn how to use Azure Resource Manager SDK-based scripts to manage your StorSimple 8000 series device. + Last updated 08/18/2022
PS C:\Scripts\StorSimpleSDKTools>
## Next steps
-[Use StorSimple Device Manager service to manage your StorSimple device](storsimple-8000-manager-service-administration.md).
+[Use StorSimple Device Manager service to manage your StorSimple device](storsimple-8000-manager-service-administration.md).
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md
ms.assetid: 7144d218-db21-4495-88fb-e3b24bbe45d1
NA+ Last updated 08/22/2022 - # StorSimple 8000 series: a hybrid cloud storage solution
storsimple Storsimple Update Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-update-device.md
ms.assetid: 786059f5-2a38-4105-941d-0860ce4ac515
NA+ Last updated 01/23/2018 - # Update your StorSimple 8000 Series device > [!NOTE]
If a device is reset to factory settings, then all the updates are lost. After t
## Next steps * Learn more about [using Windows PowerShell for StorSimple to administer your StorSimple device](./storsimple-8000-windows-powershell-administration.md).
-* Learn more about [using the StorSimple Manager service to administer your StorSimple device](./storsimple-8000-manager-service-administration.md).
+* Learn more about [using the StorSimple Manager service to administer your StorSimple device](./storsimple-8000-manager-service-administration.md).
stream-analytics Blob Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-output-managed-identity.md
Last updated 09/16/2022-+ # Use Managed Identity to authenticate your Azure Stream Analytics job to Azure Blob Storage
stream-analytics Move Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/move-cluster.md
-+ Last updated 02/20/2022 # Move Azure Stream Analytics cluster using Azure PowerShell
For more information on how to deploy a template using Azure PowerShell, see [De
## Next steps - [Quickstart: Create an Azure Stream Analytics cluster](create-cluster.md).-- [Quickstart: Create a Stream Analytics job by using Azure portal](stream-analytics-quick-create-portal.md).
+- [Quickstart: Create a Stream Analytics job by using Azure portal](stream-analytics-quick-create-portal.md).
stream-analytics Powerbi Output Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/powerbi-output-managed-identity.md
Title: Use Managed Identity to authenticate your Azure Stream Analytics job to Power BI output description: This article describes how to use managed identities to authenticate your Azure Stream Analytics job to Power BI output. +
stream-analytics Quick Create Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-resource-manager.md
-+ Last updated 05/28/2020
stream-analytics Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-bicep.md
description: This quickstart shows how to use Bicep to create an Azure Stream An
-+ Last updated 05/17/2022
stream-analytics Resource Manager Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/resource-manager-export.md
+ Last updated 12/12/2022
synapse-analytics Data Explorer Ingest Event Hub Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-explorer/ingest-data/data-explorer-ingest-event-hub-resource-manager.md
+ # Create an Event Hub data connection for Azure Synapse Data Explorer by using Azure Resource Manager template (Preview)
synapse-analytics Quickstart Deployment Template Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-deployment-template-workspaces.md
-+ Last updated 02/04/2022
To learn more about Azure Synapse Analytics and Azure Resource Manager,
- [Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md) Next, you can [create SQL pools](quickstart-create-sql-pool-studio.md) or [create Apache Spark pools](quickstart-create-apache-spark-pool-studio.md) to start analyzing and exploring your data.-
synapse-analytics Gateway Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/gateway-ip-addresses.md
+
+ Title: Gateway IP addresses
+description: Learn which Gateway IP addresses and Gateway IP address subnets are used in each region.
+++ Last updated : 03/23/2023 +++++
+# Gateway IP addresses
+
+The following table lists the individual Gateway IP addresses and the Gateway IP address subnets per region.
+
+Periodically, we retire Gateways that use older hardware and migrate the traffic to new Gateways, following the process outlined in [Azure SQL Database traffic migration to newer Gateways](https://learn.microsoft.com/azure/azure-sql/database/gateway-migration?view=azuresql&tabs=in-progress-ip). To avoid being impacted by this activity in a region, we strongly encourage customers to use the **Gateway IP address subnets**.
+
+> [!IMPORTANT]
+> - Logins for SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse can land on **any of the Gateways in a region**. For consistent connectivity to SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse, allow network traffic to and from **ALL** Gateway IP addresses and Gateway IP address subnets for the region.
+> - Use the Gateway IP addresses in this section if you're using a Proxy connection policy to connect to Azure SQL Database. If you're using the Redirect connection policy, refer to the [Azure IP Ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519) for a list of your region's IP addresses to allow.
+
+| Region name | Gateway IP addresses | Gateway IP address subnets |
+| | | |
+| Australia Central | 20.36.105.0, 20.36.104.6, 20.36.104.7 | 20.36.105.32/29 |
+| Australia Central 2 | 20.36.113.0, 20.36.112.6 | 20.36.113.32/29 |
+| Australia East | 13.75.149.87, 40.79.161.1, 13.70.112.9 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 |
+| Australia Southeast | 191.239.192.109, 13.73.109.251, 13.77.48.10, 13.77.49.32 | 13.77.49.32/29 |
+| Brazil South | 191.233.200.14, 191.234.144.16, 191.234.152.3 | 191.233.200.32/29, 191.234.144.32/29 |
+| Canada Central | 40.85.224.249, 52.246.152.0, 20.38.144.1 | 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29 |
+| Canada East | 40.86.226.166, 52.242.30.154, 40.69.105.9 , 40.69.105.10 | 40.69.105.32/29|
+| Central US | 13.67.215.62, 52.182.137.15, 104.208.21.1, 13.89.169.20 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29 |
+| China East | 139.219.130.35 | 52.130.112.136/29 |
+| China East 2 | 40.73.82.1 | 52.130.120.88/29 |
+| China North | 139.219.15.17 | 52.130.128.88/29 |
+| China North 2 | 40.73.50.0 | 52.130.40.64/29 |
+| East Asia | 52.175.33.150, 13.75.32.4, 13.75.32.14, 20.205.77.200, 20.205.83.224 | 13.75.32.192/29, 13.75.33.192/29 |
+| East US | 40.121.158.30, 40.79.153.12, 40.78.225.32 | 20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29 |
+| East US 2 | 40.79.84.180, 52.177.185.181, 52.167.104.0, 191.239.224.107, 104.208.150.3, 40.70.144.193 | 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29 |
+| France Central | 40.79.137.0, 40.79.129.1, 40.79.137.8, 40.79.145.12 | 40.79.136.32/29, 40.79.144.32/29 |
+| France South | 40.79.177.0, 40.79.177.10 ,40.79.177.12 | 40.79.176.40/29, 40.79.177.32/29 |
+| Germany West Central | 51.116.240.0, 51.116.248.0, 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29 |
+| Central India | 104.211.96.159, 104.211.86.30 , 104.211.86.31, 40.80.48.32, 20.192.96.32 | 104.211.86.32/29, 20.192.96.32/29 |
+| South India | 104.211.224.146 | 40.78.192.32/29, 40.78.193.32/29 |
+| West India | 104.211.160.80, 104.211.144.4 | 104.211.144.32/29, 104.211.145.32/29 |
+| Japan East | 13.78.61.196, 40.79.184.8, 13.78.106.224, 40.79.192.5, 13.78.104.32, 40.79.184.32 | 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
+| Japan West | 104.214.148.156, 40.74.100.192, 40.74.97.10 | 40.74.96.32/29 |
+| Korea Central | 52.231.32.42, 52.231.17.22 ,52.231.17.23, 20.44.24.32, 20.194.64.33 | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 |
+| Korea South | 52.231.200.86, 52.231.151.96 | |
+| North Central US | 23.96.178.199, 23.98.55.75, 52.162.104.33, 52.162.105.9 | 52.162.105.192/29 |
+| North Europe | 40.113.93.91, 52.138.224.1, 13.74.104.113 | 13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 |
+| Norway East | 51.120.96.0, 51.120.96.33, 51.120.104.32, 51.120.208.32 | 51.120.96.32/29 |
+| Norway West | 51.120.216.0 | 51.120.217.32/29 |
+| South Africa North | 102.133.152.0, 102.133.120.2, 102.133.152.32 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29|
+| South Africa West | 102.133.24.0 | 102.133.25.32/29 |
+| South Central US | 13.66.62.124, 104.214.16.32, 20.45.121.1, 20.49.88.1 | 20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29 |
+| South East Asia | 104.43.15.0, 40.78.232.3, 13.67.16.193 | 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29|
+| Switzerland North | 51.107.56.0, 51.107.57.0 | 51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27 |
+| Switzerland West | 51.107.152.0, 51.107.153.0 | 51.107.153.32/29 |
+| UAE Central | 20.37.72.64 | 20.37.72.96/29, 20.37.73.96/29 |
+| UAE North | 65.52.248.0 | 40.120.72.32/29, 65.52.248.32/29 |
+| UK South | 51.140.184.11, 51.105.64.0, 51.140.144.36, 51.105.72.32 | 51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29 |
+| UK West | 51.141.8.11, 51.140.208.96, 51.140.208.97 | 51.140.208.96/29, 51.140.209.32/29 |
+| West Central US | 13.78.145.25, 13.78.248.43, 13.71.193.32, 13.71.193.33 | 13.71.193.32/29 |
+| West Europe | 40.68.37.158, 104.40.168.105, 52.236.184.163 | 104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29 |
+| West US | 104.42.238.205, 13.86.216.196 | 13.86.217.224/29 |
+| West US 2 | 13.66.226.202, 40.78.240.8, 40.78.248.10 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29 |
+| West US 3 | 20.150.168.0, 20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
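+
+For example, to allow a region's Gateway IP address subnet through a server-level firewall, you can express the subnet as an address range. The following sketch uses hypothetical server and resource group names and the Australia Central subnet 20.36.105.32/29, which spans 20.36.105.32 through 20.36.105.39:
+
+```azurecli
+# Allow the Australia Central gateway subnet 20.36.105.32/29 (hypothetical names)
+az sql server firewall-rule create \
+    --resource-group my-resource-group \
+    --server my-sql-server \
+    --name AllowAustraliaCentralGateway \
+    --start-ip-address 20.36.105.32 \
+    --end-ip-address 20.36.105.39
+```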
synapse-analytics Apache Spark Azure Portal Add Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
Last updated 02/20/2023 -+ # Manage libraries for Apache Spark in Azure Synapse Analytics
synapse-analytics Manage Compute With Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/manage-compute-with-azure-functions.md
Last updated 04/27/2018 -+ # Use Azure Functions to manage compute resources for your dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
synapse-analytics Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-arm-template.md
Last updated 06/09/2020-+ # Quickstart: Create an Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) by using an ARM template
synapse-analytics Quickstart Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bicep.md
Last updated 05/20/2022-+ # Quickstart: Create an Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) using Bicep
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
There are some limitations that you might see in Delta Lake support in serverles
- Serverless SQL pools don't support time travel queries. Use Apache Spark pools in Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel).
- Serverless SQL pools don't support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Synapse Analytics to [update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
- You can't [store query results to storage in Delta Lake format](create-external-table-as-select.md) by using the CETAS command. The CETAS command supports only Parquet and CSV as the output formats.
+- Serverless SQL pools in Synapse Analytics are compatible with Delta reader version 1. The Delta features that require Delta readers with version 2 or higher (for example, [column mapping](https://github.com/delta-io/delta/blob/master/PROTOCOL.md#reader-requirements-for-column-mapping)) aren't supported in serverless SQL pools.
- Serverless SQL pools in Synapse Analytics don't support the datasets with the [BLOOM filter](/azure/databricks/optimizations/bloom-filters). The serverless SQL pool ignores the BLOOM filters.
- Delta Lake support isn't available in dedicated SQL pools. Make sure that you use serverless SQL pools to query Delta Lake files.
- For more information about known issues with serverless SQL pools, see [Azure Synapse Analytics known issues](../known-issues.md).
There are some limitations that you might see in Delta Lake support in serverles
The serverless SQL pool does not support querying Delta Lake tables with the [renamed columns](https://docs.delta.io/latest/delta-batch.html#rename-columns). Serverless SQL pool cannot read data from the renamed column.
+### The value of a column in the Delta table is NULL
+
+If you're using a Delta data set that requires Delta reader version 2 or higher, and it uses features that are unsupported in version 1 (for example, renaming columns, dropping columns, or column mapping), the values in the referenced columns might not be shown.
+
### JSON text isn't properly formatted
This error indicates that serverless SQL pool can't read the Delta Lake transaction log. You'll probably see the following error:
time-series-insights Time Series Insights Manage Resources Using Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-resources-using-azure-resource-manager-template.md
ms.devlang: csharp
Last updated 09/30/2020-+ # Create Azure Time Series Insights Gen 1 resources using Azure Resource Manager templates
traffic-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/cli-samples.md
Title: Azure CLI Samples for Traffic Manager| Microsoft Docs
+ Title: Azure CLI Samples for Traffic Manager
description: Learn about an Azure CLI script you can use to direct traffic across multiple regions for high application availability.-+ -+ Last updated 10/23/2018 --+ + # Azure CLI samples for Traffic Manager The following table includes links to bash scripts for Traffic Manager built using the Azure CLI.
The following table includes links to bash scripts for Traffic Manager built usi
|Title |Description | ||| |[Direct traffic across multiple regions for high application availability](./scripts/traffic-manager-cli-websites-high-availability.md) | Creates two app service plans, two web apps, a traffic manager profile, and two traffic manager endpoints. |
-| | |
traffic-manager Configure Multivalue Routing Method Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/configure-multivalue-routing-method-template.md
Last updated 04/28/2022-+ # Configure the Multivalue routing method using an ARM Template
traffic-manager Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/powershell-samples.md
Title: Azure PowerShell samples for Traffic Manager| Microsoft Docs
+ Title: Azure PowerShell samples for Traffic Manager
description: With this sample, use Azure PowerShell to deploy and configure Azure Traffic Manager. documentationcenter: traffic-manager -+ Last updated 10/23/2018 -+ + # Azure PowerShell samples for Traffic Manager The following table includes links to Traffic Manager scripts built using Azure PowerShell.
The following table includes links to Traffic Manager scripts built using Azure
|Title |Description | ||| |[Direct traffic across multiple regions for high application availability](./scripts/traffic-manager-powershell-websites-high-availability.md) | Creates two app service plans, two web apps, a traffic manager profile, and two traffic manager endpoints. |
-| | |
-
traffic-manager Quickstart Create Traffic Manager Profile Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-bicep.md
Last updated 02/19/2023 -+ # Quickstart: Create a Traffic Manager profile using Bicep
traffic-manager Quickstart Create Traffic Manager Profile Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-cli.md
Last updated 02/18/2023 -+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
traffic-manager Quickstart Create Traffic Manager Profile Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-powershell.md
Last updated 02/18/2023
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
traffic-manager Quickstart Create Traffic Manager Profile Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-template.md
Last updated 02/19/2023 -+ # Quickstart: Create a Traffic Manager profile using an ARM template
traffic-manager Quickstart Create Traffic Manager Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile.md
Last updated 02/18/2023
-+ #Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
traffic-manager Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-cli-websites-high-availability.md
documentationcenter: traffic-manager
tags: azure-infrastructure+ ms.assetid: ms.devlang: azurecli
traffic-manager Traffic Manager Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-FAQs.md
Title: Azure Traffic Manager - FAQs
-description: This article provides answers to frequently asked questions about Traffic Manager
+ Title: Azure Traffic Manager - FAQ
+description: This article provides answers to frequently asked questions about Traffic Manager.
-+ Last updated 11/30/2022
traffic-manager Traffic Manager Configure Geographic Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-geographic-routing-method.md
Last updated 10/15/2020 + # Tutorial: Configure the geographic traffic routing method using Traffic Manager
traffic-manager Traffic Manager Configure Multivalue Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-multivalue-routing-method.md
Title: Configure multivalue traffic routing - Azure Traffic Manager description: This article explains how to configure Traffic Manager to route traffic to A/AAAA endpoints. Last updated 09/10/2018 + # Configure MultiValue routing method in Traffic Manager
traffic-manager Traffic Manager Configure Performance Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-performance-routing-method.md
Title: Configure performance traffic routing method using Azure Traffic Manager
description: This article explains how to configure Traffic Manager to route traffic to the endpoint with lowest latency Last updated 03/20/2017 + # Configure the performance traffic routing method
traffic-manager Traffic Manager Configure Priority Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-priority-routing-method.md
Title: 'Tutorial: Configure priority traffic routing with Azure Traffic Manager' description: This tutorial explains how to configure the priority traffic routing method in Traffic Manager Last updated 10/16/2020 + # Tutorial: Configure priority traffic routing method in Traffic Manager
traffic-manager Traffic Manager Configure Subnet Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-subnet-routing-method.md
Title: Configure subnet traffic routing - Azure Traffic Manager description: This article explains how to configure Traffic Manager to route traffic from specific subnets. - Last updated 09/17/2018 + # Direct traffic to specific endpoints based on user subnet using Traffic Manager
traffic-manager Traffic Manager Configure Weighted Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-weighted-routing-method.md
Title: 'Tutorial: Configure weighted round-robin traffic routing with Azure Traffic Manager' description: This tutorial explains how to load balance traffic using a round-robin method in Traffic Manager Last updated 10/19/2020 + # Tutorial: Configure the weighted traffic routing method in Traffic Manager
traffic-manager Traffic Manager Create Rum Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-create-rum-visual-studio.md
description: Set up your mobile application developed using Visual Studio Mobile
documentationcenter: traffic-manager - ms.devlang: java Last updated 03/16/2018 -+ # How to send Real User Measurements to Traffic Manager with Visual Studio Mobile Center
traffic-manager Traffic Manager Create Rum Web Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-create-rum-web-pages.md
Last updated 04/06/2021 + # How to send Real User Measurements to Azure Traffic Manager using web pages
traffic-manager Traffic Manager Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-diagnostic-logs.md
Title: Enable resource logging in Azure Traffic Manager
description: Learn how to enable resource logging for your Traffic Manager profile and access the log files that are created as a result. - Last updated 01/25/2019 + # Enable resource logging in Azure Traffic Manager
traffic-manager Traffic Manager Geographic Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-geographic-regions.md
Title: Country/Region hierarchy used by geographic routing - Azure Traffic Manager
-description: This article lists Country/Region hierarchy used by Azure Traffic Manager Geographic routing type
+description: This article lists Country/Region hierarchy used by Azure Traffic Manager Geographic routing type.
-+ Last updated 03/22/2017 + # Country/Region hierarchy used by Azure Traffic Manager for geographic traffic routing method
traffic-manager Traffic Manager How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-how-it-works.md
Title: How Azure Traffic Manager works | Microsoft Docs
-description: This article will help you understand how Traffic Manager routes traffic for high performance and availability of your web applications
+ Title: How Azure Traffic Manager works
+description: This article will help you understand how Traffic Manager routes traffic for high performance and availability of your web applications.
-+ Last updated 02/27/2023 + # How Traffic Manager Works
traffic-manager Traffic Manager Load Balancing Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-load-balancing-azure.md
Title: Using load-balancing services in Azure | Microsoft Docs
+ Title: Using load-balancing services in Azure
description: 'This tutorial shows you how to create a scenario by using the Azure load-balancing portfolio: Traffic Manager, Application Gateway, and Load Balancer.' - Last updated 10/27/2016 + # Using load-balancing services in Azure
traffic-manager Traffic Manager Manage Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-manage-endpoints.md
Title: Manage endpoints in Azure Traffic Manager | Microsoft Docs
+ Title: Manage endpoints in Azure Traffic Manager
description: This article will help you add, remove, enable and disable endpoints from Azure Traffic Manager. Last updated 05/08/2017 + # Add, disable, enable, or delete endpoints
traffic-manager Traffic Manager Manage Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-manage-profiles.md
Title: Manage Azure Traffic Manager profiles | Microsoft Docs
+ Title: Manage Azure Traffic Manager profiles
description: This article helps you create, disable, enable, and delete an Azure Traffic Manager profile. Last updated 05/10/2017 + # Manage an Azure Traffic Manager profile
traffic-manager Traffic Manager Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-metrics-alerts.md
description: In this article, learn the metrics and alerts available for Traffic
-+ Last updated 06/11/2018 + # Traffic Manager metrics and alerts
traffic-manager Traffic Manager Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-monitoring.md
Title: Azure Traffic Manager endpoint monitoring
-description: This article can help you understand how Traffic Manager uses endpoint monitoring and automatic endpoint failover to help Azure customers deploy high-availability applications
+description: Learn how Traffic Manager uses endpoint monitoring and automatic endpoint failover to help Azure customers deploy high-availability applications.
-+ Previously updated : 11/30/2021 Last updated : 03/31/2023 + # Traffic Manager endpoint monitoring
Azure Traffic Manager includes built-in endpoint monitoring and automatic endpoi
To configure endpoint monitoring, you must specify the following settings on your Traffic Manager profile:
-* **Protocol**. Choose HTTP, HTTPS, or TCP as the protocol that Traffic Manager uses when probing your endpoint to check its health. HTTPS monitoring doesn't verify whether your TLS/SSL certificate is valid--it only checks that the certificate is present.
+* **Protocol**. Choose HTTP, HTTPS, or TCP as the protocol that Traffic Manager uses when probing your endpoint to check its health. HTTPS monitoring doesn't verify whether your TLS/SSL certificate is valid; it only checks that the certificate is present.
* **Port**. Choose the port used for the request.
-* **Path**. This configuration setting is valid only for the HTTP and HTTPS protocols, for which specifying the path setting is required. Providing this setting for the TCP monitoring protocol results in an error. For HTTP and HTTPS protocol, give the relative path and the name of the webpage or the file that the monitoring accesses. A forward slash (/) is a valid entry for the relative path. This value implies that the file is in the root directory (default).
-* **Custom header settings**. This configuration setting helps you add specific HTTP headers to the health checks that Traffic Manager sends to endpoints under a profile. The custom headers can be specified at a profile level to be applicable for all endpoints in that profile and / or at an endpoint level applicable only to that endpoint. You can use custom headers for health checks of endpoints in a multi-tenant environment. That way it can be routed correctly to their destination by specifying a host header. You can also use this setting by adding unique headers that can be used to identify Traffic Manager originated HTTP(S) requests and processes them differently. You can specify up to eight header:value pairs separated by a comma. For example, "header1:value1, header2:value2".
+* **Path**. This configuration setting is valid only for the HTTP and HTTPS protocols, for which specifying the path setting is required. Providing this setting for the TCP monitoring protocol results in an error. For the HTTP and HTTPS protocols, give the relative path and the name of the webpage or the file that the monitoring accesses. A forward slash `/` is a valid entry for the relative path. This value implies that the file is in the root directory (the default).
+* **Custom header settings**. This configuration setting helps you add specific HTTP headers to the health checks that Traffic Manager sends to endpoints under a profile. The custom headers can be specified at a profile level, applicable to all endpoints in that profile, and/or at an endpoint level, applicable only to that endpoint. You can use custom headers for health checks of endpoints in a multi-tenant environment. That way, requests can be routed correctly to their destination by specifying a host header. You can also use this setting to add unique headers that identify Traffic Manager originated HTTP(S) requests so that they can be processed differently. You can specify up to eight `header:value` pairs separated by a comma. For example, `header1:value1, header2:value2`.
- > NOTE: Using asterisk characters (\*) in custom `Host` headers is unsupported.
+> [!NOTE]
+> Using asterisk characters (\*) in custom `Host` headers is unsupported.
* **Expected status code ranges**. This setting allows you to specify multiple success code ranges in the format 200-299, 301-301. If these status codes are received as a response from an endpoint when a health check is done, Traffic Manager marks those endpoints as healthy. You can specify a maximum of eight status code ranges. This setting is applicable only to the HTTP and HTTPS protocols and to all endpoints. This setting is at the Traffic Manager profile level, and by default the value 200 is defined as the success status code.
-* **Probing interval**. This value specifies how often an endpoint is checked for its health from a Traffic Manager probing agent. You can specify two values here: 30 seconds (normal probing) and 10 seconds (fast probing). If no values are provided, the profile sets to a default value of 30 seconds. Visit the [Traffic Manager Pricing](https://azure.microsoft.com/pricing/details/traffic-manager) page to learn more about fast probing pricing.
+* **Probing interval**. This value specifies how often an endpoint is checked for its health from a Traffic Manager probing agent. You can specify two values here: 30 seconds (normal probing) and 10 seconds (fast probing). If no values are provided, the profile sets to a default value of 30 seconds. Visit the [Traffic Manager pricing](https://azure.microsoft.com/pricing/details/traffic-manager) page to learn more about fast probing pricing.
* **Tolerated number of failures**. This value specifies how many failures a Traffic Manager probing agent tolerates before marking that endpoint as unhealthy. Its value can range between 0 and 9. A value of 0 means a single monitoring failure can cause that endpoint to be marked as unhealthy. If no value is specified, it uses the default value of 3. * **Probe timeout**. This property specifies the amount of time the Traffic Manager probing agent should wait before considering a health probe check to an endpoint a failure. If the Probing Interval is set to 30 seconds, then you can set the Timeout value between 5 and 10 seconds. If no value is specified, it uses a default value of 10 seconds. If the Probing Interval is set to 10 seconds, then you can set the Timeout value between 5 and 9 seconds. If no Timeout value is specified, it uses a default value of 9 seconds.
- ![Traffic Manager endpoint monitoring](./media/traffic-manager-monitoring/endpoint-monitoring-settings.png)
+ :::image type="content" source="./media/traffic-manager-monitoring/endpoint-monitoring-settings-inline.png" alt-text="Screenshot showing Traffic Manager configuration in the Azure portal." lightbox="./media/traffic-manager-monitoring/endpoint-monitoring-settings-expanded.png":::
- **Figure: Traffic Manager endpoint monitoring**
+ **Figure: Traffic Manager endpoint monitoring**
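As a rough illustration of how these settings fit together, here's a minimal Azure PowerShell sketch that sets the monitoring values when creating a profile. It assumes the Az.TrafficManager module and an authenticated Azure session; the profile, resource group, and health-page names are placeholders.

```azurepowershell
# Create a profile whose endpoints are probed over HTTPS on port 443 against a
# custom health page, every 30 seconds, with up to 3 tolerated failures.
$tmProfile = New-AzTrafficManagerProfile `
    -Name 'myTmProfile' `
    -ResourceGroupName 'myResourceGroup' `
    -TrafficRoutingMethod 'Performance' `
    -RelativeDnsName 'mytmprofile' `
    -Ttl 30 `
    -MonitorProtocol 'HTTPS' `
    -MonitorPort 443 `
    -MonitorPath '/health.aspx' `
    -MonitorIntervalInSeconds 30 `
    -MonitorTimeoutInSeconds 10 `
    -MonitorToleratedNumberOfFailures 3
```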
## How endpoint monitoring works
-When the monitoring protocol is set as HTTP or HTTPS, the Traffic Manager probing agent makes a GET request to the endpoint using the protocol, port, and relative path given. An endpoint is considered healthy if probing agent receives a 200-OK response, or any of the responses configured in the **Expected status code ranges**. If the response is a different value or no response get received within the timeout period, the Traffic Manager probing agent reattempts according to the Tolerated Number of Failures setting. No reattempts are done if this setting is 0. The endpoint is marked unhealthy if the number of consecutive failures is higher than the Tolerated Number of Failures setting.
+When the monitoring protocol is set as HTTP or HTTPS, the Traffic Manager probing agent makes a GET request to the endpoint using the protocol, port, and relative path given. An endpoint is considered healthy if the probing agent receives a 200-OK response, or any of the responses configured in the **Expected status code ranges**. If the response is a different value, or no response is received within the timeout period, the Traffic Manager probing agent reattempts according to the **Tolerated number of failures** setting. No reattempts are done if this setting is 0. The endpoint is marked unhealthy if the number of consecutive failures is higher than the **Tolerated number of failures** setting.
-When the monitoring protocol is TCP, the Traffic Manager probing agent creates a TCP connection request using the port specified. If the endpoint responds to the request with a response to establish the connection, that health check is marked as a success. The Traffic Manager probing agent resets the TCP connection. In cases where the response is a different value or no response get received within the timeout period, the Traffic Manager probing agent reattempts according to the Tolerated Number of Failures setting. No reattempts are made if this setting is 0. If the number of consecutive failures is higher than the Tolerated Number of Failures setting, then that endpoint is marked unhealthy.
+When the monitoring protocol is TCP, the Traffic Manager probing agent creates a TCP connection request using the port specified. If the endpoint responds to the request with a response to establish the connection, that health check is marked as a success. The Traffic Manager probing agent resets the TCP connection. In cases where the response is a different value, or no response is received within the timeout period, the Traffic Manager probing agent reattempts according to the **Tolerated number of failures** setting. No reattempts are made if this setting is 0. If the number of consecutive failures is higher than the **Tolerated number of failures** setting, then that endpoint is marked unhealthy.
In all cases, Traffic Manager probes from multiple locations. The consecutive failure count determines what happens within each region. That's why endpoints receive health probes from Traffic Manager with a higher frequency than the setting used for **Probing interval**.
->[!NOTE]
->For HTTP or HTTPS monitoring protocol, a common practice on the endpoint side is to implement a custom page within your application - for example, /health.aspx. Using this path for monitoring, you can perform application-specific checks, such as checking performance counters or verifying database availability. Based on these custom checks, the page returns an appropriate HTTP status code.
+> [!NOTE]
+> For HTTP or HTTPS monitoring protocol, a common practice on the endpoint side is to implement a custom page within your application - for example, /health.aspx. Using this path for monitoring, you can perform application-specific checks, such as checking performance counters or verifying database availability. Based on these custom checks, the page returns an appropriate HTTP status code.
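To see roughly what the probing agent observes, you can issue the same GET request yourself. A quick sketch, where the hostname and health page are placeholders:

```azurepowershell
# Manually reproduce the HTTP(S) health probe; a non-success status code
# surfaces here as an error, just as it would count as a probe failure.
$response = Invoke-WebRequest -Uri 'https://contoso-web.azurewebsites.net/health.aspx' `
    -Method Get -TimeoutSec 10 -UseBasicParsing
$response.StatusCode   # 200, or a code in the configured expected ranges, counts as healthy
```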
All endpoints in a Traffic Manager profile share monitoring settings. If you need to use different monitoring settings for different endpoints, you can create [nested Traffic Manager profiles](traffic-manager-nested-profiles.md#example-5-per-endpoint-monitoring-settings).
You can enable and disable Traffic Manager profiles and endpoints. However, a ch
### Endpoint status
-You can enable or disable a specific endpoint. The underlying service, which might still be healthy, is unaffected. Changing the endpoint status controls the availability of the endpoint in the Traffic Manager profile. When an endpoint status is disabled, Traffic Manager doesn't check its health and the endpoint isn't included in a DNS response.
+You can enable or disable a specific endpoint. The underlying service, which might still be healthy, is unaffected. Changing the endpoint status controls the availability of the endpoint in the Traffic Manager profile. When an endpoint status is disabled, Traffic Manager doesn't check its health, and the endpoint isn't included in a DNS response.
### Profile status
-Using the profile status setting, you can enable or disable a specific profile. While endpoint status affects a single endpoint, profile status affects the entire profile, including all endpoints. When you disable a profile, the endpoints aren't checked for health and no endpoints are included in a DNS response. An [NXDOMAIN](https://tools.ietf.org/html/rfc2308) response code is returned for the DNS query.
+Using the profile status setting, you can enable or disable a specific profile. While endpoint status affects a single endpoint, profile status affects the entire profile, including all endpoints. When you disable a profile, the endpoints aren't checked for health, and no endpoints are included in a DNS response. An [NXDOMAIN](https://tools.ietf.org/html/rfc2308) response code is returned for the DNS query.
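Both switches can also be flipped outside the portal. A brief Azure PowerShell sketch, assuming an existing profile and endpoint with the placeholder names shown:

```azurepowershell
# Take one endpoint out of rotation; the underlying service keeps running.
Disable-AzTrafficManagerEndpoint -Name 'myEndpoint' -Type 'AzureEndpoints' `
    -ProfileName 'myTmProfile' -ResourceGroupName 'myResourceGroup' -Force

# Disable the whole profile; DNS queries then get an NXDOMAIN response.
Disable-AzTrafficManagerProfile -Name 'myTmProfile' -ResourceGroupName 'myResourceGroup' -Force
```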
### Endpoint monitor status
Endpoint monitor status is a Traffic Manager-generated value that shows the stat
| Disabled |Enabled |Inactive |The profile has been disabled. Although the endpoint status is Enabled, the profile status (Disabled) takes precedence. Endpoints in disabled profiles aren't monitored. An NXDOMAIN response code is returned for the DNS query. | | &lt;any&gt; |Disabled |Disabled |The endpoint has been disabled. Disabled endpoints aren't monitored. The endpoint isn't included in DNS responses, as such it doesn't receive traffic. | | Enabled |Enabled |Online |The endpoint is monitored and is healthy. It's included in DNS responses and can receive traffic. |
-| Enabled |Enabled |Degraded |Endpoint monitoring health checks are failing. The endpoint isn't included in DNS responses and doesn't receive traffic. <br>An exception is if all endpoints are degraded. In which case all of them are considered to be returned in the query response).</br>|
+| Enabled |Enabled |Degraded |Endpoint monitoring health checks are failing. The endpoint isn't included in DNS responses and doesn't receive traffic. <br>An exception is if all endpoints are degraded, in which case all of them are considered eligible to be returned in the query response. |
| Enabled |Enabled |CheckingEndpoint |The endpoint is monitored, but the results of the first probe haven't been received yet. CheckingEndpoint is a temporary state that usually occurs immediately after adding or enabling an endpoint in the profile. An endpoint in this state is included in DNS responses and can receive traffic. |
-| Enabled |Enabled |Stopped |The web app that the endpoint points to isn't running. Check the web app settings. This status can also happen if the endpoint is of type nested endpoint and the child profile get disabled or is inactive. <br>An endpoint with a Stopped status isn't monitored. It isn't included in DNS responses and doesn't receive traffic. An exception is if all endpoints are degraded. In which case all of them will be considered to be returned in the query response.</br>|
+| Enabled |Enabled |Stopped |The web app that the endpoint points to isn't running. Check the web app settings. This status can also happen if the endpoint is of type nested endpoint and the child profile gets disabled or is inactive. <br>An endpoint with a Stopped status isn't monitored. It isn't included in DNS responses and doesn't receive traffic. An exception is if all endpoints are degraded, in which case all of them are considered eligible to be returned in the query response. |
-For details about how endpoint monitor status is calculated for nested endpoints, see [nested Traffic Manager profiles](traffic-manager-nested-profiles.md).
+For details about how endpoint monitor status is calculated for nested endpoints, see [Nested Traffic Manager profiles](traffic-manager-nested-profiles.md).
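To read these values programmatically, something like the following sketch can be used; the property name on the endpoint objects is assumed from the Az.TrafficManager output types.

```azurepowershell
# List each endpoint with its current monitor status (Online, Degraded, ...).
$tmProfile = Get-AzTrafficManagerProfile -Name 'myTmProfile' -ResourceGroupName 'myResourceGroup'
$tmProfile.Endpoints | Select-Object Name, EndpointMonitorStatus
```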
->[!NOTE]
+> [!NOTE]
> A Stopped Endpoint monitor status can happen on App Service if your web application is not running in the Standard tier or above. For more information, see [Traffic Manager integration with App Service](../app-service/web-sites-traffic-manager.md). ### Profile monitor status
An endpoint is unhealthy when any of the following events occur:
- Timeout. - Any other connection issue resulting in the endpoint not being reachable.
-For more information about troubleshooting failed checks, see [Troubleshooting Degraded status on Azure Traffic Manager](traffic-manager-troubleshooting-degraded.md).
+For more information about troubleshooting failed checks, see [Troubleshooting degraded status on Azure Traffic Manager](traffic-manager-troubleshooting-degraded.md).
The timeline in the following figure is a detailed description of the monitoring process for a Traffic Manager endpoint that has the following settings:
The timeline in the following figure is a detailed description of the monitoring
* Timeout value is 10 seconds. * DNS TTL is 30 seconds.
-![Traffic Manager endpoint failover and failback sequence](./media/traffic-manager-monitoring/timeline.png)
- **Figure: Traffic manager endpoint failover and recovery sequence** 1. **GET**. For each endpoint, the Traffic Manager monitoring system does a GET request on the path specified in the monitoring settings.
When an endpoint has a Degraded status, it's no longer returned in response to D
* **Priority**. Endpoints form a prioritized list. The first available endpoint on the list is always returned. If an endpoint status is Degraded, then the next available endpoint is returned. * **Weighted**. Any available endpoints get chosen at random based on their assigned weights and the weights of the other available endpoints. * **Performance**. The endpoint closest to the end user is returned. If that endpoint is unavailable, Traffic Manager moves traffic to the endpoints in the next closest Azure region. You can configure alternative failover plans for performance traffic-routing by using [nested Traffic Manager profiles](traffic-manager-nested-profiles.md#example-4-controlling-performance-traffic-routing-between-multiple-endpoints-in-the-same-region).
-* **Geographic**. The endpoint mapped to serve the geographic location based on the query request IP's is returned. If that endpoint is unavailable, another endpoint won't be selected to fail over to, since a geographic location can be mapped only to one endpoint in a profile. (More details are in the [FAQ](traffic-manager-FAQs.md#traffic-manager-geographic-traffic-routing-method)). As a best practice, when using geographic routing, we recommend customers to use nested Traffic Manager profiles with more than one endpoint as the endpoints of the profile.
+* **Geographic**. The endpoint mapped to serve the geographic location based on the query request IPs is returned. If that endpoint is unavailable, another endpoint won't be selected to fail over to, since a geographic location can be mapped only to one endpoint in a profile. (More details are in the [FAQ](traffic-manager-FAQs.md#traffic-manager-geographic-traffic-routing-method)). As a best practice, when using geographic routing, we recommend that customers use nested Traffic Manager profiles with more than one endpoint as the endpoints of the profile.
* **MultiValue**. Multiple endpoints mapped to IPv4/IPv6 addresses are returned. When a query is received for this profile, healthy endpoints are returned based on the **Maximum record count in response** value that you've specified. The default number of responses is two endpoints. * **Subnet**. The endpoint mapped to a set of IP address ranges is returned. When a request is received from that IP address, the endpoint returned is the one mapped for that IP address.
For more information, see [Traffic Manager traffic-routing methods](traffic-mana
> > The consequence of this behavior is that if Traffic Manager health checks are not configured correctly, it might appear from the traffic routing as though Traffic Manager *is* working properly. However, in this case, endpoint failover cannot happen, which affects overall application availability. It is important to check that the profile shows an Online status, not a Degraded status. An Online status indicates that the Traffic Manager health checks are working as expected.
-For more information about troubleshooting failed health checks, see [Troubleshooting Degraded status on Azure Traffic Manager](traffic-manager-troubleshooting-degraded.md).
+For more information about troubleshooting failed health checks, see [Troubleshooting degraded status on Azure Traffic Manager](traffic-manager-troubleshooting-degraded.md).
-## FAQs
+## FAQ
* [Is Traffic Manager resilient to Azure region failures?](./traffic-manager-faqs.md#is-traffic-manager-resilient-to-azure-region-failures)
For more information about troubleshooting failed health checks, see [Troublesho
## Next steps
-Learn [how Traffic Manager works](traffic-manager-how-it-works.md)
-
-Learn more about the [traffic-routing methods](traffic-manager-routing-methods.md) supported by Traffic Manager
-
-Learn how to [create a Traffic Manager profile](traffic-manager-manage-profiles.md)
-
-[Troubleshoot Degraded status](traffic-manager-troubleshooting-degraded.md) on a Traffic Manager endpoint
+- Learn [how Traffic Manager works](traffic-manager-how-it-works.md)
+- Learn more about the [traffic-routing methods](traffic-manager-routing-methods.md) supported by Traffic Manager
+- Learn how to [create a Traffic Manager profile](traffic-manager-manage-profiles.md)
+- [Troubleshoot Degraded status](traffic-manager-troubleshooting-degraded.md) on a Traffic Manager endpoint
traffic-manager Traffic Manager Nested Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-nested-profiles.md
Title: Nested Traffic Manager Profiles in Azure
description: This article explains the 'Nested Profiles' feature of Azure Traffic Manager -+ Last updated 11/10/2022 + # Nested Traffic Manager profiles
traffic-manager Traffic Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-overview.md
Title: Azure Traffic Manager | Microsoft Docs
+ Title: Azure Traffic Manager
description: This article provides an overview of Azure Traffic Manager. Find out if it's the right choice for load-balancing user traffic for your application. Last updated 11/30/2022 + #Customer intent: As an IT admin, I want to learn about Traffic Manager and what I can use it for.
traffic-manager Traffic Manager Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-performance-considerations.md
Title: Performance considerations for Azure Traffic Manager | Microsoft Docs
+ Title: Performance considerations for Azure Traffic Manager
description: Understand performance on Traffic Manager and how to test performance of your website when using Traffic Manager -+ Last updated 01/27/2023 + # Performance considerations for Traffic Manager
traffic-manager Traffic Manager Point Internet Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-point-internet-domain.md
description: This article will help you point your company domain name to a Traf
-+ Last updated 10/11/2016 + # Point a company Internet domain to an Azure Traffic Manager domain
traffic-manager Traffic Manager Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-powershell-arm.md
Title: Using PowerShell to manage Traffic Manager in Azure description: With this learning path, get started using Azure PowerShell for Traffic Manager. Last updated 03/16/2017 -+ # Using PowerShell to manage Traffic Manager
traffic-manager Traffic Manager Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-routing-methods.md
description: This article helps you understand the different traffic routing met
-+ Last updated 11/30/2022 + # Traffic Manager routing methods
traffic-manager Traffic Manager Rum Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-rum-overview.md
documentationcenter: traffic-manager -+ Last updated 03/16/2018 -+ # Traffic Manager Real User Measurements overview
traffic-manager Traffic Manager Subnet Override Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-subnet-override-cli.md
Title: Azure Traffic Manager subnet override using Azure CLI | Microsoft Docs
+ Title: Azure Traffic Manager subnet override using Azure CLI
description: This article will help you understand how Traffic Manager subnet override can be used to override the routing method of a Traffic Manager profile to direct traffic to an endpoint based upon the end-user IP address via predefined IP range to endpoint mappings. - Last updated 09/18/2019 + # Traffic Manager subnet override using Azure CLI
az network traffic-manager endpoint update \
Learn more about Traffic Manager [traffic routing methods](traffic-manager-routing-methods.md).
-Learn about the [Subnet traffic-routing method](./traffic-manager-routing-methods.md#subnet-traffic-routing-method)
+Learn about the [Subnet traffic-routing method](./traffic-manager-routing-methods.md#subnet-traffic-routing-method)
traffic-manager Traffic Manager Subnet Override Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-subnet-override-powershell.md
Title: Azure Traffic Manager subnet override using Azure PowerShell | Microsoft Docs
+ Title: Azure Traffic Manager subnet override using Azure PowerShell
description: This article will help you understand how Traffic Manager subnet override is used to override the routing method of a Traffic Manager profile to direct traffic to an endpoint based upon the end-user IP address via predefined IP range to endpoint mappings using Azure PowerShell. - - Last updated 09/18/2019 + # Traffic Manager subnet override using Azure PowerShell
traffic-manager Traffic Manager Testing Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-testing-settings.md
Last updated 03/16/2017 + # Verify Traffic Manager settings
traffic-manager Traffic Manager Traffic View Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-traffic-view-overview.md
documentationcenter: traffic-manager -+ Last updated 03/22/2023 -+ # Traffic Manager Traffic View
traffic-manager Traffic Manager Troubleshooting Degraded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-troubleshooting-degraded.md
Title: Troubleshooting degraded status on Azure Traffic Manager description: How to troubleshoot a Traffic Manager profile when it shows a degraded status. Last updated 05/03/2017
traffic-manager Tutorial Traffic Manager Improve Website Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/tutorial-traffic-manager-improve-website-response.md
Title: Tutorial - Improve website response with Azure Traffic Manager
+ Title: 'Tutorial: Improve website response with Azure Traffic Manager'
description: This tutorial article describes how to create a Traffic Manager profile to build a highly responsive website.
-# Customer intent: As an IT Admin, I want to route traffic so I can improve website response by choosing the endpoint with lowest latency.
Last updated 03/06/2023 +
+# Customer intent: As an IT Admin, I want to route traffic so I can improve website response by choosing the endpoint with lowest latency.
# Tutorial: Improve website response using Traffic Manager
In this section, you connect to the two VMs *myIISVMEastUS* and *myIISVMWestEuro
:::image type="content" source="./media/tutorial-traffic-manager-improve-website-response/connect-to-bastion-password.png" alt-text="Screenshot of connecting to virtual machine using bastion.":::
-To learn more about Azure Bastion, see [What is Azure Bastion?](/articles/bastion/bastion-overview.md)
+To learn more about Azure Bastion, see [What is Azure Bastion?](../bastion/bastion-overview.md)
#### Install IIS and customize the default web page In this section, you install the IIS server on the two VMs *myIISVMEastUS* and *myIISVMWestEurope*, and then update the default website page. The customized website page shows the name of the VM that you're connecting to when you visit the website from a web browser.
traffic-manager Tutorial Traffic Manager Subnet Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/tutorial-traffic-manager-subnet-routing.md
Title: Tutorial - Configure subnet traffic routing with Azure Traffic Manager
+ Title: 'Tutorial: Configure subnet traffic routing with Azure Traffic Manager'
description: This tutorial explains how to configure Traffic Manager to route traffic from user subnets to specific endpoints. Last updated 03/08/2021 + # Tutorial: Direct traffic to specific endpoints based on user subnet using Traffic Manager
traffic-manager Tutorial Traffic Manager Weighted Endpoint Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/tutorial-traffic-manager-weighted-endpoint-routing.md
Title: Tutorial:Route traffic to weighted endpoints - Azure Traffic Manager
+ Title: 'Tutorial: Route traffic to weighted endpoints - Azure Traffic Manager'
description: This tutorial article describes how to route traffic to weighted endpoints by using Traffic Manager.
-# Customer intent: As an IT Admin, I want to distribute traffic based on the weight assigned to a website endpoint so that I can control the user traffic to a given website.
Last updated 10/19/2020 +
+# Customer intent: As an IT Admin, I want to distribute traffic based on the weight assigned to a website endpoint so that I can control the user traffic to a given website.
# Tutorial: Control traffic routing with weighted endpoints by using Traffic Manager
update-center Manage Arc Enabled Servers Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-arc-enabled-servers-programmatically.md
description: This article tells how to use Update management center (preview) us
Previously updated : 02/20/2023 Last updated : 03/31/2023
Support for Azure REST API to manage Azure Arc-enabled servers is available thro
To trigger an update assessment on your Azure Arc-enabled server, specify the following POST request: ```rest
-POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/provider/Microsoft.HybridCompute/machines/machineName/assessPatches?api-version=2020-08-15-preview`
+POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.HybridCompute/machines/machineName/assessPatches?api-version=2020-08-15-preview`
{ } ```
POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/provider/
To specify the POST request, you can use the Azure CLI [az rest](/cli/azure/reference-index#az_rest) command. ```azurecli
-az rest --method post --url https://management.azure.com/subscriptions/subscriptionId/resourceGroups/resourceGroupName/provider/Microsoft.HybridCompute/machines/machineName/assessPatches?api-version=2020-08-15-preview --body @body.json
+az rest --method post --url https://management.azure.com/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.HybridCompute/machines/machineName/assessPatches?api-version=2020-08-15-preview --body @body.json
``` The format of the request body for version 2020-08-15 is as follows:
The format of the request body for version 2020-08-15 is as follows:
# [Azure PowerShell](#tab/powershell)
-To specify the POST request, you can use the Azure PowerShell [Invoke-AzRestMethod](/powershell/module/az.accounts/invoke-azrestmethod) cmdlet.
+To specify the POST request, you can use the Azure PowerShell [Invoke-AzRestMethod](/powershell/module/az.accounts/invoke-azrestmethod) cmdlet with the `-Path` parameter.
```azurepowershell
-Invoke-AzRestMethod
- -ResourceGroupName resourceGroupName
- -Name "machineName"
- -ResourceProviderName "Microsoft.HybridCompute"
- -ResourceType "machines"
- -ApiVersion 2020-08-15-preview
- -Payload '{
- }'
- -Method POST
+Invoke-AzRestMethod -Path `
+    "/subscriptions/subscriptionId/resourceGroups/resourcegroupname/providers/Microsoft.HybridCompute/machines/machinename/assessPatches?api-version=2020-08-15-preview" `
+    -Payload '{}' -Method POST
```- ## Update deployment
Invoke-AzRestMethod
To trigger an update deployment to your Azure Arc-enabled server, specify the following POST request: ```rest
-POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/provider/Microsoft.HybridCompute/machines/machineName/installPatches?api-version=2020-08-15-preview`
+POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.HybridCompute/machines/machineName/installPatches?api-version=2020-08-15-preview`
``` #### Request body
To specify the POST request, you can use the Azure PowerShell [Invoke-AzRestMeth
```azurepowershell Invoke-AzRestMethod
- -ResourceGroupName resourceGroupName
- -Name "machineName"
- -ResourceProviderName "Microsoft.HybridCompute"
- -ResourceType "machines"
- -ApiVersion 2020-08-15-preview
- -Payload '{
+-Path "/subscriptions/subscriptionId/resourceGroups/resourcegroupname/providers/Microsoft.HybridCompute/machines/machinename/installPatches?api-version=2020-08-15-preview"
+-Payload '{
"maximumDuration": "PT120M", "rebootSetting": "IfRequired", "windowsParameters": {
update-center Manage Vms Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-vms-programmatically.md
description: This article tells how to use update management center (preview) in
Previously updated : 04/21/2022 Last updated : 03/31/2023
POST on `subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers
# [Azure CLI](#tab/cli)
-To specify the POST request, you can use the Azure CLI [az rest](/cli/azure/reference-index#az_rest) command.
+To trigger the assessment, you can use the Azure CLI [az vm assess-patches](https://learn.microsoft.com/cli/azure/vm?view=azure-cli-latest#az-vm-assess-patches) command.
```azurecli
-az rest --method post --url https://management.azure.com/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.Compute/virtualMachines/virtualMachineName/assessPatches?api-version=2020-12-01
+az vm assess-patches -g MyResourceGroup -n MyVm
``` + # [Azure PowerShell](#tab/powershell)
-To specify the POST request, you can use the Azure PowerShell [Invoke-AzRestMethod](/powershell/module/az.accounts/invoke-azrestmethod) cmdlet.
+To trigger the assessment, you can use the Azure PowerShell [Invoke-AzVMPatchAssessment](https://learn.microsoft.com/powershell/module/az.compute/invoke-azvmpatchassessment?view=azps-9.5.0) cmdlet.
```azurepowershell
-Invoke-AzRestMethod
- -ResourceGroupName resourceGroupName
- -Name "virtualMachineName"
- -ResourceProviderName "Microsoft.Compute"
- -ResourceType "virtualMachines"
- -ApiVersion xx
- -Payload '{
- }'
- -Method POST
+Invoke-AzVMPatchAssessment -ResourceGroupName "myRG" -VMName "myVM"
```
POST on 'subscriptions/{subscriptionId}/resourceGroups/acmedemo/providers/Micros
# [Azure CLI](#tab/azurecli)
-To specify the POST request, you can use the Azure CLI [az rest](/cli/azure/reference-index#az_rest) command.
+To trigger the update deployment, you can use the Azure CLI [az vm install-patches](https://learn.microsoft.com/cli/azure/vm?view=azure-cli-latest#az-vm-install-patches) command.
```azurecli
-az rest --method post --url https://management.azure.com/subscriptions/subscriptionId/resourceGroups/resourceGroupName/providers/Microsoft.Compute/virtualMachines/virtualMachineName/installPatches?api-version=2020-12-01 @body.json
+az vm install-patches -g MyResourceGroup -n MyVm --maximum-duration PT4H --reboot-setting IfRequired --classifications-to-include-linux Critical
``` The format of the request body for version 2020-12-01 is as follows: ```json {
- "maximumDuration": "PT120M",
- "rebootSetting": "IfRequired",
+ "maximumDuration"
+ "rebootSetting"
"windowsParameters": { "classificationsToInclude": [
- "Security",
- "UpdateRollup",
- "FeaturePack",
- "ServicePack"
], "kbNumbersToInclude": [
- "11111111111",
- "22222222222222"
], "kbNumbersToExclude": [
- "333333333333",
- "55555555555"
] } }
The format of the request body for version 2020-12-01 is as follows:
# [Azure PowerShell](#tab/azurepowershell)
-To specify the POST request, you can use the Azure PowerShell [Invoke-AzRestMethod](/powershell/module/az.accounts/invoke-azrestmethod) cmdlet.
+To trigger the update deployment, you can use the Azure PowerShell [Invoke-AzVmInstallPatch](/powershell/module/az.compute/invoke-azvminstallpatch) cmdlet.
```azurepowershell
-Invoke-AzRestMethod
- -ResourceGroupName resourceGroupName
- -Name "machineName"
- -ResourceProviderName "Microsoft.Compute"
- -ResourceType "virtualMachines"
- -ApiVersion 2020-12-01-preview
- -Payload '{
- "maximumDuration": "PT120M",
- "rebootSetting": "IfRequired",
- "windowsParameters": {
- "classificationsToInclude": [
- "Security",
- "UpdateRollup",
- "FeaturePack",
- "ServicePack"
- ],
- "kbNumbersToInclude": [
- "11111111111",
- "22222222222222"
- ],
- "kbNumbersToExclude": [
- "333333333333",
- "55555555555"
- ]
- }
- }'
- -Method POST
+Invoke-AzVmInstallPatch -ResourceGroupName 'MyRG' -VmName 'MyVM' -Windows -RebootSetting 'never' -MaximumDuration PT2H -ClassificationToIncludeForWindows Critical
```
virtual-desktop Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/automatic-migration.md
Title: Migrate automatically from Azure Virtual Desktop (classic) - Azure
description: How to migrate automatically from Azure Virtual Desktop (classic) to Azure Virtual Desktop by using the migration module. + Last updated 01/31/2022
virtual-desktop Create Application Group Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-application-group-workspace.md
Title: Create an application group, a workspace, and assign users - Azure Virtual Desktop description: Learn how to create an application group and a workspace, and assign users in Azure Virtual Desktop by using the Azure portal, Azure CLI, or Azure PowerShell. + Last updated 03/22/2023
Now that you've created an application group and a workspace, added the applicat
- [Add session hosts to the host pool](add-session-hosts-host-pool.md), if you haven't done so already. -- [Add applications to an application group](manage-app-groups.md), if you created a RemoteApp application group.
+- [Add applications to an application group](manage-app-groups.md), if you created a RemoteApp application group.
virtual-desktop Create Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pool.md
Title: Create a host pool - Azure Virtual Desktop description: Learn how to create a host pool in Azure Virtual Desktop by using the Azure portal, Azure CLI, or Azure PowerShell. -+ Last updated 02/28/2023
Now that you've created a host pool, you'll still need to do the following tasks
- [Add session hosts to a host pool](add-session-hosts-host-pool.md). -- [Enable diagnostics settings](diagnostics-log-analytics.md).
+- [Enable diagnostics settings](diagnostics-log-analytics.md).
virtual-desktop Troubleshoot Set Up Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-issues.md
Title: Azure Virtual Desktop environment host pool creation - Azure
description: How to troubleshoot and resolve tenant and host pool issues during setup of a Azure Virtual Desktop environment. -+ Last updated 02/17/2021
virtual-desktop Create Host Pools Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-arm-template.md
Title: Azure Virtual Desktop (classic) host pool Azure Resource Manager - Azure
description: How to create a host pool in Azure Virtual Desktop (classic) with an Azure Resource Manager template. + Last updated 03/30/2020
The user's UPN should match the user's identity in Azure Active Directory (for e
After you've completed these steps, users added to the desktop application group can sign in to Azure Virtual Desktop with supported Remote Desktop clients and see a resource for a session desktop. >[!IMPORTANT]
->To help secure your Azure Virtual Desktop environment in Azure, we recommend you don't open inbound port 3389 on your VMs. Azure Virtual Desktop doesn't require an open inbound port 3389 for users to access the host pool's VMs. If you must open port 3389 for troubleshooting purposes, we recommend you use [just-in-time VM access](../../security-center/security-center-just-in-time.md).
+>To help secure your Azure Virtual Desktop environment in Azure, we recommend you don't open inbound port 3389 on your VMs. Azure Virtual Desktop doesn't require an open inbound port 3389 for users to access the host pool's VMs. If you must open port 3389 for troubleshooting purposes, we recommend you use [just-in-time VM access](../../security-center/security-center-just-in-time.md).
virtual-desktop Expand Existing Host Pool 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/expand-existing-host-pool-2019.md
Title: Expand existing Azure Virtual Desktop (classic) host pool with new sessio
description: How to expand an existing host pool with new session hosts in Azure Virtual Desktop (classic). + Last updated 03/31/2021
virtual-desktop Manage Resources Using Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui.md
Title: Deploy management tool with an Azure Resource Manager template - Azure
description: How to install a user interface tool with an Azure Resource Manager template to manage Azure Virtual Desktop (classic) resources. + Last updated 03/30/2020
virtual-desktop Troubleshoot Set Up Issues 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-issues-2019.md
Title: Azure Virtual Desktop (classic) tenant host pool creation - Azure
description: How to troubleshoot and resolve tenant and host pool issues during setup of a Azure Virtual Desktop (classic) tenant environment. + Last updated 03/30/2020
If you're running the GitHub Azure Resource Manager template, provide values for
- To learn more about the service, see [Azure Virtual Desktop environment](environment-setup-2019.md). - To go through a troubleshooting tutorial, see [Tutorial: Troubleshoot Resource Manager template deployments](../../azure-resource-manager/templates/template-tutorial-troubleshoot.md). - To learn about auditing actions, see [Audit operations with Resource Manager](../../azure-monitor/essentials/activity-log.md).-- To learn about actions to determine the errors during deployment, see [View deployment operations](../../azure-resource-manager/templates/deployment-history.md).
+- To learn about actions to determine the errors during deployment, see [View deployment operations](../../azure-resource-manager/templates/deployment-history.md).
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 03/10/2023 Last updated : 03/31/2023
Make sure to check back here often to keep up with new updates.
## Latest agent versions
-New versions of the Azure Virtual Desktop Agent are installed automatically. When new versions are released, they are rolled out progressively to all session hosts. This process is called *flighting* and it enables Microsoft to monitor the rollout. The following table lists the version that is in-flight and the version that is generally available.
+New versions of the Azure Virtual Desktop Agent are installed automatically. When new versions are released, they are rolled out progressively to all session hosts. This process is called *flighting*, and it enables Microsoft to monitor the rollout in [validation environments](create-validation-host-pool.md) first. A rollout may take several weeks before the agent is available in all environments.
-| Release | Latest version |
-|||
-| Generally available | 1.0.6129.9100 |
-| In-flight | N/A |
+## Version 1.0.6298.2100
+
+This update was released at the end of March 2023 and includes the following changes:
+
+- Health check reliability improved.
+- Reliability issues in agent upgrade fixed.
+- A VM is now marked unhealthy when the health check detects that a required URL is blocked.
## Version 1.0.6129.9100
virtual-machine-scale-sets Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/azure-hybrid-benefit-linux.md
Last updated 11/22/2022 -+ # Explore Azure Hybrid Benefit for Linux Virtual Machine Scale Sets
virtual-machine-scale-sets Disk Encryption Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md
Last updated 11/22/2022 --+ # Encrypt Virtual Machine Scale Sets with Azure Resource Manager
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-powershell.md
Last updated 11/22/2022 -+ # Create virtual machines in a scale set using PowerShell
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-rest-api.md
Last updated 11/22/2022 -+ # Create virtual machines in a scale set using an ARM template
virtual-machine-scale-sets Quick Create Bicep Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-bicep-windows.md
Last updated 11/22/2022 -+ # Quickstart: Create a Windows Virtual Machine Scale Set with Bicep
virtual-machine-scale-sets Quick Create Template Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-linux.md
Last updated 11/22/2022 -+ # Quickstart: Create a Linux Virtual Machine Scale Set with an ARM template
virtual-machine-scale-sets Quick Create Template Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-windows.md
Last updated 11/22/2022 -+ # Quickstart: Create a Windows Virtual Machine Scale Set with an ARM template
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
Last updated 11/22/2022 --+ # Create a Virtual Machine Scale Set that uses Availability Zones
virtual-machines Attach Os Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/attach-os-disk.md
+
+ Title: Attach an existing OS disk to a VM
+description: Create a new Windows VM by attaching a specialized OS disk.
++++ Last updated : 03/30/2023++++
+# Create a VM from a specialized disk by using PowerShell
+
+**Applies to:** :heavy_check_mark: Windows VMs
+
+Create a new VM by attaching an existing OS disk. This option is useful if you have a VM that isn't working correctly. You can delete the VM and then reuse the disk to create a new VM.
+
+> [!IMPORTANT]
+>
+> You can also use the VHD as a source to create an Azure Compute Gallery image. For more information, see [Create an image definition and image version](image-version.md). Customers are encouraged to use Azure Compute Gallery because all new features, like ARM64, Trusted Launch, and Confidential VM, are only supported through Azure Compute Gallery. Creating an image instead of just attaching a disk means you can create multiple VMs from the same source disk.
+>
+> When you use a specialized disk to create a new VM, the new VM retains the computer name of the original VM. Other computer-specific information (like the CMID) is also kept and, in some cases, this duplicate information could cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
+++
+We recommend that you limit the number of concurrent deployments to 20 VMs from a single VHD or snapshot.
++
+### [Portal](#tab/portal)
++
+Create a snapshot and then create a disk from the snapshot. This strategy allows you to keep the original VHD as a fallback:
+
+1. Open the [Azure portal](https://portal.azure.com).
+2. In the search box, enter **disks** and then select **Disks** to display the list of available disks.
+3. Select the disk that you would like to use. The **Disk** page for that disk appears.
+4. From the menu at the top, select **Create snapshot**.
+5. Choose a **Resource group** for the snapshot. You can use either an existing resource group or create a new one.
+6. Enter a **Name** for the snapshot.
+7. For **Snapshot type**, choose **Full**.
+8. For **Storage type**, choose **Standard HDD**, **Premium SSD**, or **Zone-redundant** storage.
+9. When you're done, select **Review + create** to create the snapshot.
+10. After the snapshot has been created, select **Home** > **Create a resource**.
+11. In the search box, enter **managed disk** and then select **Managed Disks** from the list.
+12. On the **Managed Disks** page, select **Create**.
+13. Choose a **Resource group** for the disk. You can use either an existing resource group or create a new one. This selection will also be used as the resource group where you create the VM from the disk.
+14. For **Region**, you must select the same region where the snapshot is located.
+15. Enter a **Name** for the disk.
+16. In **Source type**, ensure **Snapshot** is selected.
+17. In the **Source snapshot** drop-down, select the snapshot you want to use.
+18. For **Size**, you can change the storage type and size as needed.
+19. Make any other adjustments as needed and then select **Review + create** to create the disk. Once validation passes, select **Create**.
++
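+The same snapshot-then-disk flow can also be scripted. A compact sketch, assuming the Az.Compute module and the placeholder resource names shown:
+
+```powershell
+# Snapshot the source OS disk, then create a new managed disk from the snapshot.
+$srcDisk  = Get-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'myOsDisk'
+$snapCfg  = New-AzSnapshotConfig -SourceUri $srcDisk.Id -Location $srcDisk.Location -CreateOption Copy
+$snapshot = New-AzSnapshot -ResourceGroupName 'myResourceGroup' -SnapshotName 'myOsDiskSnap' -Snapshot $snapCfg
+$diskCfg  = New-AzDiskConfig -Location $srcDisk.Location -CreateOption Copy -SourceResourceId $snapshot.Id
+New-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'myRestoredOsDisk' -Disk $diskCfg
+```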
+After you have the disk that you want to use, you can create the VM in the portal:
+
+1. In the search box, enter **disks** and then select **Disks** to display the list of available disks.
+2. Select the disk that you would like to use. The **Disk** page for that disk opens.
+3. In the **Essentials** section, ensure that **Disk state** is listed as **Unattached**. If it isn't, you might need to either detach the disk from the VM or delete the VM to free up the disk.
+4. In the menu at the top of the page, select **Create VM**.
+5. On the **Basics** page for the new VM, enter a **Virtual machine name** and either select an existing **Resource group** or create a new one.
+6. For **Size**, select **Change size** to access the **Size** page.
+7. The disk name should be pre-filled in the **Image** section.
+8. On the **Disks** page, you may notice that the **OS Disk Type** can't be changed. This preselected value is configured at the point of snapshot or VHD creation and carries over to the new VM. If you need to modify the disk type, take a new snapshot from an existing VM or disk.
+9. On the **Networking** page, you can either let the portal create all new resources or you can select an existing **Virtual network** and **Network security group**. The portal always creates a new network interface and public IP address for the new VM.
+10. On the **Management** page, make any changes to the monitoring options.
+11. On the **Guest config** page, add any extensions as needed.
+12. When you're done, select **Review + create**.
+13. If the VM configuration passes validation, select **Create** to start the deployment.
++++
+### [PowerShell](#tab/powershell)
++
+If you had a VM that you deleted and you want to reuse the OS disk to create a new VM, use [Get-AzDisk](/powershell/module/az.compute/get-azdisk).
+
+```powershell
+$resourceGroupName = 'myResourceGroup'
+$osDiskName = 'myOsDisk'
+$osDisk = Get-AzDisk `
+    -ResourceGroupName $resourceGroupName `
+    -DiskName $osDiskName
+```
+You can now attach this disk as the OS disk to a new VM.
+
+Create the [virtual network](../virtual-network/virtual-networks-overview.md) and subnet for the VM.
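+The following steps also reference `$destinationResourceGroup` and `$location`, which aren't set earlier in this article; a minimal sketch of the assumed setup (the group name and region are placeholders):
+
+```powershell
+$destinationResourceGroup = 'myDestinationResourceGroup'
+$location = 'West US'
+# Create the destination resource group if it doesn't exist yet.
+New-AzResourceGroup -Name $destinationResourceGroup -Location $location -Force
+```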
+
+1. Create the subnet. This example creates a subnet named *mySubNet*, in the resource group *myDestinationResourceGroup*, and sets the subnet address prefix to *10.0.0.0/24*.
+
+ ```powershell
+ $subnetName = 'mySubNet'
+ $singleSubnet = New-AzVirtualNetworkSubnetConfig `
+ -Name $subnetName `
+ -AddressPrefix 10.0.0.0/24
+ ```
+
+2. Create the virtual network. This example sets the virtual network name to *myVnetName*, the location to *West US*, and the address prefix for the virtual network to *10.0.0.0/16*.
+
+ ```powershell
+ $vnetName = "myVnetName"
+ $vnet = New-AzVirtualNetwork `
+ -Name $vnetName -ResourceGroupName $destinationResourceGroup `
+ -Location $location `
+ -AddressPrefix 10.0.0.0/16 `
+ -Subnet $singleSubnet
+ ```
+
+
+To be able to sign in to your VM with remote desktop protocol (RDP), you need a security rule that allows RDP access on port 3389. In our example, the VHD for the new VM was created from an existing Windows specialized VM, so you can use an account that existed on the source virtual machine for RDP. To be more secure, this example creates a rule that denies RDP traffic; you can change `-Access` to `Allow` if you want to allow RDP access.
+
+This example sets the network security group (NSG) name to *myNsg* and the RDP rule name to *myRdpRule*.
+
+```powershell
+$nsgName = "myNsg"
+
+$rdpRule = New-AzNetworkSecurityRuleConfig -Name myRdpRule -Description "Deny RDP" `
+ -Access Deny -Protocol Tcp -Direction Inbound -Priority 110 `
+ -SourceAddressPrefix Internet -SourcePortRange * `
+ -DestinationAddressPrefix * -DestinationPortRange 3389
+$nsg = New-AzNetworkSecurityGroup `
+ -ResourceGroupName $destinationResourceGroup `
+ -Location $location `
+ -Name $nsgName -SecurityRules $rdpRule
+
+```
+
+For more information about endpoints and NSG rules, see [Filter network traffic with a network security group](../virtual-network/tutorial-filter-network-traffic-powershell.md).
+
+To enable communication with the virtual machine in the virtual network, you'll need a [public IP address](../virtual-network/ip-services/public-ip-addresses.md) and a network interface.
+
+1. Create the public IP. In this example, the public IP address name is set to *myIP*.
+
+ ```powershell
+ $ipName = "myIP"
+ $pip = New-AzPublicIpAddress `
+ -Name $ipName -ResourceGroupName $destinationResourceGroup `
+ -Location $location `
+ -AllocationMethod Dynamic
+ ```
+
+2. Create the NIC. In this example, the NIC name is set to *myNicName*.
+
+ ```powershell
+ $nicName = "myNicName"
+ $nic = New-AzNetworkInterface -Name $nicName `
+ -ResourceGroupName $destinationResourceGroup `
+ -Location $location -SubnetId $vnet.Subnets[0].Id `
+ -PublicIpAddressId $pip.Id `
+ -NetworkSecurityGroupId $nsg.Id
+ ```
+
++
+Set the VM name and size. This example sets the VM name to *myVM* and the VM size to *Standard_A2*.
+
+```powershell
+$vmName = "myVM"
+$vmConfig = New-AzVMConfig -VMName $vmName -VMSize "Standard_A2"
+```
+
+Add the NIC.
+
+```powershell
+$vm = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
+```
+
+
+Add the OS disk to the configuration by using [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk). This example sets the size of the disk to *128 GB* and attaches the disk as a *Windows* OS disk.
+
+```powershell
+$vm = Set-AzVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -StorageAccountType Standard_LRS `
+ -DiskSizeInGB 128 -CreateOption Attach -Windows
+```
+
+Create the VM by using [New-AzVM](/powershell/module/az.compute/new-azvm) with the configuration that you just created.
+
+```powershell
+New-AzVM -ResourceGroupName $destinationResourceGroup -Location $location -VM $vm
+```
+
+If this command is successful, you'll see output like this:
+
+```powershell
+RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+--------- ------------------- ---------- ------------
+                         True         OK OK
+
+```
+
+You should see the newly created VM either in the [Azure portal](https://portal.azure.com) under **Browse** > **Virtual machines**, or by using the following PowerShell commands.
+
+```powershell
+$vmList = Get-AzVM -ResourceGroupName $destinationResourceGroup
+$vmList.Name
+```
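+
+To connect over RDP (assuming you changed the NSG rule above to `Allow`), you can look up the public IP address that was created earlier. A minimal sketch that reuses the variables from this article:
+
+```powershell
+# Retrieve the public IP address assigned to the new VM's NIC
+(Get-AzPublicIpAddress -Name $ipName -ResourceGroupName $destinationResourceGroup).IpAddress
+```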
+
+**Next steps**
+Learn more about [Azure Compute Gallery](azure-compute-gallery.md).
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
Previously updated : 02/14/2023- Last updated : 03/23/2023+
For more information, see [Share images using a community gallery](./share-galle
> [!IMPORTANT] > Azure Compute Gallery ΓÇô community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
+>
+> To publish a community gallery, you'll need to [set up preview features in your Azure subscription](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal). Creating VMs from community gallery images is open to all Azure users.
> > During the preview, the gallery must be created as a community gallery (for CLI, this means using the `--permissions community` parameter); you currently can't migrate a regular gallery to a community gallery. >
virtual-machines Capture Image Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capture-image-resource.md
Last updated 03/15/2023 --+ # Create a legacy managed image of a generalized VM in Azure
virtual-machines Classic Vm Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/classic-vm-deprecation.md
+ Last updated 02/10/2020
virtual-machines Create Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/create-gallery.md
Previously updated : 02/14/2023 Last updated : 03/23/2023 -+ ms.devlang: azurecli
During the preview, make sure that you create your gallery, image definitions, a
> [!IMPORTANT] > Azure Compute Gallery ΓÇô community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). >
-> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). Creating VMs from the community gallery is open to all Azure users.
+> To publish a community gallery, you'll need to [set up preview features in your Azure subscription](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal). Creating VMs from community gallery images is open to all Azure users.
When creating an image to share with the community, you need to provide contact information. This information is shown **publicly**, so be careful when providing: - Community gallery prefix
virtual-machines Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption.md
Title: Server-side encryption of Azure managed disks description: Azure Storage protects your data by encrypting it at rest before persisting it to Storage clusters. You can use customer-managed keys to manage encryption with your own keys, or you can rely on Microsoft-managed keys for the encryption of your managed disks. Previously updated : 02/06/2023 Last updated : 03/23/2023
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
Last updated 03/22/2023 -+ # Using Azure ultra disks
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
Last updated 02/22/2023 -+ ms.devlang: azurecli
virtual-machines Agent Dependency Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-dependency-linux.md
description: Deploy the Azure Monitor Dependency agent on Linux virtual machine
+ Last updated 06/01/2021- # Azure Monitor Dependency virtual machine extension for Linux
virtual-machines Chef https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/chef.md
description: Deploy the Chef Client to a virtual machine using the Chef VM Exten
+
virtual-machines Custom Script Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-linux.md
-+ Last updated 04/25/2018- # Use the Azure Custom Script Extension Version 2 with Linux virtual machines
virtual-machines Diagnostics Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-template.md
description: Use an Azure Resource Manager template to create a new Windows virt
+ Last updated 05/31/2017- # Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates The Azure Diagnostics Extension provides the monitoring and diagnostics capabilities on a Windows-based Azure virtual machine. You can enable these capabilities on the virtual machine by including the extension as part of the Azure Resource Manager template. See [Authoring Azure Resource Manager Templates with VM Extensions](../windows/template-description.md#extensions) for more information on including any extension as part of a virtual machine template. This article describes how you can add the Azure Diagnostics extension to a windows virtual machine template.
Each WADMetrics table contains the following columns:
## Next Steps * For a complete sample template of a Windows virtual machine with diagnostics extension, see [vm-monitoring-diagnostics-extension](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-monitoring-diagnostics-extension) * Deploy the Azure Resource Manager template using [Azure PowerShell](../windows/ps-template.md) or [Azure Command Line](../linux/create-ssh-secured-vm-from-template.md)
-* Learn more about [authoring Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md)
+* Learn more about [authoring Azure Resource Manager templates](../../azure-resource-manager/templates/syntax.md)
virtual-machines Dsc Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-template.md
description: Learn about the Resource Manager template definition for the Desire
tags: azure-resource-manager+ keywords: 'dsc' ms.assetid: b5402e5a-1768-4075-8c19-b7f7402687af
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
Last updated 03/27/2023-+ # Network Watcher Agent virtual machine extension for Linux
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
description: Deploy the Log Analytics agent on Linux virtual machine using a vir
+ Last updated 06/15/2022- # Log Analytics virtual machine extension for Linux
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/overview.md
Title: Azure virtual machine extensions and features
-description: Learn more about Azure VM extensions
-
+description: Learn more about Azure VM extensions that provide post-deployment configuration and automation on Azure VMs.
+ Previously updated : 03/06/2023 Last updated : 03/30/2023 # Azure virtual machine extensions and features
-Extensions are small applications that provide post-deployment configuration and automation on Azure VMs. The Azure platform hosts many extensions covering VM configuration, monitoring, security, and utility applications. Publishers take an application, wrap it into an extension, and simplify the installation. All you need to do is provide mandatory parameters.
-## How can I find what extensions are available?
-You can view available extensions by selecting a VM, the selecting **Extensions** in the left menu. To pull a full list of extensions, see [Discovering VM Extensions for Linux](features-linux.md) and [Discovering VM Extensions for Windows](features-windows.md).
+Extensions are small applications that provide post-deployment configuration and automation on Azure virtual machines (VMs). The Azure platform hosts many extensions covering VM configuration, monitoring, security, and utility applications. Publishers take an application, wrap it into an extension, and simplify the installation. All you need to do is provide mandatory parameters.
-## How can I install an extension?
-Azure VM extensions can be managed using the Azure CLI, PowerShell, Resource Manager templates, and the Azure portal. To try an extension, go to the Azure portal, select the Custom Script Extension, then pass in a command or script to run the extension.
+## View available extensions
-For more information, see [Windows Custom Script Extension](custom-script-windows.md) and [Linux Custom Script Extension](custom-script-linux.md).
+You can view available extensions for a VM in the Azure portal.
-## How do I manage extension application lifecycle?
-You do not need to connect to a VM directly to install or delete an extension. The Azure extension lifecycle is managed outside of the VM and integrated into the Azure platform.
+1. In the portal, go to the **Overview** page for a VM.
+1. Under **Settings**, select **Extensions + Applications**.
-## Anything else I should be thinking about for extensions?
-Some individual VM extension applications may have their own environmental prerequisites, such as access to an endpoint. Each extension has an article that explains any pre-requisites, including which operating systems are supported.
+The list of available extensions is displayed. To see the complete list of extensions, see [Discovering VM Extensions for Linux](features-linux.md) and [Discovering VM Extensions for Windows](features-windows.md).
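+
+You can also query the available extension publishers and types from the command line. A minimal PowerShell sketch, where the location, publisher, and type are example values you'd replace with your own:
+
+```powershell
+# List the extension types that a publisher offers in a region
+Get-AzVMExtensionImageType -Location 'WestUS' -PublisherName 'Microsoft.Compute'
+
+# List the versions available for a specific extension type
+Get-AzVMExtensionImage -Location 'WestUS' -PublisherName 'Microsoft.Compute' -Type 'CustomScriptExtension'
+```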
+
+## Install and use extensions
+
+Azure VM extensions can be managed by using the Azure CLI, PowerShell, Azure Resource Manager (ARM) templates, and the Azure portal.
+
+1. From the **Extensions + Applications** for the VM, on the **Extensions** tab, select **+ Add**.
+1. Locate the **Custom Script Extension** option, select it, and then select **Next**.
+
+You can then pass in a command or script to run the extension.
+
+For more information, see [Linux Custom Script Extension](custom-script-linux.md) and [Windows Custom Script Extension](custom-script-windows.md).
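+
+As a scripted alternative to the portal steps, the following is a minimal PowerShell sketch that runs the Custom Script Extension on a Windows VM. The resource group, VM name, location, and script URL are placeholders:
+
+```powershell
+# Run a script on a Windows VM through the Custom Script Extension
+Set-AzVMCustomScriptExtension -ResourceGroupName 'myResourceGroup' `
+    -VMName 'myVM' `
+    -Location 'WestUS' `
+    -FileUri 'https://example.com/scripts/configure.ps1' `
+    -Run 'configure.ps1' `
+    -Name 'CustomScript'
+```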
+
+### Check for prerequisites
+
+Some individual VM extension applications might have their own environmental prerequisites, such as access to an endpoint. Each extension has an article that explains any prerequisites, including which operating systems are supported.
+
+### Manage extension application lifecycle
+
+You don't need to connect to a VM directly to install or delete an extension. The Azure extension lifecycle is managed outside of the VM and integrated into the Azure platform.
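+
+For example, you can list and remove extensions entirely from outside the VM. A minimal sketch with placeholder names:
+
+```powershell
+# List the extensions currently installed on a VM
+Get-AzVMExtension -ResourceGroupName 'myResourceGroup' -VMName 'myVM'
+
+# Remove an extension without signing in to the VM itself
+Remove-AzVMExtension -ResourceGroupName 'myResourceGroup' -VMName 'myVM' -Name 'CustomScript' -Force
+```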
## Troubleshoot extensions
-If you are looking for general troubleshooting steps for Windows VM extensions, please refer to [Troubleshooting Azure Windows VM extension failures
+If you're looking for general troubleshooting steps for Windows VM extensions, refer to [Troubleshooting Azure Windows VM extension failures
](troubleshoot.md).
-Otherwise, specific troubleshooting information for each extension can be found in the **Troubleshoot and support** section in the overview for the extension. Here is a list of the troubleshooting information available:
+Otherwise, specific troubleshooting information for each extension can be found in the **Troubleshoot and support** section in the overview for the extension. Here's a list of the troubleshooting information available:
| Namespace | Troubleshooting | |--|--|
Otherwise, specific troubleshooting information for each extension can be found
| microsoft.recoveryservices.vmsnapshot | [Snapshot for Linux](vmsnapshot-linux.md#troubleshoot-and-support) | | microsoft.recoveryservices.vmsnapshot | [Snapshot for Windows](vmsnapshot-windows.md#troubleshoot-and-support) | - ## Next steps
-* For more information about how the Linux Agent and extensions work, see [Azure VM extensions and features for Linux](features-linux.md).
-* For more information about how the Windows Guest Agent and extensions work, see [Azure VM extensions and features for Windows](features-windows.md).
-* To install the Windows Guest Agent, see [Azure Windows Virtual Machine Agent Overview](agent-windows.md).
-* To install the Linux Agent, see [Azure Linux Virtual Machine Agent Overview](agent-linux.md).
+* For more information about how the Linux Agent and extensions work, see [Azure VM extensions and features for Linux](features-linux.md).
+* For more information about how the Windows Guest Agent and extensions work, see [Azure VM extensions and features for Windows](features-windows.md).
+* To install the Linux Agent, see [Azure Linux Virtual Machine Agent overview](agent-linux.md).
+* To install the Windows Guest Agent, see [Azure Windows Virtual Machine Agent overview](agent-windows.md).
virtual-machines Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/troubleshoot.md
description: Learn about troubleshooting Azure Windows VM extension failures
+
virtual-machines Vmsnapshot Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmsnapshot-linux.md
vm-linux+ Last updated 12/17/2018 - # VM Snapshot Linux extension for Azure Backup
virtual-machines Vmsnapshot Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/vmsnapshot-windows.md
+ Last updated 03/09/2023 - # VM Snapshot Windows extension for Azure Backup
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
Sysprep removes all your personal account and security information, and then pre
Make sure the server roles running on the machine are supported by Sysprep. For more information, see [Sysprep support for server roles](/windows-hardware/manufacture/desktop/sysprep-support-for-server-roles) and [Unsupported scenarios](/windows-hardware/manufacture/desktop/sysprep--system-preparation--overview#unsupported-scenarios). > [!IMPORTANT]
-> After you have run Sysprep on a VM, that VM is considered *generalized* and cannot be restarted. The process of generalizing a VM is not reversible. If you need to keep the original VM functioning, you should create a [copy of the VM](./windows/create-vm-specialized.md#option-3-copy-an-existing-azure-vm) and generalize its copy.
+> After you have run Sysprep on a VM, that VM is considered *generalized* and cannot be restarted. The process of generalizing a VM is not reversible. If you need to keep the original VM functioning, you should create a snapshot of the OS disk, create a VM from the snapshot, and then generalize that copy of the VM (a PowerShell sketch follows this note).
> > Sysprep requires the drives to be fully decrypted. If you have enabled encryption on your VM, disable encryption before you run Sysprep. >
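
A minimal PowerShell sketch of that fallback flow, assuming an existing VM named *myVM* in *myResourceGroup* (both names are placeholders):

```powershell
# Snapshot the OS disk before running Sysprep so the original state is preserved
$vm   = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'
$disk = Get-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName $vm.StorageProfile.OsDisk.Name

$snapshotConfig = New-AzSnapshotConfig -SourceUri $disk.Id -Location $vm.Location -CreateOption Copy
New-AzSnapshot -Snapshot $snapshotConfig -SnapshotName 'myVM-os-snapshot' -ResourceGroupName 'myResourceGroup'
```

You can then create a disk and VM from the snapshot, and run Sysprep on that copy instead of the original.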
virtual-machines Infrastructure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/infrastructure-automation.md
+ Last updated 02/25/2023
virtual-machines Add Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/add-disk.md
description: Learn to add a persistent data disk to your Linux VM with the Azure
+ Last updated 01/09/2023
virtual-machines Cli Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-manage.md
Title: Common Azure CLI commands
description: Learn some of the common Azure CLI commands to get you started managing your VMs in Azure Resource Manager mode + Last updated 05/12/2017 - # Common Azure CLI commands for managing Azure resources
virtual-machines Cloudinit Add User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-add-user.md
Last updated 03/29/2022 + # Use cloud-init to add a user to a Linux VM in Azure
virtual-machines Cloudinit Bash Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-bash-script.md
Last updated 03/29/2023 + # Use cloud-init to run a bash script in a Linux VM in Azure
virtual-machines Cloudinit Configure Swapfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-configure-swapfile.md
Last updated 03/29/2023 + # Use cloud-init to configure a swap partition on a Linux VM
virtual-machines Cloudinit Update Vm Hostname https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-update-vm-hostname.md
Last updated 03/29/2023 -+ # Use cloud-init to set hostname for a Linux VM in Azure
virtual-machines Cloudinit Update Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cloudinit-update-vm.md
Last updated 03/29/2023 + # Use cloud-init to update and install packages in a Linux VM in Azure
virtual-machines Create Ssh Secured Vm From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-ssh-secured-vm-from-template.md
Title: Create a Linux VM in Azure from a template
description: How to use the Azure CLI to create a Linux VM from a Resource Manager template + Last updated 03/22/2019
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/download-vhd.md
+ Last updated 01/03/2023
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
Last updated 03/15/2023
-+ # Create an Azure Image Builder Bicep or ARM JSON template
virtual-machines Image Builder Permissions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-permissions-cli.md
Last updated 04/02/2021
-+ # Configure Azure VM Image Builder permissions by using the Azure CLI
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
Last updated 02/10/2023
+ # Troubleshoot Azure VM Image Builder
Support Subtopic: Azure Image Builder
## Next steps
-For more information, see [VM Image Builder overview](../image-builder-overview.md).
+For more information, see [VM Image Builder overview](../image-builder-overview.md).
virtual-machines Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/proximity-placement-groups.md
+ Last updated 3/8/2021 - # Deploy VMs to proximity placement groups using Azure CLI
virtual-machines Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-bicep.md
Last updated 03/10/2022 -+ tags: azure-resource-manager, bicep
virtual-machines Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-template.md
Last updated 06/04/2020 -+ # Quickstart: Create an Ubuntu Linux virtual machine using an ARM template
virtual-machines Spot Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-cli.md
+ Last updated 03/22/2021
You can also create an Azure Spot Virtual Machine using [Azure PowerShell](../wi
Query current pricing information using the [Azure retail prices API](/rest/api/cost-management/retail-prices/azure-retail-prices) for information about Azure Spot Virtual Machine. The `meterName` and `skuName` will both contain `Spot`.
-If you encounter an error, see [Error codes](../error-codes-spot.md).
+If you encounter an error, see [Error codes](../error-codes-spot.md).
virtual-machines Maintenance Notifications Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-notifications-cli.md
description: View maintenance notifications for virtual machines running in Azur
+ Last updated 11/19/2019 #pmcontact: shants
virtual-machines Migration Classic Resource Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-cli.md
Last updated 01/23/2023 --+ # Migrate IaaS resources from classic to Azure Resource Manager by using Azure CLI
virtual-machines Migration Classic Resource Manager Community Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-community-tools.md
Last updated 01/25/2023 --+ # Community tools to migrate IaaS resources from classic to Azure Resource Manager
virtual-machines Migration Classic Resource Manager Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-deep-dive.md
Last updated 1/25/2023 --+ # Technical deep dive on platform-supported migration from classic to Azure Resource Manager
virtual-machines Migration Classic Resource Manager Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-errors.md
Last updated 03/08/2023 -+ # Errors that commonly occur during Classic to Azure Resource Manager migration
virtual-machines Migration Classic Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-overview.md
Last updated 1/25/2023 --+ # Platform-supported migration of IaaS resources from classic to Azure Resource Manager
virtual-machines Migration Classic Resource Manager Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-plan.md
Last updated 01/25/2023 -+
virtual-machines Migration Classic Resource Manager Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-ps.md
Last updated 01/25/2023 --+ # Migrate IaaS resources from classic to Azure Resource Manager by using PowerShell
virtual-machines Copy Managed Disks To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-to-same-or-different-subscription.md
Last updated 02/22/2023 -+ # Copy managed disks to same or different subscription with CLI
virtual-machines Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-same-or-different-subscription.md
Last updated 02/22/2023 -+ # Copy snapshot of a managed disk to same or different subscription with CLI
virtual-machines Copy Snapshot To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-storage-account.md
Last updated 02/23/2022 -+ # Export/Copy a snapshot to a storage account in different region with CLI
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
vm-linux
Last updated 02/22/2023 -+ # Create a managed disk from a snapshot with CLI (Linux)
virtual-machines Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-vhd.md
Last updated 02/23/2022 -+ # Create a managed disk from a VHD file in a storage account in the same subscription with CLI (Linux)
virtual-machines Create Vm From Managed Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-managed-os-disks.md
vm-linux
Last updated 02/23/2022 -+ # Create a virtual machine using an existing managed OS disk with CLI
virtual-machines Create Vm From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-snapshot.md
vm-linux
Last updated 02/23/2022 -+ # Create a virtual machine from a snapshot with CLI
virtual-machines Virtual Machines Powershell Sample Copy Managed Disks Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-managed-disks-vhd.md
tags: azure-service-management+ ms.assetid:
virtual-machines Virtual Machines Powershell Sample Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-snapshot-to-same-or-different-subscription.md
tags: azure-service-management+ ms.assetid:
virtual-machines Virtual Machines Powershell Sample Copy Snapshot To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-copy-snapshot-to-storage-account.md
+ Last updated 06/05/2017
virtual-machines Virtual Machines Powershell Sample Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot.md
vm-windows+ Last updated 06/05/2017
virtual-machines Virtual Machines Powershell Sample Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-vhd.md
+ Last updated 06/05/2017
virtual-machines Virtual Machines Powershell Sample Create Snapshot From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-snapshot-from-vhd.md
-
+ Title: VHD snapshot to make many identical managed disks (Windows) - PowerShell description: Azure PowerShell Script Sample - Create a snapshot from a VHD to create multiple identical managed disks in small amount of time documentationcenter: storage
vm-windows+ Last updated 06/05/2017
virtual-machines Share Gallery Community https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-community.md
Previously updated : 07/07/2022 Last updated : 03/30/2023 -+ ms.devlang: azurecli
Sharing images to the community is a new capability in [Azure Compute Gallery](.
> [!IMPORTANT] > Azure Compute Gallery ΓÇô community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). >
-> To publish a community gallery, you need to register for the preview at [https://aka.ms/communitygallery-preview](https://aka.ms/communitygallery-preview). It will take up to 10 business days after submitting the form to approve the feature. Creating VMs from the community gallery is open to all Azure users.
+> To publish a community gallery, you'll need to [set up preview features in your Azure subscription](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal) and register the 'CommunityGallery' feature (a PowerShell sketch follows this note). Creating VMs from community gallery images is open to all Azure users.
> > During the preview, the gallery must be created as a community gallery (for CLI, this means using the `--permissions community` parameter); you currently can't migrate a regular gallery to a community gallery. >
Information from your image definitions will also be publicly available, like wh
> > If you stop sharing your gallery during the preview, you won't be able to re-share it.
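
If you prefer to script the feature registration, the following is a minimal PowerShell sketch. The feature name is taken from the note above; confirm the exact name for your subscription before relying on it:

```powershell
# Register the community gallery preview feature (feature name assumed from the note above)
Register-AzProviderFeature -FeatureName 'CommunityGallery' -ProviderNamespace 'Microsoft.Compute'

# Check the registration state; it can take a few minutes to show 'Registered'
Get-AzProviderFeature -FeatureName 'CommunityGallery' -ProviderNamespace 'Microsoft.Compute'
```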
+## Reporting issues with a public image
+Using community-submitted virtual machine images carries several risks. Certain images could harbor malware or security vulnerabilities, or could violate someone's intellectual property. To help create a secure and reliable experience for the community, you can report images in which you see these issues.
+
+### Reporting images through the Azure portal:
+Selecting a community image will show several "Report" options. You can report the whole image, or report a specific version if previous versions were unaffected by the issue you encountered.
++
+### Reporting images externally:
+- Malicious images: Contact [Abuse Report](https://msrc.microsoft.com/report/abuse).
+
+- Intellectual Property violations: Contact [Infringement Report](https://msrc.microsoft.com/report/infringement).
+
+ ## Start sharing publicly In order to share a gallery publicly, it needs to be created as a community gallery. For more information, see [Create a community gallery](create-gallery.md#create-a-community-gallery)
virtual-machines Ssh Keys Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ssh-keys-azure-cli.md
+ Last updated 11/17/2021 - # Generate and store SSH keys with the Azure CLI
virtual-machines Update Image Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/update-image-resources.md
Title: List, update, and delete resources
description: List, update, and delete resources in your Azure Compute Gallery. -+ Previously updated : 04/20/2022 Last updated : 03/23/2023
Remove-AzResourceGroup -Name $resourceGroup
> [!IMPORTANT] > Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> To publish a community gallery, you'll need to [set up preview features in your Azure subscription](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal). Creating VMs from community gallery images is open to all Azure users.
To list your own galleries, and output the public names for your community galleries:
virtual-machines Using Managed Disks Template Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/using-managed-disks-template-deployments.md
+ Last updated 06/01/2017
To find full information on the REST API specifications, please review the [crea
* Visit the [Azure Managed Disks Overview](managed-disks-overview.md) document to learn more about managed disks. * Review the template reference documentation for virtual machine resources by visiting the [Microsoft.Compute/virtualMachines template reference](/azure/templates/microsoft.compute/virtualmachines) document. * Review the template reference documentation for disk resources by visiting the [Microsoft.Compute/disks template reference](/azure/templates/microsoft.compute/disks) document.
-* For information on how to use managed disks in Azure virtual machine scale sets, visit the [Use data disks with scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks.md) document.
+* For information on how to use managed disks in Azure virtual machine scale sets, visit the [Use data disks with scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-attached-disks.md) document.
virtual-machines Virtual Machine Scale Sets Maintenance Control Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-template.md
Last updated 11/22/2022 -+ #pmcontact: PPHILLIPS
For more information, see [configurationAssignments](/azure/templates/microsoft.
## Next steps > [!div class="nextstepaction"]
-> [Learn about maintenance and updates for virtual machines running in Azure](maintenance-and-updates.md)
+> [Learn about maintenance and updates for virtual machines running in Azure](maintenance-and-updates.md)
virtual-machines Virtual Machines Create Restore Points Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points-cli.md
Last updated 06/30/2022-+
az disk create --resource-group "ExampleRg" --name "ExampleDataDisk1" --
Once you have created the disks, [create a new VM](./scripts/create-vm-from-managed-os-disks.md) and [attach these restored disks](./linux/add-disk.md#attach-an-existing-disk) to the newly created VM. ## Next steps
-[Learn more](./backup-recovery.md) about Backup and restore options for virtual machines in Azure.
+[Learn more](./backup-recovery.md) about Backup and restore options for virtual machines in Azure.
virtual-machines Vm Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-generalized-image-version.md
Previously updated : 02/16/2023 Last updated : 03/23/2023 -+
New-AzVM -ResourceGroupName $resourceGroup -Location $location -VM $vmConfig
> [!IMPORTANT] > Azure Compute Gallery – community galleries is currently in PREVIEW and subject to the [Preview Terms for Azure Compute Gallery - community gallery](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). >
+> To publish a community gallery, you'll need to [set up preview features in your Azure subscription](/azure/azure-resource-manager/management/preview-features?tabs=azure-portal). Creating VMs from community gallery images is open to all Azure users.
+>
> Microsoft does not provide support for images in the [community gallery](azure-compute-gallery.md#community).
+## Reporting issues with a public image
+Using community-submitted virtual machine images carries several risks. Certain images could harbor malware or security vulnerabilities, or could violate someone's intellectual property. To help create a secure and reliable experience for the community, you can report images in which you see these issues.
+
+### Reporting images through the Azure portal:
+Selecting a community image will show several "Report" options. You can report the whole image, or report a specific version if previous versions were unaffected by the issue you encountered.
++
+### Reporting images externally:
+- Malicious images: Contact [Abuse Report](https://msrc.microsoft.com/report/abuse).
+
+- Intellectual Property violations: Contact [Infringement Report](https://msrc.microsoft.com/report/infringement).
+ ### [CLI](#tab/cli3)
virtual-machines Vm Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-specialized-image-version.md
Previously updated : 02/14/2023 Last updated : 03/23/2023 -+
New-AzVM `
> > Microsoft does not provide support for images in the [community gallery](azure-compute-gallery.md#community).
+### Reporting issues with a public image
+Using community-submitted virtual machine images carries several risks. Certain images could harbor malware or security vulnerabilities, or could violate someone's intellectual property. To help create a secure and reliable experience for the community, you can report images in which you see these issues.
+
+#### Reporting images through the Azure portal:
+Selecting a community image will show several "Report" options. You can report the whole image, or report a specific version if previous versions were unaffected by the issue you encountered.
++
+#### Reporting images externally:
+- Malicious images: Contact [Abuse Report](https://msrc.microsoft.com/report/abuse).
+
+- Intellectual Property violations: Contact [Infringement Report](https://msrc.microsoft.com/report/infringement).
+ ### [CLI](#tab/cli4)
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/build-image-with-packer.md
Title: PowerShell - How to create VM Images with Packer description: Learn how to use Packer and PowerShell to create images of virtual machines in Azure-+ Previously updated : 08/05/2020- Last updated : 03/31/2023+
Each virtual machine (VM) in Azure is created from an image that defines the Windows distribution and OS version. Images can include pre-installed applications and configurations. The Azure Marketplace provides many first and third-party images for most common OS' and application environments, or you can create your own custom images tailored to your needs. This article details how to use the open-source tool [Packer](https://www.packer.io/) to define and build custom images in Azure.
-This article was last tested on 8/5/2020 using [Packer](https://www.packer.io/docs/install) version 1.6.1.
+This article was last tested on 8/5/2020 using [Packer](https://www.packer.io/docs/install) version 1.8.1.
> [!NOTE] > Azure now has a service, Azure Image Builder, for defining and creating your own custom images. Azure Image Builder is built on Packer, so you can even use your existing Packer shell provisioner scripts with it. To get started with Azure Image Builder, see [Create a Windows VM with Azure Image Builder](image-builder.md).
If you don't already have Packer installed on your local machine, [follow the Pa
Build the image by opening a cmd prompt and specifying your Packer template file as follows:
-```
-./packer build windows.json
+```powershell
+packer build windows.json
``` You can also build the image by specifying the *windows.pkr.hcl* file as follows:
packer build windows.pkr.hcl
An example of the output from the preceding commands is as follows:
-```bash
+```powershell
azure-arm output will be in this color. ==> azure-arm: Running builder ...
virtual-machines Connect Winrm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/connect-winrm.md
Last updated 3/25/2022 --+ # Setting up WinRM access for Virtual Machines in Azure Resource Manager **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
virtual-machines Create Vm Specialized Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/create-vm-specialized-portal.md
- Title: Create a Windows VM from a specialized VHD in the Azure portal
-description: Create a new Windows VM from a VHD in the Azure portal.
------ Previously updated : 02/24/2023---
-# Create a VM from a VHD by using the Azure portal
-
-**Applies to:** :heavy_check_mark: Windows VMs
--
-> [!NOTE]
-> Customers are encouraged to use Azure Compute Gallery as all new features like ARM64, Trusted Launch, and Confidential VM are only supported through Azure Compute Gallery.  If you have an existing VHD or managed image, you can use it as a source and create an Azure Compute Gallery image. For more information, see [Create an image definition and image version](../image-version.md).
->
-> Creating an image instead of just attaching a disk means you can create multiple VMs from the same source disk.
--
-There are several ways to create a virtual machine (VM) in Azure:
--- If you already have a virtual hard disk (VHD) to use or you want to copy the VHD from an existing VM to use, you can create a new VM by *attaching* the VHD to the new VM as an OS disk.--- You can create a new VM from the VHD of a VM that has been deleted. For example, if you have an Azure VM that isn't working correctly, you can delete the VM and use its VHD to create a new VM. You can either reuse the same VHD or create a copy of the VHD by creating a snapshot and then creating a new managed disk from the snapshot. Although creating a snapshot takes a few more steps, it preserves the original VHD and provides you with a fallback.--- You can create an Azure VM from an on-premises VHD by uploading the on-premises VHD and attaching it to a new VM. You use PowerShell or another tool to upload the VHD to a storage account, and then you create a managed disk from the VHD. For more information, see [Upload a specialized VHD](create-vm-specialized.md#option-2-upload-a-specialized-vhd).---
-> [!IMPORTANT]
->
-> When you use a [specialized](shared-image-galleries.md#generalized-and-specialized-images) disk to create a new VM, the new VM retains the computer name of the original VM. Other computer-specific information (e.g. CMID) is also kept and, in some cases, this duplicate information could cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
-> Don't use a specialized disk if you want to create multiple VMs. Instead, for larger deployments, create an image and then use that image to create multiple VMs.
-> For more information, see [Store and share images in an Azure Compute Gallery](shared-image-galleries.md).
-
-We recommend that you limit the number of concurrent deployments to 20 VMs from a single snapshot or VHD.
-
-## Copy a disk
-
-Create a snapshot and then create a disk from the snapshot. This strategy allows you to keep the original VHD as a fallback:
-
-1. Open the [Azure portal](https://portal.azure.com).
-2. In the search box, enter **disks** and then select **Disks** to display the list of available disks.
-3. Select the disk that you would like to use. The **Disk** page for that disk appears.
-4. From the menu at the top, select **Create snapshot**.
-5. Choose a **Resource group** for the snapshot. You can use either an existing resource group or create a new one.
-6. Enter a **Name** for the snapshot.
-7. For **Snapshot type**, choose **Full**.
-8. For **Storage type**, choose **Standard HDD**, **Premium SSD**, or **Zone-redundant** storage.
-9. When you're done, select **Review + create** to create the snapshot.
-10. After the snapshot has been created, select **Home** > **Create a resource**.
-11. In the search box, enter **managed disk** and then select **Managed Disks** from the list.
-12. On the **Managed Disks** page, select **Create**.
-13. Choose a **Resource group** for the disk. You can use either an existing resource group or create a new one. This selection will also be used as the resource group where you create the VM from the disk.
-14. For **Region**, you must select the same region where the snapshot is located.
-15. Enter a **Name** for the disk.
-16. In **Source type**, ensure **Snapshot** is selected.
-17. In the **Source snapshot** drop-down, select the snapshot you want to use.
-18. For **Size**, you can change the storage type and size as needed.
-19. Make any other adjustments as needed and then select **Review + create** to create the disk. Once validation passes, select **Create**.
-
-## Create a VM from a disk
-
-After you have the managed disk VHD that you want to use, you can create the VM in the portal:
-
-1. In the search box, enter **disks** and then select **Disks** to display the list of available disks.
-3. Select the disk that you would like to use. The **Disk** page for that disk opens.
-4. In the **Essentials** section, ensure that **Disk state** is listed as **Unattached**. If it isn't, you might need to either detach the disk from the VM or delete the VM to free up the disk.
-4. In the menu at the top of the page, select **Create VM**.
-5. On the **Basics** page for the new VM, enter a **Virtual machine name** and either select an existing **Resource group** or create a new one.
-6. For **Size**, select **Change size** to access the **Size** page.
-7. The disk name should be pre-filled in the **Image** section.
-8. On the **Disks** page, you may notice that the "OS Disk Type" cannot be changed. This preselected value is configured at the point of snapshot or VHD creation and will carry over to the new VM. If you need to modify the disk type, take a new snapshot from an existing VM or disk.
-9. On the **Networking** page, you can either let the portal create all new resources or you can select an existing **Virtual network** and **Network security group**. The portal always creates a new network interface and public IP address for the new VM.
-10. On the **Management** page, make any changes to the monitoring options.
-11. On the **Guest config** page, add any extensions as needed.
-12. When you're done, select **Review + create**.
-13. If the VM configuration passes validation, select **Create** to start the deployment.
--
-## Next steps
-
-You can also [create an image definition and image version](../image-version.md) from your VHD.
virtual-machines Create Vm Specialized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/create-vm-specialized.md
- Title: Create a Windows VM from a specialized VHD in Azure
-description: Create a new Windows VM by attaching a specialized managed disk as the OS disk by using the Resource Manager deployment model.
------ Previously updated : 10/10/2019----
-# Create a Windows VM from a specialized disk by using PowerShell
-
-**Applies to:** :heavy_check_mark: Windows VMs
-
-Create a new VM by attaching a specialized managed disk as the OS disk. A specialized disk is a copy of a virtual hard disk (VHD) from an existing VM that contains the user accounts, applications, and other state data from your original VM.
-
-You have several options:
-* [Use an existing managed disk](#option-1-use-an-existing-disk). This option is useful if you have a VM that isn't working correctly. You can delete the VM and then reuse the managed disk to create a new VM.
-* [Upload a VHD](#option-2-upload-a-specialized-vhd)
-* [Copy an existing Azure VM by using snapshots](#option-3-copy-an-existing-azure-vm)
-
-You can also use the Azure portal to [create a new VM from a specialized VHD](create-vm-specialized-portal.md).
-
-This article shows you how to use managed disks. If you have a legacy deployment that requires using a storage account, see [Create a VM from a specialized VHD in a storage account](/previous-versions/azure/virtual-machines/windows/sa-create-vm-specialized).
-
-> [!IMPORTANT]
->
-> When you use a specialized disk to create a new VM, the new VM retains the computer name of the original VM. Other computer-specific information (e.g. CMID) is also kept and, in some cases, this duplicate information could cause issues. When copying a VM, be aware of what types of computer-specific information your applications rely on.
-> Thus, don't use a specialized disk if you want to create multiple VMs. Instead, for larger deployments, [create an image](capture-image-resource.md) and then [use that image to create multiple VMs](create-vm-generalized-managed.md).
-
-We recommend that you limit the number of concurrent deployments to 20 VMs from a single VHD or snapshot.
-
-## Option 1: Use an existing disk
-
-If you had a VM that you deleted and you want to reuse the OS disk to create a new VM, use [Get-AzDisk](/powershell/module/az.compute/get-azdisk).
-
-```powershell
-$resourceGroupName = 'myResourceGroup'
-$osDiskName = 'myOsDisk'
-$osDisk = Get-AzDisk `
--ResourceGroupName $resourceGroupName `
--DiskName $osDiskName
-```
-You can now attach this disk as the OS disk to a [new VM](#create-the-new-vm).
-
-## Option 2: Upload a specialized VHD
-
-You can upload the VHD from a specialized VM created with an on-premises virtualization tool, like Hyper-V, or a VM exported from another cloud.
-
-### Prepare the VM
-Use the VHD as-is to create a new VM.
-
- * [Prepare a Windows VHD to upload to Azure](prepare-for-upload-vhd-image.md). **Do not** generalize the VM by using Sysprep.
- * Remove any guest virtualization tools and agents that are installed on the VM (such as VMware tools).
- * Make sure the VM is configured to get the IP address and DNS settings from DHCP. This ensures that the server obtains an IP address within the virtual network when it starts up.
--
-### Upload the VHD
-
-You can now upload a VHD straight into a managed disk. For instructions, see [Upload a VHD to Azure using Azure PowerShell](disks-upload-vhd-to-managed-disk-powershell.md).
-
-## Option 3: Copy an existing Azure VM
-
-You can create a copy of a VM that uses managed disks by taking a snapshot of the VM, and then by using that snapshot to create a new managed disk and a new VM.
-
-If you want to copy an existing VM to another region, you might want to use azcopy to [create a copy of a disk in another region](disks-upload-vhd-to-managed-disk-powershell.md#copy-a-managed-disk).
-
-### Take a snapshot of the OS disk
-
-You can take a snapshot of an entire VM (including all disks) or of just a single disk. The following steps show you how to take a snapshot of just the OS disk of your VM with the [New-AzSnapshot](/powershell/module/az.compute/new-azsnapshot) cmdlet.
-
-First, set some parameters.
-
- ```powershell
-$resourceGroupName = 'myResourceGroup'
-$vmName = 'myVM'
-$location = 'westus'
-$snapshotName = 'mySnapshot'
-```
-
-Get the VM object.
-
-```powershell
-$vm = Get-AzVM -Name $vmName `
- -ResourceGroupName $resourceGroupName
-```
-Get the OS disk name.
-
- ```powershell
-$disk = Get-AzDisk -ResourceGroupName $resourceGroupName `
- -DiskName $vm.StorageProfile.OsDisk.Name
-```
-
-Create the snapshot configuration.
-
- ```powershell
-$snapshotConfig = New-AzSnapshotConfig `
- -SourceUri $disk.Id `
- -OsType Windows `
- -CreateOption Copy `
- -Location $location
-```
-
-Take the snapshot.
-
-```powershell
-$snapShot = New-AzSnapshot `
- -Snapshot $snapshotConfig `
- -SnapshotName $snapshotName `
- -ResourceGroupName $resourceGroupName
-```
--
-To use this snapshot to create a VM that needs to be high-performing, add the parameter `-AccountType Premium_LRS` to the New-AzSnapshotConfig command. This parameter creates the snapshot so that it's stored as a Premium Managed Disk. Premium Managed Disks are more expensive than Standard, so be sure you'll need Premium before using this parameter.
-
-### Create a new disk from the snapshot
-
-Create a managed disk from the snapshot by using [New-AzDisk](/powershell/module/az.compute/new-azdisk). This example uses *myOSDisk* for the disk name.
-
-Create a new resource group for the new VM.
-
-```powershell
-$destinationResourceGroup = 'myDestinationResourceGroup'
-New-AzResourceGroup -Location $location `
- -Name $destinationResourceGroup
-```
-
-Set the OS disk name.
-
-```powershell
-$osDiskName = 'myOsDisk'
-```
-
-Create the managed disk.
-
-```powershell
-$osDisk = New-AzDisk -DiskName $osDiskName -Disk `
- (New-AzDiskConfig -Location $location -CreateOption Copy `
- -SourceResourceId $snapshot.Id) `
- -ResourceGroupName $destinationResourceGroup
-```
--
-## Create the new VM
-
-Create networking and other VM resources to be used by the new VM.
-
-### Create the subnet and virtual network
-
-Create the [virtual network](../../virtual-network/virtual-networks-overview.md) and subnet for the VM.
-
-1. Create the subnet. This example creates a subnet named *mySubNet*, in the resource group *myDestinationResourceGroup*, and sets the subnet address prefix to *10.0.0.0/24*.
-
- ```powershell
- $subnetName = 'mySubNet'
- $singleSubnet = New-AzVirtualNetworkSubnetConfig `
- -Name $subnetName `
- -AddressPrefix 10.0.0.0/24
- ```
-
-2. Create the virtual network. This example sets the virtual network name to *myVnetName*, the location to *West US*, and the address prefix for the virtual network to *10.0.0.0/16*.
-
- ```powershell
- $vnetName = "myVnetName"
- $vnet = New-AzVirtualNetwork `
- -Name $vnetName -ResourceGroupName $destinationResourceGroup `
- -Location $location `
- -AddressPrefix 10.0.0.0/16 `
- -Subnet $singleSubnet
- ```
-
-
-### Create the network security group and an RDP rule
-To be able to sign in to your VM with remote desktop protocol (RDP), you'll need to have a security rule that allows RDP access on port 3389. In our example, the VHD for the new VM was created from an existing specialized VM, so you can use an account that existed on the source virtual machine for RDP. This example denies RDP traffic, to be more secure. You can change `-Access` to `Allow` if you want to allow RDP access.
-
-This example sets the network security group (NSG) name to *myNsg* and the RDP rule name to *myRdpRule*.
-
-```powershell
-$nsgName = "myNsg"
-
-$rdpRule = New-AzNetworkSecurityRuleConfig -Name myRdpRule -Description "Deny RDP" `
- -Access Deny -Protocol Tcp -Direction Inbound -Priority 110 `
- -SourceAddressPrefix Internet -SourcePortRange * `
- -DestinationAddressPrefix * -DestinationPortRange 3389
-$nsg = New-AzNetworkSecurityGroup `
- -ResourceGroupName $destinationResourceGroup `
- -Location $location `
- -Name $nsgName -SecurityRules $rdpRule
-
-```
-
-For more information about endpoints and NSG rules, see [Opening ports to a VM in Azure by using PowerShell](nsg-quickstart-powershell.md).
-
-### Create a public IP address and NIC
-To enable communication with the virtual machine in the virtual network, you'll need a [public IP address](../../virtual-network/ip-services/public-ip-addresses.md) and a network interface.
-
-1. Create the public IP. In this example, the public IP address name is set to *myIP*.
-
- ```powershell
- $ipName = "myIP"
- $pip = New-AzPublicIpAddress `
- -Name $ipName -ResourceGroupName $destinationResourceGroup `
- -Location $location `
- -AllocationMethod Dynamic
- ```
-
-2. Create the NIC. In this example, the NIC name is set to *myNicName*.
-
- ```powershell
- $nicName = "myNicName"
- $nic = New-AzNetworkInterface -Name $nicName `
- -ResourceGroupName $destinationResourceGroup `
- -Location $location -SubnetId $vnet.Subnets[0].Id `
- -PublicIpAddressId $pip.Id `
- -NetworkSecurityGroupId $nsg.Id
- ```
-
--
-### Set the VM name and size
-
-This example sets the VM name to *myVM* and the VM size to *Standard_A2*.
-
-```powershell
-$vmName = "myVM"
-$vmConfig = New-AzVMConfig -VMName $vmName -VMSize "Standard_A2"
-```
-
-### Add the NIC
-
-```powershell
-$vm = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
-```
-
-
-### Add the OS disk
-
-Add the OS disk to the configuration by using [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk). This example sets the size of the disk to *128 GB* and attaches the managed disk as a *Windows* OS disk.
-
-```powershell
-$vm = Set-AzVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -StorageAccountType Standard_LRS `
- -DiskSizeInGB 128 -CreateOption Attach -Windows
-```
-
-### Complete the VM
-
-Create the VM by using [New-AzVM](/powershell/module/az.compute/new-azvm) with the configurations that we just created.
-
-```powershell
-New-AzVM -ResourceGroupName $destinationResourceGroup -Location $location -VM $vm
-```
-
-If this command is successful, you'll see output like this:
-
-```powershell
-RequestId IsSuccessStatusCode StatusCode ReasonPhrase
---------- ------------------- ---------- ------------
-                         True         OK OK
-
-```
-
-### Verify that the VM was created
-You should see the newly created VM either in the [Azure portal](https://portal.azure.com) under **Browse** > **Virtual machines**, or by using the following PowerShell commands.
-
-```powershell
-$vmList = Get-AzVM -ResourceGroupName $destinationResourceGroup
-$vmList.Name
-```
-
-## Next steps
-Sign in to your new virtual machine. For more information, see [How to connect and log on to an Azure virtual machine running Windows](connect-logon.md).
virtual-machines Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/java.md
Last updated 10/09/2021-+ - # Create and manage Windows VMs in Azure using Java
It should take about five minutes for this console application to run completely
## Next steps
-* Learn more about using the [Azure libraries for Java](/java/azure/java-sdk-azure-overview).
+* Learn more about using the [Azure libraries for Java](/java/azure/java-sdk-azure-overview).
virtual-machines Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-bicep.md
Last updated 03/11/2022 -+ # Quickstart: Create a Windows virtual machine using a Bicep file
Remove-AzResourceGroup -Name exampleRG
In this quickstart, you deployed a simple virtual machine using a Bicep file. To learn more about Azure virtual machines, continue to the tutorial for Windows VMs. > [!div class="nextstepaction"]
-> [Azure Windows virtual machine tutorials](./tutorial-manage-vm.md)
+> [Azure Windows virtual machine tutorials](./tutorial-manage-vm.md)
virtual-machines Template Description https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/template-description.md
description: Learn more about how the virtual machine resource is defined in an
+ Last updated 01/03/2019
virtual-machines Byos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/byos.md
description: Learn about bring-your-own-subscription images for Red Hat Enterpri
+ Last updated 06/10/2020
virtual-network-manager How To Create Mesh Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-mesh-network-powershell.md
Last updated 03/22/2023-+ # Create a mesh network topology with Azure Virtual Network Manager - Azure PowerShell
virtual-network Create Peering Different Deployment Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-deployment-models.md
tags: azure-resource-manager+
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
description: Learn about how to create a custom IP address prefix using the Azur
+ Last updated 03/31/2022
As before, the operation is asynchronous. Use [az network custom-ip prefix show]
- To create a custom IP address prefix using the Azure CLI, see [Create custom IP address prefix using the Azure CLI](create-custom-ip-address-prefix-cli.md). -- To create a custom IP address prefix using the Azure portal, see [Create a custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md).
+- To create a custom IP address prefix using the Azure portal, see [Create a custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md).
virtual-network Create Custom Ip Address Prefix Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-cli.md
description: Learn about how to create a custom IPv6 address prefix using Azure
+ Last updated 03/31/2022
virtual-network Create Custom Ip Address Prefix Ipv6 Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-powershell.md
description: Learn about how to create a custom IPv6 address prefix using Azure
+ Last updated 03/31/2022
virtual-network Create Custom Ip Address Prefix Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md
description: Learn about how to create a custom IPv4 address prefix using Azure
+ Last updated 03/31/2022
virtual-network Manage Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-public-ip-address-prefix.md
Title: Create, change, or delete an Azure public IP address prefix description: Learn about public IP address prefixes and how to create, change, or delete them.- Previously updated : 05/13/2019 Last updated : 03/30/2023 # Manage a public IP address prefix
-A public IP address prefix is a contiguous range of standard SKU public IP addresses. When you create a public IP address resource, you can assign a static public IP from the prefix and associate the address to Azure resources. For more information, see [Public IP address prefix overview](public-ip-address-prefix.md). This article explains how to create, modify, or delete public IP address prefixes, as well as creating public IPs from an existing prefix.
+A public IP address prefix is a contiguous range of standard SKU public IP addresses. When you create a public IP address resource, you can assign a static public IP from the prefix and associate the address to Azure resources. For more information, see [Public IP address prefix overview](public-ip-address-prefix.md). This article explains how to create, modify, or delete public IP address prefixes, and create public IPs from an existing prefix.
## Create a public IP address prefix The following section details the parameters when creating a public IP prefix.
- |Setting|Required?|Details|
- ||||
- |Subscription|Yes|Must exist in the same [subscription](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription) as the resource you want to associate the public IP address to.|
- |Resource group|Yes|Can exist in the same, or different, [resource group](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) as the resource you want to associate the public IP address to.|
- |Name|Yes|The name must be unique within the resource group you select.|
- |Region|Yes|Must exist in the same [region](https://azure.microsoft.com/regions)as the public IP addresses you'll assign addresses from the range.|
- |IP version|Yes| IP version of the prefix (v4 or v6).
- |Prefix size|Yes| The size of the prefix you need. A range with 16 IP addresses (/28 for v4 or /124 for v6) is the default.
+ | Setting | Required? | Details |
+ | | | |
+ | Subscription|Yes|Must exist in the same [subscription](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription) as the resource you want to associate the public IP address to. |
+ | Resource group|Yes|Can exist in the same, or different, [resource group](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) as the resource you want to associate the public IP address to. |
+ | Name | Yes | The name must be unique within the resource group you select.|
+ | Region | Yes | Must exist in the same [region](https://azure.microsoft.com/regions) as the public IP addresses assigned from the range. |
+ | IP version | Yes | IP version of the prefix (v4 or v6). |
+ | Prefix size | Yes | The size of the prefix you need. A range with 16 IP addresses (/28 for v4 or /124 for v6) is the default. |
-Alternatively, you may use the CLI and PowerShell commands below to create a public IP address prefix.
+Alternatively, you may use the following CLI and PowerShell commands to create a public IP address prefix.
**Commands**
-|Tool|Command|
-|||
-|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-create)|
-|PowerShell|[New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefix)|
+| Tool | Command |
+| | |
+| CLI | [az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-create) |
+| PowerShell |[New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefix) |
>[!NOTE] >In regions with availability zones, you can use PowerShell or CLI commands to create a public IP address prefix as non-zonal, associated with a specific zone, or zone-redundant. For API version 2020-08-01 or later, if a zone parameter is not provided, a non-zonal public IP address prefix is created. For versions of the API older than 2020-08-01, a zone-redundant public IP address prefix is created.
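As an illustration of the zonal options described in this note, a zone-redundant prefix might be created as in the following minimal sketch; the resource names and region are placeholders, not values from this article.

```powershell
# Hypothetical example: create a zone-redundant /28 IPv4 public IP prefix.
# With API version 2020-08-01 or later, omitting -Zone creates a non-zonal prefix.
$prefix = New-AzPublicIpPrefix `
    -Name "myPublicIpPrefix" `
    -ResourceGroupName "myResourceGroup" `
    -Location "eastus2" `
    -Sku Standard `
    -PrefixLength 28 `
    -Zone 1,2,3
```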
Alternatively, you may use the CLI and PowerShell commands below to create a pub
The following section details the parameters required when creating a static public IP address from a prefix.
- |Setting|Required?|Details|
- ||||
- |Name|Yes|The name of the public IP address must be unique within the resource group you select.|
- |Idle timeout (minutes)|No|How many minutes to keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. |
- |DNS name label|No|Must be unique within the Azure region you create the name in (across all subscriptions and all customers). Azure automatically registers the name and IP address in its DNS so you can connect to a resource with the name. Azure appends a default subnet *location.cloudapp.azure.com* to the name you provide to create the fully qualified DNS name. For more information, see [Use Azure DNS with an Azure public IP address](../../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address).|
+ | Setting | Required? | Details |
+ | | | |
+ | Name | Yes | The name of the public IP address must be unique within the resource group you select. |
+ | Idle timeout (minutes)| No| How many minutes to keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. |
+ | DNS name label | No | Must be unique within the Azure region you create the name in (across all subscriptions and all customers). </br> Azure automatically registers the name and IP address in its DNS so you can connect to a resource with the name. </br> Azure appends a default subnet *location.cloudapp.azure.com* to the name you provide to create the fully qualified DNS name. </br> For more information, see [Use Azure DNS with an Azure public IP address](../../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address). |
-Alternatively, you may use the CLI and PowerShell commands below with the **--public-ip-prefix (CLI)** and **-PublicIpPrefix (PowerShell)** parameters, to create a public IP address resource from a prefix.
+Alternatively, you may use the following CLI and PowerShell commands with the **`--public-ip-prefix`** **(CLI)** and **`-PublicIpPrefix`** **(PowerShell)** parameters to create a public IP address resource from a prefix.
-|Tool|Command|
-|||
-|CLI|[az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create)|
-|PowerShell|[New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress)|
+| Tool | Command |
+| | |
+| CLI | [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) |
+| PowerShell | [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) |
>[!NOTE] >When requesting a Public IP address from a Public IP Prefix, the allocation is not deterministic or sequential. If a specific Public IP address from a Public IP Prefix is required, the PowerShell or CLI commands allow for this. For PowerShell, the `IpAddress` parameter (followed by the desired IP) should be used; for CLI, the `ip-address` parameter (followed by the desired IP) should be used.
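For example, the following PowerShell sketch requests a specific address from an existing prefix by using the `IpAddress` parameter that the note mentions. The names and the address are placeholders, and the address must fall within the prefix range.

```powershell
# Hypothetical example: create a public IP with a specific address from a prefix.
$prefix = Get-AzPublicIpPrefix -Name "myPublicIpPrefix" -ResourceGroupName "myResourceGroup"
New-AzPublicIpAddress `
    -Name "myPublicIP" `
    -ResourceGroupName "myResourceGroup" `
    -Location "eastus2" `
    -Sku Standard `
    -AllocationMethod Static `
    -PublicIpPrefix $prefix `
    -IpAddress "20.51.1.5"   # placeholder; must be an address inside the prefix range
```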
To view or delete a prefix, the following commands can be used in Azure CLI and
**Commands**
-|Tool|Command|
-|||
-|CLI|[az network public-ip prefix list](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-list) to list public IP addresses<br>[az network public-ip prefix show](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-show) to show settings<br> [az network public-ip prefix update](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-update) to update<br>[az network public-ip prefix delete](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-delete) to delete|
-|PowerShell|[Get-AzPublicIpPrefix](/powershell/module/az.network/get-azpublicipprefix) to retrieve a public IP address object and view its settings<br>[Set-AzPublicIpPrefix](/powershell/module/az.network/set-azpublicipprefix) to update settings<br> [Remove-AzPublicIpPrefix](/powershell/module/az.network/remove-azpublicipprefix) to delete|
+| Tool | Command |
+| | |
+| CLI | [az network public-ip prefix list](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-list) to list public IP addresses. <br> [az network public-ip prefix show](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-show) to show settings. <br> [az network public-ip prefix update](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-update) to update. <br> [az network public-ip prefix delete](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-delete) to delete. |
+| PowerShell |[Get-AzPublicIpPrefix](/powershell/module/az.network/get-azpublicipprefix) to retrieve a public IP address object and view its settings. <br> [Set-AzPublicIpPrefix](/powershell/module/az.network/set-azpublicipprefix) to update settings. <br> [Remove-AzPublicIpPrefix](/powershell/module/az.network/remove-azpublicipprefix) to delete. |
## Permissions
For permissions to manage public IP address prefixes, your account must be assig
| Microsoft.Network/publicIPPrefixes/read | Read a public IP address prefix | | Microsoft.Network/publicIPPrefixes/write | Create or update a public IP address prefix | | Microsoft.Network/publicIPPrefixes/delete | Delete a public IP address prefix |
-|Microsoft.Network/publicIPPrefixes/join/action | Create a public IP address from a prefix |
+| Microsoft.Network/publicIPPrefixes/join/action | Create a public IP address from a prefix |
## Next steps
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
description: Overview of upgrade options and guidance for migrating basic public
+ Last updated 09/19/2022
virtual-network Public Ip Upgrade Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-classic.md
Last updated 05/20/2021-+ # Migrate a classic reserved IP address to a public IP address
virtual-network Public Ip Upgrade Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-powershell.md
Last updated 10/28/2022-+ # Upgrade a public IP address using Azure PowerShell
In this article, you upgraded a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP address using PowerShell](./create-public-ip-powershell.md)
+- [Create a public IP address using PowerShell](./create-public-ip-powershell.md)
virtual-network Routing Preference Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-cli.md
+ Last updated 02/22/2021
You can associate the above created public IP address with a [Windows](../../vir
## Next steps - Learn more about [routing preference in public IP addresses](routing-preference-overview.md). -- [Configure routing preference for a VM using the Azure CLI](./configure-routing-preference-virtual-machine-cli.md).
+- [Configure routing preference for a VM using the Azure CLI](./configure-routing-preference-virtual-machine-cli.md).
virtual-network Virtual Network Deploy Static Pip Arm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-ps.md
Last updated 10/01/2021-+ # Create a virtual machine with a static public IP address using Azure PowerShell
Remove-AzResourceGroup -Name myResourceGroup -Force
- Learn more about [public IP addresses](public-ip-addresses.md#public-ip-addresses) in Azure. - Learn more about all [public IP address settings](virtual-network-public-ip-address.md#create-a-public-ip-address). - Learn more about [private IP addresses](private-ip-addresses.md) and assigning a [static private IP address](virtual-network-network-interface-addresses.md#add-ip-addresses) to an Azure virtual machine.-- Learn more about creating [Linux](../../virtual-machines/windows/tutorial-manage-vm.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Windows](../../virtual-machines/windows/tutorial-manage-vm.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machines.
+- Learn more about creating [Linux](../../virtual-machines/windows/tutorial-manage-vm.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Windows](../../virtual-machines/windows/tutorial-manage-vm.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machines.
virtual-network Virtual Networks Static Private Ip Classic Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-classic-ps.md
Last updated 03/22/2023 -+ # Configure private IP addresses for a virtual machine (Classic) using PowerShell
It's recommended that you don't statically assign the private IP assigned to t
* Learn about [instance-level public IP (ILPIP)](/previous-versions/azure/virtual-network/virtual-networks-instance-level-public-ip) addresses.
-* Consult the [Reserved IP REST APIs](/previous-versions/azure/reference/dn722420(v=azure.100)).
+* Consult the [Reserved IP REST APIs](/previous-versions/azure/reference/dn722420(v=azure.100)).
virtual-network Migrate Classic Vnet Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/migrate-classic-vnet-powershell.md
Last updated 01/25/2022-+ # Migrate an Azure Virtual Network from classic to Resource Manager using Azure PowerShell
For more information on migrating resources in Azure from classic to Resource Ma
- [Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-overview.md). - [Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-faq.yml). - [Planning for migration of IaaS resources from classic to Azure Resource Manager](../virtual-machines/migration-classic-resource-manager-plan.md).-
virtual-network Move Across Regions Nsg Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-nsg-portal.md
Title: Move Azure network security group (NSG) to another Azure region - Azure p
description: Use Azure Resource Manager template to move Azure network security group from one Azure region to another using the Azure portal. + Last updated 08/31/2019
In this tutorial, you moved an Azure network security group from one region to a
- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
virtual-network Move Across Regions Nsg Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-nsg-powershell.md
Last updated 08/31/2019 -+ # Move Azure network security group (NSG) to another region using Azure PowerShell
virtual-network Move Across Regions Publicip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-publicip-powershell.md
Last updated 12/08/2021 -+ # Move Azure Public IP configuration to another region using Azure PowerShell
virtual-network Quickstart Create Nat Gateway Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-bicep.md
Last updated 04/08/2022 -+ # Customer intent: I want to create a NAT gateway using Bicep so that I can provide outbound connectivity for my virtual machines.
virtual-network Quickstart Create Nat Gateway Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-template.md
Last updated 10/27/2020 -+ # Customer intent: I want to create a NAT gateway by using an Azure Resource Manager template so that I can provide outbound connectivity for my virtual machines.
virtual-network Tutorial Dual Stack Outbound Nat Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer.md
Last updated 02/05/2023-+ # Tutorial: Configure dual stack outbound connectivity with a NAT gateway and a public load balancer
az group delete \
Advance to the next article to learn how to: > [!div class="nextstepaction"]
-> [Integrate NAT gateway in a hub and spoke network](tutorial-hub-spoke-route-nat.md)
+> [Integrate NAT gateway in a hub and spoke network](tutorial-hub-spoke-route-nat.md)
virtual-network Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/powershell-samples.md
+ Last updated 07/15/2019 - # Azure PowerShell samples for virtual network
virtual-network Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-bicep.md
Last updated 03/09/2023 -+ # Quickstart: Use Bicep templates to create a virtual network
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureLoadBalancer** | The Azure infrastructure load balancer. The tag translates to the [virtual IP address of the host](./network-security-groups-overview.md#azure-platform-considerations) (168.63.129.16) where the Azure health probes originate. This only includes probe traffic, not real traffic to your backend resource. If you're not using Azure Load Balancer, you can override this rule. | Both | No | No | | **AzureLoadTestingInstanceManagement** | This service tag is used for inbound connectivity from Azure Load Testing service to the load generation instances injected into your virtual network in the private load testing scenario. <br/><br/>**Note:** This tag is intended to be used in Azure Firewall, NSG, UDR and all other gateways for inbound connectivity. | Inbound | No | Yes | | **AzureMachineLearning** | Azure Machine Learning. | Both | No | Yes |
+| **AzureManagedGrafana** | Azure Managed Grafana instance endpoint. | Outbound | No | Yes |
| **AzureMonitor** | Log Analytics, Application Insights, AzMon, and custom metrics (GiG endpoints).<br/><br/>**Note**: For Log Analytics, the **Storage** tag is also required. If Linux agents are used, **GuestAndHybridManagement** tag is also required. | Outbound | No | Yes | | **AzureOpenDatasets** | Azure Open Datasets.<br/><br/>**Note**: This tag has a dependency on the **AzureFrontDoor.Frontend** and **Storage** tag. | Outbound | No | Yes | | **AzurePlatformDNS** | The basic infrastructure (default) DNS service.<br/><br/>You can use this tag to disable the default DNS. Be cautious when you use this tag. We recommend that you read [Azure platform considerations](./network-security-groups-overview.md#azure-platform-considerations). We also recommend that you perform testing before you use this tag. | Outbound | No | No |
The following AzureCloud tags don't have regional names formatted according to t
- AzureCloud.usstagee (EastUSSTG) - AzureCloud.usstagec (SouthCentralUSSTG) -
-> [!NOTE]
-> A subset of this information has been published in XML files for [Azure Public](https://www.microsoft.com/download/details.aspx?id=41653), [Azure China](https://www.microsoft.com/download/details.aspx?id=42064), and [Azure Germany](https://www.microsoft.com/download/details.aspx?id=54770). These XML downloads will be deprecated by June 30, 2020 and will no longer be available after that date. You should migrate to using the Discovery API or JSON file downloads as described in the previous sections.
- > [!TIP] > > - You can detect updates from one publication to the next by noting increased *changeNumber* values in the JSON file. Each subsection (for example, **Storage.WestUS**) has its own *changeNumber* that's incremented as changes occur. The top level of the file's *changeNumber* is incremented when any of the subsections is changed.
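As a rough sketch of tracking *changeNumber* programmatically through the Discovery API (the location and tag name below are assumed examples, not values from this article):

```powershell
# Hypothetical example: read the changeNumber for one service tag and compare it
# with a value recorded on a previous run to detect updates.
$tags = Get-AzNetworkServiceTag -Location westus2
$storageWestUS = $tags.Values | Where-Object { $_.Name -eq "Storage.WestUS" }
$storageWestUS.Properties.ChangeNumber
```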
virtual-network Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/template-samples.md
Title: Azure Resource Manager template samples for virtual network description: Learn about different Azure Resource Manager templates available for you to deploy Azure virtual networks with.- - - Previously updated : 04/22/2019+ Last updated : 03/30/2023 # Azure Resource Manager template samples for virtual network
-The following table includes links to Azure Resource Manager template samples. You can deploy templates using the Azure [portal](../azure-resource-manager/templates/deploy-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json), Azure [CLI](../azure-resource-manager/templates/deploy-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json), or Azure [PowerShell](../azure-resource-manager/templates/deploy-powershell.md?toc=%2fazure%2fvirtual-network%2ftoc.json). To learn how to author your own templates, see [Create your first template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+The following table includes links to Azure Resource Manager template samples. You can deploy templates using the Azure [portal](../azure-resource-manager/templates/deploy-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json), Azure [CLI](../azure-resource-manager/templates/deploy-cli.md?toc=%2fazure%2fvirtual-network%2ftoc.json), or Azure [PowerShell](../azure-resource-manager/templates/deploy-powershell.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+
+To learn how to author your own templates, see [Create your first template](../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
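For instance, deploying one of these samples from PowerShell might look like the following sketch. The resource group name and template URI are placeholders, not values taken from the samples table.

```powershell
# Hypothetical example: deploy a quickstart template by URI.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroup" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/vnet-two-subnets/azuredeploy.json"
```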
For the JSON syntax and properties to use in templates, see [Microsoft.Network resource types](/azure/templates/microsoft.network/allversions).
For the JSON syntax and properties to use in templates, see [Microsoft.Network r
|[Create a virtual network service endpoint for Azure Storage](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/vnet-2subnets-service-endpoints-storage-integration)|Creates a new virtual network with two subnets, and a network interface in each subnet. Enables a service endpoint to Azure Storage for one of the subnets and secures a new storage account to that subnet.| |[Connect two virtual networks](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/vnet-to-vnet-peering)| Creates two virtual networks and a virtual network peering between them.| |[Create a virtual machine with multiple IP addresses](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-multiple-ipconfig)| Creates a Windows or Linux VM with multiple IP addresses.|
-|[Configure IPv4 + IPv6 dual stack virtual network](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/ipv6-in-vnet)|Deploys dual-stack (IPv4+IPv6) virtual network with two VMs and an Azure Basic Load Balancer with IPv4 and IPv6 public IP addresses. |
+|[Configure IPv4 + IPv6 dual stack virtual network](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/ipv6-in-vnet)|Deploys dual-stack (IPv4+IPv6) virtual network with two VMs and an Azure Basic Load Balancer with IPv4 and IPv6 public IP addresses. |
virtual-network Virtual Machine Network Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-machine-network-throughput.md
Title: Azure virtual machine network throughput description: Learn about Azure virtual machine network throughput, including how bandwidth is allocated to a virtual machine.-
-tags: azure-resource-manager
- Previously updated : 4/26/2019 Last updated : 03/30/2023 # Virtual machine network bandwidth
-Azure offers a variety of VM sizes and types, each with a different mix of performance capabilities. One capability is network throughput (or bandwidth), measured in megabits per second (Mbps). Because virtual machines are hosted on shared hardware, the network capacity must be shared fairly among the virtual machines sharing the same hardware. Larger virtual machines are allocated relatively more bandwidth than smaller virtual machines.
+Azure offers various VM sizes and types, each with a different mix of performance capabilities. One capability is network throughput (or bandwidth), measured in megabits per second (Mbps). Because virtual machines are hosted on shared hardware, the network capacity must be shared fairly among the virtual machines sharing the same hardware. Larger virtual machines are allocated relatively more bandwidth than smaller virtual machines.
-The network bandwidth allocated to each virtual machine is metered on egress (outbound) traffic from the virtual machine. All network traffic leaving the virtual machine is counted toward the allocated limit, regardless of destination. For example, if a virtual machine has a 1,000 Mbps limit, that limit applies whether the outbound traffic is destined for another virtual machine in the same virtual network, or outside of Azure.
+The network bandwidth allocated to each virtual machine is metered on egress (outbound) traffic from the virtual machine. All network traffic leaving the virtual machine is counted toward the allocated limit, regardless of destination. For example, if a virtual machine has a 1,000-Mbps limit, that limit applies whether the outbound traffic is destined for another virtual machine in the same virtual network, or outside of Azure.
-Ingress is not metered or limited directly. However, there are other factors, such as CPU and storage limits, which can impact a virtual machineΓÇÖs ability to process incoming data.
+Ingress isn't metered or limited directly. However, there are other factors, such as CPU and storage limits, which can affect a virtual machine's ability to process incoming data.
Accelerated networking is a feature designed to improve network performance, including latency, throughput, and CPU utilization. While accelerated networking can improve a virtual machine's throughput, it can do so only up to the virtual machine's allocated bandwidth. To learn more about Accelerated networking, see Accelerated networking for [Windows](create-vm-accelerated-networking-powershell.md) or [Linux](create-vm-accelerated-networking-cli.md) virtual machines.
Azure virtual machines must have one, but may have several, network interfaces a
## Expected network throughput
-Expected outbound throughput and the number of network interfaces supported by each VM size is detailed in Azure [Windows](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Linux](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) VM sizes. Select a type, such as General purpose, then select a size-series on the resulting page, such as the Dv2-series. Each series has a table with networking specifications in the last column titled,
+Expected outbound throughput and the number of network interfaces supported by each VM size are detailed in Azure [Windows](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Linux](../virtual-machines/sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json) VM sizes. Select a type, such as General purpose, then select a size and series on the resulting page, such as the Dv2-series. Each series has a table with networking specifications in the last column titled
+ **Max NICs / Expected network performance (Mbps)**. The throughput limit applies to the virtual machine. Throughput is unaffected by the following factors:+ - **Number of network interfaces**: The bandwidth limit is cumulative of all outbound traffic from the virtual machine.-- **Accelerated networking**: Though the feature can be helpful in achieving the published limit, it does not change the limit.+
+- **Accelerated networking**: Though the feature can be helpful in achieving the published limit, it doesn't change the limit.
+ - **Traffic destination**: All destinations count toward the outbound limit.+ - **Protocol**: All outbound traffic over all protocols counts towards the limit.
-## Network Flow Limits
+## Network flow limits
+
+In addition to bandwidth, the number of network connections present on a VM at any given time can affect its network performance. The Azure networking stack maintains state for each direction of a TCP/UDP connection in data structures called 'flows'. A typical TCP/UDP connection has two flows created, one for the inbound and another for the outbound direction.
-In addition to bandwidth, the number of network connections present on a VM at any given time can affect its network performance. The Azure networking stack maintains state for each direction of a TCP/UDP connection in data structures called 'flows'. A typical TCP/UDP connection will have 2 flows created, one for the inbound and another for the outbound direction.
+Data transfer between endpoints requires creation of several flows in addition to flows that perform the data transfer. Some examples are flows created for DNS resolution and flows created for load balancer health probes. Network virtual appliances (NVAs) such as gateways, proxies, and firewalls see flows created for connections terminated at the appliance and originated by the appliance.
-Data transfer between endpoints requires creation of several flows in addition to those that perform the data transfer. Some examples are flows created for DNS resolution and flows created for load balancer health probes. Also note that network virtual appliances (NVAs) such as gateways, proxies, firewalls, will see flows being created for connections terminated at the appliance and originated by the appliance.
-![Flow count for TCP conversation through a forwarding appliance](media/virtual-machine-network-throughput/flow-count-through-network-virtual-appliance.png)
+## Flow limits and active connections recommendations
-## Flow Limits and Active Connections Recommendations
+Today, the Azure networking stack supports 1M total flows (500k inbound and 500k outbound) for a VM. Total active connections handled by a VM in different scenarios are as follows.
-Today, the Azure networking stack supports 1M total flows (500k inbound and 500k outbound) for a VM. Total active connections that can be handled by a VM in different scenarios are as follows.
-- VMs that belongs to VNET can handle 500k ***active connections*** for all VM sizes with 500k ***active flows in each direction***. -- VMs with network virtual appliances (NVAs) such as gateway, proxy, firewall can handle 250k ***active connections*** with 500k ***active flows in each direction*** due to the forwarding and additional new flow creation on new connection setup to the next hop as shown in the above diagram.
+- VMs that belong to a virtual network can handle 500k ***active connections*** for all VM sizes with 500k ***active flows in each direction***.
-Once this limit is hit, additional connections are dropped. Connection establishment and termination rates can also affect network performance as connection establishment and termination shares CPU with packet processing routines. We recommend that you benchmark workloads against expected traffic patterns and scale out workloads appropriately to match your performance needs.
+- VMs with network virtual appliances (NVAs) such as gateways, proxies, and firewalls can handle 250k ***active connections*** with 500k ***active flows in each direction*** because of the forwarding and the new flows created on connection setup to the next hop, as shown in the above diagram.
-Metrics are available in [Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachines) to track the number of network flows and the flow creation rate on your VM or VMSS instances.
+Once this limit is hit, new connections are dropped. Connection establishment and termination rates can also affect network performance, because connection establishment and termination share CPU with packet processing routines. We recommend that you benchmark workloads against expected traffic patterns and scale out workloads appropriately to match your performance needs.
-![Screenshot shows the Metrics page of Azure Monitor with a line chart and totals for inbound and outbound flows.](media/virtual-machine-network-throughput/azure-monitor-flow-metrics.png)
+Metrics are available in [Azure Monitor](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachines) to track the number of network flows and the flow creation rate on your VM or Virtual Machine Scale Sets instances.
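As a hedged sketch, these flow metrics can also be queried from PowerShell with Az.Monitor; the resource ID below is a placeholder, not a value from this article.

```powershell
# Hypothetical example: read inbound/outbound flow counts for a VM over the last hour.
$vmId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM"
Get-AzMetric -ResourceId $vmId `
    -MetricName "Inbound Flows","Outbound Flows" `
    -StartTime (Get-Date).AddHours(-1) `
    -AggregationType Average
```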
+ ## Next steps - [Optimize network throughput for a virtual machine operating system](virtual-network-optimize-network-bandwidth.md)+ - [Test network throughput](virtual-network-bandwidth-testing.md) for a virtual machine.
virtual-network What Is Ip Address 168 63 129 16 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/what-is-ip-address-168-63-129-16.md
Title: What is IP address 168.63.129.16? description: Learn about IP address 168.63.129.16, specifically that it's used to facilitate a communication channel to Azure platform resources.- -
-tags: azure-resource-manager
- Previously updated : 05/15/2019 Last updated : 03/30/2023 # What is IP address 168.63.129.16?
-IP address 168.63.129.16 is a virtual public IP address that is used to facilitate a communication channel to Azure platform resources. Customers can define any address space for their private virtual network in Azure. Therefore, the Azure platform resources must be presented as a unique public IP address. This virtual public IP address facilitates the following things:
+IP address 168.63.129.16 is a virtual public IP address that is used to facilitate a communication channel to Azure platform resources. Customers can define any address space for their private virtual network in Azure. Therefore, the Azure platform resources must be presented as a unique public IP address. This virtual public IP address facilitates the following operations:
- Enables the VM Agent to communicate with the Azure platform to signal that it is in a "Ready" state.-- Enables communication with the DNS virtual server to provide filtered name resolution to the resources (such as VM) that do not have a custom DNS server. This filtering makes sure that customers can resolve only the hostnames of their resources.-- Enables [health probes from Azure load balancer](../load-balancer/load-balancer-custom-probe-overview.md) to determine the health state of VMs.+
+- Enables communication with the DNS virtual server to provide filtered name resolution to the resources (such as VM) that don't have a custom DNS server. This filtering makes sure that customers can resolve only the hostnames of their resources.
+
+- Enables [health probes from Azure Load Balancer](../load-balancer/load-balancer-custom-probe-overview.md) to determine the health state of VMs.
+ - Enables the VM to obtain a dynamic IP address from the DHCP service in Azure.+ - Enables Guest Agent heartbeat messages for the PaaS role. > [!NOTE]
IP address 168.63.129.16 is a virtual public IP address that is used to facilita
## Scope of IP address 168.63.129.16
-The public IP address 168.63.129.16 is used in all regions and all national clouds. This special public IP address is owned by Microsoft and will not change. We recommend that you allow this IP address in any local (in the VM) firewall policies (outbound direction). The communication between this special IP address and the resources is safe because only the internal Azure platform can source a message from this IP address. If this address is blocked, unexpected behavior can occur in a variety of scenarios. 168.63.129.16 is a [virtual IP of the host node](./network-security-groups-overview.md#azure-platform-considerations) and as such it is not subject to user defined routes.
+The public IP address 168.63.129.16 is used in all regions and all national clouds. Microsoft owns this special public IP address and it doesn't change. We recommend that you allow this IP address in any local (in the VM) firewall policies (outbound direction). The communication between this special IP address and the resources is safe because only the internal Azure platform can source a message from this IP address. If this address is blocked, unexpected behavior can occur in various scenarios. 168.63.129.16 is a [virtual IP of the host node](./network-security-groups-overview.md#azure-platform-considerations) and as such it isn't subject to user-defined routes.
-- The VM Agent requires outbound communication over ports 80/tcp and 32526/tcp with WireServer (168.63.129.16). These should be open in the local firewall on the VM. The communication on these ports with 168.63.129.16 is not subject to the configured network security groups.
+- The VM Agent requires outbound communication over ports 80/tcp and 32526/tcp with WireServer (168.63.129.16). These ports should be open in the local firewall on the VM. The communication on these ports with 168.63.129.16 isn't subject to the configured network security groups.
-- 168.63.129.16 can provide DNS services to the VM. If this is not desired, outbound traffic to 168.63.129.16 ports 53/udp and 53/tcp can be blocked in the local firewall on the VM.
+- 168.63.129.16 can provide DNS services to the VM. If you don't want the DNS services provided by 168.63.129.16, outbound traffic to 168.63.129.16 ports 53/udp and 53/tcp can be blocked in the local firewall on the VM.
- By default DNS communication is not subject to the configured network security groups unless specifically targeted leveraging the [AzurePlatformDNS](../virtual-network/service-tags-overview.md#available-service-tags) service tag. To block DNS traffic to Azure DNS through NSG, create an outbound rule to deny traffic to [AzurePlatformDNS](../virtual-network/service-tags-overview.md#available-service-tags), and specify "Any" as "Source", "*" as "Destination port ranges", "Any" as protocol and "Deny" as action.
+ By default DNS communication isn't subject to the configured network security groups unless targeted using the [AzurePlatformDNS](../virtual-network/service-tags-overview.md#available-service-tags) service tag. To block DNS traffic to Azure DNS through NSG, create an outbound rule to deny traffic to [AzurePlatformDNS](../virtual-network/service-tags-overview.md#available-service-tags). Specify **"Any"** as **"Source"**, **"*"** as **"Destination port ranges"**, **"Any"** as protocol and **"Deny"** as action.
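A minimal PowerShell sketch of such a rule follows; the rule name and priority are arbitrary choices, not values from this article.

```powershell
# Hypothetical example: outbound rule denying traffic to the AzurePlatformDNS tag.
$denyDnsRule = New-AzNetworkSecurityRuleConfig `
    -Name "DenyAzurePlatformDNS" `
    -Direction Outbound `
    -Access Deny `
    -Priority 100 `
    -Protocol * `
    -SourceAddressPrefix * `
    -SourcePortRange * `
    -DestinationAddressPrefix AzurePlatformDNS `
    -DestinationPortRange *
```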
-- When the VM is part of a load balancer backend pool, [health probe](../load-balancer/load-balancer-custom-probe-overview.md) communication should be allowed to originate from 168.63.129.16. The default network security group configuration has a rule that allows this communication. This rule leverages the [AzureLoadBalancer](../virtual-network/service-tags-overview.md#available-service-tags) service tag. If desired this traffic can be blocked by configuring the network security group however this will result in probes that fail.
+- When the VM is part of a load balancer backend pool, [health probe](../load-balancer/load-balancer-custom-probe-overview.md) communication should be allowed to originate from 168.63.129.16. The default network security group configuration has a rule that allows this communication. This rule uses the [AzureLoadBalancer](../virtual-network/service-tags-overview.md#available-service-tags) service tag. If desired, this traffic can be blocked by configuring the network security group. However, blocking it causes the health probes to fail.
## Troubleshoot connectivity+ > [!NOTE]
-> When running the tests below, the action need to be run as Administrator (Windows) and Root (Linux) to ensure accurate results.
+> When running the following tests, run them as Administrator (Windows) or root (Linux) to ensure accurate results.
### Windows OS+ You can test communication to 168.63.129.16 by using the following tests with PowerShell.
-```
+```powershell
Test-NetConnection -ComputerName 168.63.129.16 -Port 80 Test-NetConnection -ComputerName 168.63.129.16 -Port 32526 Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://168.63.129.16/?comp=versions ```
-Results should return as shown below.
-```
+Results should return as follows.
+
+```powershell
Test-NetConnection -ComputerName 168.63.129.16 -Port 80 ComputerName : 168.63.129.16 RemoteAddress : 168.63.129.16
SourceAddress : 10.0.0.4
TcpTestSucceeded : True ```
-```
+```powershell
Test-NetConnection -ComputerName 168.63.129.16 -Port 32526 ComputerName : 168.63.129.16 RemoteAddress : 168.63.129.16
SourceAddress : 10.0.0.4
TcpTestSucceeded : True ```
-```
+```powershell
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://168.63.129.16/?comp=versions xml Versions -- version="1.0" encoding="utf-8" Versions ```
-You can also test communication to 168.63.129.16 by using telnet or psping.
-If successful, telnet should connect and the file that is created will be empty.
+You can also test communication to 168.63.129.16 by using `telnet` or `psping`.
-```
+If successful, telnet should connect and the file that is created is empty.
+
+```powershell
telnet 168.63.129.16 80 >> C:\<<EDIT-DIRECTORY>>\168-63-129-16_test-port80.txt telnet 168.63.129.16 32526 >> C:\<<EDIT-DIRECTORY>>\168-63-129-16_test--port32526.txt ```
-```
+```powershell
Psping 168.63.129.16:80 >> C:\<<EDIT-DIRECTORY>>\168-63-129-16_test--port80.txt Psping 168.63.129.16:32526 >> C:\<<EDIT-DIRECTORY>>\168-63-129-16_test-port32526.txt ```+ ### Linux OS+ On Linux, you can test communication to 168.63.129.16 by using the following tests.
-```
+```bash
echo "Testing 80 168.63.129.16 Port 80" > 168-63-129-16_test.txt traceroute -T -p 80 168.63.129.16 >> 168-63-129-16_test.txt echo "Testing 80 168.63.129.16 Port 32526" >> 168-63-129-16_test.txt
echo "Test 168.63.129.16 Versions" >> 168-63-129-16_test.txt
curl http://168.63.129.16/?comp=versions >> 168-63-129-16_test.txt ```
-Results inside 168-63-129-16_test.txt should return as shown below.
+Results inside 168-63-129-16_test.txt should return as follows.
-```
+```bash
traceroute -T -p 80 168.63.129.16 traceroute to 168.63.129.16 (168.63.129.16), 30 hops max, 60 byte packets 1 168.63.129.16 (168.63.129.16) 0.974 ms 1.085 ms 1.078 ms
traceroute to 168.63.129.16 (168.63.129.16), 30 hops max, 60 byte packets
traceroute -T -p 32526 168.63.129.16 traceroute to 168.63.129.16 (168.63.129.16), 30 hops max, 60 byte packets 1 168.63.129.16 (168.63.129.16) 0.883 ms 1.004 ms 1.010 ms
-
+ curl http://168.63.129.16/?comp=versions <?xml version="1.0" encoding="utf-8"?> <Versions>
curl http://168.63.129.16/?comp=versions
## Next steps - [Security groups](./network-security-groups-overview.md)+ - [Create, change, or delete a network security group](manage-network-security-group.md)
virtual-wan How To Virtual Hub Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-preference-powershell.md
description: Learn how to configure Virtual WAN virtual hub routing preference using Azure PowerShell. + Last updated 10/26/2022
virtual-wan Quickstart Any To Any Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/quickstart-any-to-any-template.md
Last updated 06/14/2022 -+ # Quickstart: Create an any-to-any configuration using an ARM template
virtual-wan Quickstart Route Shared Services Vnet Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/quickstart-route-shared-services-vnet-template.md
Last updated 03/03/2023 -+ # Quickstart: Route to shared services VNets using an ARM template
virtual-wan User Groups About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-about.md
description: Learn about using user groups to assign IP addresses from specific
Previously updated : 10/21/2022 Last updated : 03/31/2023 # About user groups and IP address pools for P2S User VPNs - Preview
-You can configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article describes the different configurations and parameters the Virtual WAN P2S VPN gateway uses to determine user groups and assign IP addresses.
+You can configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article describes the different configurations and parameters the Virtual WAN P2S VPN gateway uses to determine user groups and assign IP addresses. For configuration steps, see [Configure user groups and IP address pools for P2S User VPNs](user-groups-create.md).
-## Use cases
-
-Contoso corporation is composed of multiple functional departments, such as Finance, Human Resources and Engineering. Contoso uses Virtual WAN to allow remote workers (users) to connect to Azure Virtual WAN and access resources hosted on-premises or in a Virtual Network connected to the Virtual WAN hub.
-
-However, Contoso has internal security policies where users from the Finance department can only access certain databases and Virtual Machines and users from Human Resources have access to other sensitive applications.
+This article covers the following concepts:
-Contoso can configure different user groups for each of their functional departments. This will ensure users from each department are assigned IP addresses from a department-level pre-defined address pool.
-
-Contoso's network administrator can then configure Firewall rules, network security groups (NSG) or access control lists (ACLs) to allow or deny certain users access to resources based on their IP addresses.
+* Server configuration concepts
+ * User groups
+ * Group members
+ * Default policy group
+ * Group priority
+ * Available group settings
+* Gateway concepts
+* Configuration requirements and limitations
+* Use cases
## Server configuration concepts
For every P2S VPN server configuration, one group must be selected as default. U
### Group priority
-Each group is also assigned a numerical priority. Groups with lower priority are evaluated first. This means that if a user presents credentials that match the settings of multiple groups, they'll be considered part of the group with the lowest priority. For example, if user A presents a credential that corresponds to the IT Group (priority 3) and Finance Group (priority 4), user A will be considered part of the IT Group for purposes of assigning IP addresses.
+Each group is also assigned a numerical priority. Groups with lower priority are evaluated first. This means that if a user presents credentials that match the settings of multiple groups, they're considered part of the group with the lowest priority. For example, if user A presents a credential that corresponds to the IT Group (priority 3) and Finance Group (priority 4), user A is considered part of the IT Group for purposes of assigning IP addresses.
### Available group settings The following section describes the different parameters that can be used to define which groups members are a part of. The available parameters vary based on selected authentication methods.
-The table below summarizes the available setting types and acceptable values. For more detailed information on each type of Member Value, view the section corresponding to your authentication type.
+The following table summarizes the available setting types and acceptable values. For more detailed information on each type of Member Value, view the section corresponding to your authentication type.
|Authentication type|Member type |Member values|Example member value| |||||
Azure Active Directory|AADGroupID|Azure Active Directory Group Object ID |0cf484
Gateways using Azure Active Directory authentication can use **Azure Active Directory Group Object IDs** to determine which user group a user belongs to. If a user is part of multiple Azure Active Directory groups, they're considered to be part of the Virtual WAN user group that has the lowest numerical priority.
-However, if you plan to have users who are external (users who are not part of the Azure Active Directory domain configured on the VPN Gateway) connect to the Virtual WAN Point-to-site VPN Gateway, please make sure that the user type of the external user is "Member" and **not** "Guest". Also, make sure that the "Name" of the user is set to the user's email address. If the user type and name of the connecting user is not set correctly as described above or you cannot set an external member to be a "Member" of your Azure Active Directory domain, that connecting user will be assigned to the default group and assigned an IP from the default IP address pool.
+However, if you plan to have users who are external (users who aren't part of the Azure Active Directory domain configured on the VPN gateway) connect to the Virtual WAN Point-to-site VPN gateway, make sure that the user type of the external user is "Member" and **not** "Guest". Also, make sure that the "Name" of the user is set to the user's email address. If the user type and name of the connecting user aren't set correctly as described above, or if you can't set an external member to be a "Member" of your Azure Active Directory domain, that connecting user is assigned to the default group and assigned an IP from the default IP address pool.
-You can also identify whether or not a user is external by looking at the user's "User Principal Name." External users will have **#EXT** in their "User Principal Name."
+You can also identify whether or not a user is external by looking at the user's "User Principal Name." External users have **#EXT** in their "User Principal Name."
:::image type="content" source="./media/user-groups-about/groups.png" alt-text="Screenshot of an Azure Active Directory group." lightbox="./media/user-groups-about/groups.png":::
The following result is:
## Configuration considerations
+This section lists configuration requirements and limitations for user groups and IP address pools.
++
+## Use cases
+
+Contoso corporation is composed of multiple functional departments, such as Finance, Human Resources and Engineering. Contoso uses Virtual WAN to allow remote workers (users) to connect to Azure Virtual WAN and access resources hosted on-premises or in a Virtual Network connected to the Virtual WAN hub.
+
+However, Contoso has internal security policies where users from the Finance department can only access certain databases and virtual machines, and users from Human Resources have access to other sensitive applications.
+
+* Contoso can configure different user groups for each of their functional departments. This ensures users from each department are assigned IP addresses from a department-level predefined address pool.
+
+* Contoso's network administrator can then configure Firewall rules, network security groups (NSG) or access control lists (ACLs) to allow or deny certain users access to resources based on their IP addresses.
## Next steps
-* To create User Groups, see [Create User Groups for P2S User VPN](user-groups-create.md).
+* To create User Groups, see [Create user groups for P2S User VPN](user-groups-create.md).
virtual-wan User Groups Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-create.md
description: Learn how to configure user groups and assign IP addresses from spe
Previously updated : 10/21/2022 Last updated : 03/31/2023 - # Configure user groups and IP address pools for P2S User VPNs - Preview
-You can configure P2S User VPNs to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article helps you configure user groups, group members, and prioritize groups. For more information about working with user groups, see [About user groups](user-groups-about.md).
+P2S User VPNs provide the capability to assign users IP addresses from specific address pools based on their identity or authentication credentials by creating **User Groups**. This article helps you configure user groups, group members, and prioritize groups. For more information about working with user groups, see [About user groups](user-groups-about.md).
+
+## Prerequisites
+
+Before beginning, make sure you've configured a virtual WAN that uses one or more authentication methods. For steps, see [Tutorial: Create a Virtual WAN User VPN P2S connection](virtual-wan-point-to-site-portal.md).
+
+## Workflow
+
+This article uses the following workflow to help you set up user groups and IP address pools for your P2S VPN connection.
+
+1. Consider configuration requirements
+
+1. Choose an authentication mechanism
-## Configuration considerations
+1. Create a User Group
+1. Configure gateway settings
-### Additional configuration information
+## Step 1: Consider configuration requirements
-#### Azure Active Directory groups
+This section lists configuration requirements and limitations for user groups and IP address pools.
++
+## Step 2: Choose an authentication mechanism
+
+The following sections list available authentication mechanisms that can be used while creating user groups.
+
+### Azure Active Directory groups
To create and manage Azure Active Directory groups, see [Manage Azure Active Directory groups and group membership](../active-directory/fundamentals/how-to-manage-groups.md).

* The Azure Active Directory group object ID (and not the group name) needs to be specified as part of the Virtual WAN point-to-site User VPN configuration.
* Azure Active Directory users can be assigned to multiple Active Directory groups, but Virtual WAN considers a user to be part of only the Virtual WAN user/policy group that has the lowest numerical priority.
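One way to look up that object ID is with the Microsoft Graph PowerShell SDK. A minimal sketch, where the group display name "Finance" is a placeholder:

```powershell
Connect-MgGraph -Scopes "Group.Read.All"

# Look up the group by display name. The Id property (the object ID), not the
# display name, is what the point-to-site User VPN configuration expects.
$group = Get-MgGroup -Filter "displayName eq 'Finance'"
$group.Id
```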
-#### RADIUS - NPS vendor-specific attributes
+### RADIUS - NPS vendor-specific attributes
For Network Policy Server (NPS) vendor-specific attributes configuration information, see [RADIUS - configure NPS for vendor-specific attributes](user-groups-radius.md).
-#### Generating self-signed certificates
+### Certificates
To generate self-signed certificates, see [Generate and export certificates for User VPN P2S connections: PowerShell](certificates-point-to-site.md). To generate a certificate with a specific Common Name, change the **Subject** parameter to the appropriate value (for example, xx@domain.com) when running the `New-SelfSignedCertificate` PowerShell command.
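As a sketch of that guidance, the following generates a self-signed certificate whose Subject carries a specific Common Name. The value "user1@contoso.com" and the remaining parameters are illustrative assumptions drawn from common P2S examples, not requirements from this article.

```powershell
# Create a self-signed certificate with the Common Name the gateway evaluates.
$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
    -Subject "CN=user1@contoso.com" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My"
```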
-## Prerequisites
-
-Before beginning, make sure you've configured a virtual WAN that uses one or more authentication methods. For steps, see [Tutorial: Create a Virtual WAN User VPN P2S connection](virtual-wan-point-to-site-portal.md).
+## Step 3: Create a user group
-## Create a user group
+Use the following steps to create a user group.
1. In the Azure portal, go to your **Virtual WAN -> User VPN configurations** page.
-1. On the **User VPN configurations** page, select the User VPN Configuration that you want to edit, then click **Edit configuration**.
+1. On the **User VPN configurations** page, select the User VPN Configuration that you want to edit, then select **Edit configuration**.
1. On the **Edit User VPN configuration** page, open the **User Groups** tab.
1. To begin creating a new User Group, fill out the name parameter with the name of the first group.
-1. Next to the **Group Name**, click **Configure Group** to open the **Configure Group Settings** page.
+1. Next to the **Group Name**, select **Configure Group** to open the **Configure Group Settings** page.
:::image type="content" source="./media/user-groups-create/new-group.png" alt-text="Screenshot of creating a new group." lightbox="./media/user-groups-create/new-group.png":::
:::image type="content" source="./media/user-groups-create/group-members.png" alt-text="Screenshot of configuring values for User Group members." lightbox="./media/user-groups-create/group-members.png":::
-1. When you're finished creating the settings for the group, click **Add** and **Okay**.
+1. When you're finished creating the settings for the group, select **Add** and **Okay**.
1. Create any additional groups.
:::image type="content" source="./media/user-groups-create/select-default.png" alt-text="Screenshot of selecting the default group." lightbox="./media/user-groups-create/select-default.png":::
-1. Click the arrows to adjust the group priority order.
+1. Select the arrows to adjust the group priority order.
:::image type="content" source="./media/user-groups-create/adjust-order.png" alt-text="Screenshot of adjusting the priority order." lightbox="./media/user-groups-create/adjust-order.png":::
-1. Click **Review + create** to create and configure. After you create the User VPN configuration, configure the gateway server configuration settings to use the user groups feature.
+1. Select **Review + create** to create and configure. After you create the User VPN configuration, configure the gateway server configuration settings to use the user groups feature.
-## Configure gateway settings
+## Step 4: Configure gateway settings
-1. In the portal, go to your virtual hub and click **User VPN (Point to site)**.
+1. In the portal, go to your virtual hub and select **User VPN (Point to site)**.
-1. On the point to site page, click the **Gateway scale units** link to open the **Edit User VPN gateway**. Adjust the **Gateway scale units** value from the dropdown to determine gateway throughput.
+1. On the point to site page, select the **Gateway scale units** link to open the **Edit User VPN gateway** page. Adjust the **Gateway scale units** value from the dropdown to determine gateway throughput.
-1. For **Point to site server configuration**, select the User VPN configuration that you configured for user groups. If you haven't yet configured these settings, see [Create user groups](user-groups-create.md).
+1. For **Point to site server configuration**, select the User VPN configuration that you configured for user groups. If you haven't yet configured these settings, see [Create a user group](#step-3-create-a-user-group).
1. Create a new point to site configuration by typing a new **Configuration Name**.
1. Select one or more groups to be associated with this configuration. All the users who are part of groups that are associated with this configuration will be assigned IP addresses from the same IP address pools.
:::image type="content" source="./media/user-groups-create/select-groups.png" alt-text="Screenshot of Edit User VPN gateway page with groups selected." lightbox="./media/user-groups-create/select-groups.png":::
-1. For **Address Pools**, click **Configure** to open the **Specify Address Pools** page. On this page, associate new address pools with this configuration. Users who are members of groups associated to this configuration will be assigned IP addresses from the specified pools. Based on the number of **Gateway Scale Units** associated to the gateway, you may need to specify more than one address pool. Click **Add** and **Okay** to save your address pools.
+1. For **Address Pools**, select **Configure** to open the **Specify Address Pools** page. On this page, associate new address pools with this configuration. Users who are members of groups associated to this configuration will be assigned IP addresses from the specified pools. Based on the number of **Gateway Scale Units** associated to the gateway, you may need to specify more than one address pool. Select **Add** and **Okay** to save your address pools.
:::image type="content" source="./media/user-groups-create/address-pools.png" alt-text="Screenshot of Specify Address Pools page." lightbox="./media/user-groups-create/address-pools.png":::
-1. You'll need one configuration for each set of groups that should be assigned IP addresses from different address pools. Repeat the steps to create more configurations. See [Configuration considerations](#configuration-considerations) for requirements and limitations regarding address pools and groups.
+1. You need one configuration for each set of groups that should be assigned IP addresses from different address pools. Repeat the steps to create more configurations. See [Step 1](#step-1-consider-configuration-requirements) for requirements and limitations regarding address pools and groups.
-1. After you've created the configurations that you need, click **Edit**, and then **Confirm** to save your settings.
+1. After you've created the configurations that you need, select **Edit**, and then **Confirm** to save your settings.
:::image type="content" source="./media/user-groups-create/confirm.png" alt-text="Screenshot of Confirm settings." lightbox="./media/user-groups-create/confirm.png":::

## Troubleshooting
-1. Wireshark or another packet capture can be run in NPS mode and decrypt packets using shared key. You can validate packets are being sent from your RADIUS server to the point-to-site VPN gateway with the right RADIUS VSA configured.
-1. Set up and check NPS Event logging for authentication whether or not users are matching policies.
-1. Every address pool specified on the gateway. Address pools are split into two address pools and assigned to each active-active instance in a point-to-site VPN gateway pair. These split addresses should show up in the effective route table. For example, if you specify 10.0.0.0/24, you should see two /25 routes in the effective route table. If this isn't the case, try changing the address pools defined on the gateway.
-1. Make sure all point-to-site VPN connection configurations are associated to the defaultRouteTable and propagate to the same set of route tables. This should be configured automatically if you're using portal, but if you're using REST, PowerShell or CLI, make sure all propagations and associations are set appropriately.
-1. If you're using the Azure VPN client, make sure the Azure VPN client installed on user devices are the latest version.
-1. If you're using Azure Active Directory authentication, please make sure the tenant URL input in the server configuration (`https://login.microsoftonline.com/<tenant ID>`) does **not** end in a `\`. If the URL is input to end with `\`, the Gateway will not be able to properly process Azure Active Directory user groups and all users will be assigned to the default group. To remediate, please modify the server configuration to remove the trailing `\` and modify the address pools configured on the gateway to apply the changes to the gateway. This is a known issue that will be fixed in a later relase.
-1. If you're using Azure Active Directory authentication and you plan to invite users who are external (users who are not part of the Azure Active Directory domain configured on the VPN Gateway) to connect to the Virtual WAN Point-to-site VPN Gateway, please make sure that the user type of the external user is "Member" and not "Guest". Also, make sure that the "Name" of the user is set to the user's email address. If the user type and name of the connecting user is not set correctly as described above or you cannot set an external member to be a "Member" of your Azure Active Directory domain, that connecting user will be assigned to the default group and assigned an IP from the default IP address pool.
+1. **Do packets have the right attributes?** Wireshark or another packet capture can be run in NPS mode and decrypt packets by using the shared key. You can validate that packets are being sent from your RADIUS server to the point-to-site VPN gateway with the right RADIUS VSA configured.
+1. **Are users getting the wrong IP assigned?** Set up and check NPS event logging to verify whether users are matching the expected authentication policies.
+1. **Having issues with address pools?** Every address pool specified on the gateway is split into two address pools, one for each active-active instance in a point-to-site VPN gateway pair. These split addresses should show up in the effective route table. For example, if you specify "10.0.0.0/24", you should see two "/25" routes in the effective route table. If this isn't the case, try changing the address pools defined on the gateway.
+1. **P2S client not able to receive routes?** Make sure all point-to-site VPN connection configurations are associated to the defaultRouteTable and propagate to the same set of route tables. This should be configured automatically if you're using the portal, but if you're using REST, PowerShell, or CLI, make sure all propagations and associations are set appropriately.
+1. **Not able to enable Multipool using the Azure VPN client?** If you're using the Azure VPN client, make sure the Azure VPN client installed on user devices is the latest version. You need to download the client again to enable this feature.
+1. **All users getting assigned to the default group?** If you're using Azure Active Directory authentication, make sure the tenant URL input in the server configuration (`https://login.microsoftonline.com/<tenant ID>`) doesn't end in a `\`. If the URL ends with `\`, the gateway can't properly process Azure Active Directory user groups, and all users are assigned to the default group. To remediate, modify the server configuration to remove the trailing `\`, and then modify the address pools configured on the gateway to apply the changes. This is a known issue. A quick local check is sketched after this list.
+1. **Trying to invite external users to use the Multipool feature?** If you're using Azure Active Directory authentication and you plan to invite external users (users who aren't part of the Azure Active Directory domain configured on the VPN gateway) to connect to the Virtual WAN point-to-site VPN gateway, make sure that the user type of the external user is "Member" and not "Guest". Also, make sure that the "Name" of the user is set to the user's email address. If the user type and name of the connecting user aren't set correctly as described, or you can't set an external user to be a "Member" of your Azure Active Directory domain, that connecting user is assigned to the default group and assigned an IP from the default IP address pool.
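For the trailing-backslash issue above, a quick local check of the tenant URL before you paste it into the server configuration might look like this. `<tenant ID>` stays a placeholder, and the check itself is an illustrative sketch rather than part of this article.

```powershell
$tenantUrl = 'https://login.microsoftonline.com/<tenant ID>'

# A trailing '\' prevents the gateway from evaluating user groups correctly.
if ($tenantUrl.EndsWith('\')) {
    Write-Warning "Trailing '\' detected; removing it."
    $tenantUrl = $tenantUrl.TrimEnd('\')
}
$tenantUrl
```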
+
## Next steps

* For more information about user groups, see [About user groups and IP address pools for P2S User VPNs](user-groups-about.md).
virtual-wan User Groups Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-radius.md
description: Learn how to configure RADIUS/NPS for user groups to assign IP addr
Previously updated : 10/21/2022 Last updated : 03/31/2023
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
[!INCLUDE [ExpressRoute Performance](../../includes/virtual-wan-expressroute-performance.md)]
-### <a name="update-router"></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
+### <a name="why-am-i-seeing-a-message-and-button-called-update-router-to-latest-software-version-in-portal"></a>Why am I seeing a message and button called "Update router to latest software version" in portal?
-The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. This will enable the virtual hub router to now be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. Azure-wide Cloud Services-based infrastructure is deprecating. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure Portal.
+Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been upgrading virtual hub routers from their Cloud Services infrastructure to Virtual Machine Scale Sets based deployments, which enables the virtual hub router to be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by selecting the button. To take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you must update your virtual hub router via the Azure portal. If the button isn't visible, open a support case.
You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks are not deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update.
vpn-gateway Point To Site Vpn Client Cert Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-windows.md
description: Learn how to configure VPN clients for P2S configurations that use certificate authentication. This article applies to Windows. + Last updated 02/03/2023
vpn-gateway Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/powershell-samples.md
description: Use these Azure PowerShell scripts for creating VPN gateways, creating site-to-site and VNet-to-VNet connections, and downloading VPN device templates. + Last updated 09/03/2020 - # Azure PowerShell samples for VPN Gateway
vpn-gateway Vpn Gateway About Forced Tunneling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-forced-tunneling.md
description: Learn how to configure forced tunneling for virtual networks created using the classic deployment model. + Last updated 02/07/2023 - # Configure forced tunneling using the classic deployment model
vpn-gateway Vpn Gateway Connect Different Deployment Models Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-connect-different-deployment-models-portal.md
Last updated 04/25/2022 --+ # Connect virtual networks from different deployment models using the portal
vpn-gateway Vpn Gateway Connect Different Deployment Models Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md
description: Learn how to connect classic VNets to Resource Manager VNets using
-+ Last updated 04/26/2022
vpn-gateway Vpn Gateway Delete Vnet Gateway Classic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-classic-powershell.md
description: Learn how to delete a virtual network gateway using PowerShell in t
+ Last updated 10/08/2020
vpn-gateway Vpn Gateway Howto Site To Site Classic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md
description: Learn how to create an IPsec connection between your on-premises network and a classic Azure virtual network over the public Internet. + Last updated 10/08/2020 - # Create a Site-to-Site connection using the Azure portal (classic)
For steps to change a gateway SKU, see [Resize a gateway SKU](vpn-gateway-about-
## Next steps * Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](../index.yml).
-* For information about Forced Tunneling, see [About Forced Tunneling](vpn-gateway-about-forced-tunneling.md).
+* For information about Forced Tunneling, see [About Forced Tunneling](vpn-gateway-about-forced-tunneling.md).
vpn-gateway Vpn Gateway Howto Vnet Vnet Portal Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-portal-classic.md
-
+ Title: 'Create a connection between VNets: classic: Azure portal' description: Learn how to connect classic Azure virtual networks together using PowerShell and the Azure portal. + Last updated 10/15/2020 - # Configure a VNet-to-VNet connection (classic)
vpn-gateway Vpn Gateway Modify Local Network Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-modify-local-network-gateway-cli.md
description: Learn how to change IP address prefixes for your local network gateway using the Azure CLI. + Last updated 10/28/2021 - # Modify local network gateway settings using the Azure CLI
Install the latest version of the CLI commands (2.0 or later). For information a
## Next steps
-You can verify your gateway connection. See [Verify a gateway connection](vpn-gateway-verify-connection-resource-manager.md).
+You can verify your gateway connection. See [Verify a gateway connection](vpn-gateway-verify-connection-resource-manager.md).
vpn-gateway Vpn Gateway Multi Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-multi-site.md
description: Learn how to connect multiple on-premises sites to a classic virtua
+ Last updated 09/03/2020 - # Add a Site-to-Site connection to a VNet with an existing VPN gateway connection (classic)
Example return:
## Next steps
-To learn more about VPN Gateways, see [About VPN Gateways](vpn-gateway-about-vpngateways.md).
+To learn more about VPN Gateways, see [About VPN Gateways](vpn-gateway-about-vpngateways.md).
vpn-gateway Vpn Gateway Verify Connection Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-verify-connection-resource-manager.md
description: Learn how to verify a virtual network VPN Gateway connection. + Last updated 06/13/2022 - # Verify a connection for VPN Gateway
To verify your VPN gateway connection for the classic deployment model using Pow
## Next steps
-* You can add virtual machines to your virtual networks. See [Create a Virtual Machine](../virtual-machines/windows/quick-create-portal.md) for steps.
+* You can add virtual machines to your virtual networks. See [Create a Virtual Machine](../virtual-machines/windows/quick-create-portal.md) for steps.
web-application-firewall Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/resource-manager-template-samples.md
description: Azure Resource Manager templates for Azure Front Door Web Applicati
+ Last updated 08/16/2022
web-application-firewall Waf Front Door Exclusion Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-exclusion-configure.md
description: Learn how to configure a WAF exclusion list for an existing Front D
-+ Last updated 10/18/2022
web-application-firewall Waf Front Door Policy Configure Bot Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-policy-configure-bot-protection.md
Title: Configure bot protection for Web Application Firewall with Azure Front Do
description: Learn how to configure bot protection rule in Azure Web Application Firewall (WAF) for Front Door by using Azure portal. + Last updated 11/10/2022
web-application-firewall Waf Front Door Rate Limit Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit-configure.md
Last updated 10/05/2022 -+ zone_pivot_groups: web-application-firewall-configuration
web-application-firewall Application Gateway Customize Waf Rules Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-customize-waf-rules-cli.md
description: This article provides information on how to customize Web Applicati
+ Last updated 11/14/2019
web-application-firewall Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/powershell-samples.md
description: Azure PowerShell examples for Azure Application Gateway
+ Last updated 09/30/2019 - # Azure PowerShell script examples for Azure Application Gateway
The following table includes links to Azure PowerShell script examples for Azure
| Example | Description | | - | -- |
-|[WAF v2 custom rules](../scripts/waf-custom-rules-powershell.md)|Creates an Application Gateway Web Application Firewall v2 with custom rules. |
+|[WAF v2 custom rules](../scripts/waf-custom-rules-powershell.md)|Creates an Application Gateway Web Application Firewall v2 with custom rules. |
web-application-firewall Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/quick-create-bicep.md
Last updated 06/22/2022 -+ # Quickstart: Create an Azure WAF v2 on Application Gateway using Bicep
web-application-firewall Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/quick-create-template.md
Last updated 09/20/2022 -+ # Customer intent: As a cloud administrator, I want to quickly deploy a Web Application Firewall v2 on Azure Application Gateway for production environments or to evaluate WAF v2 functionality.
web-application-firewall Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/resource-manager-template-samples.md
description: Azure Resource Manager templates for Azure Web Application Firewall
+ Last updated 09/28/2019 - # Azure Resource Manager templates for Azure Application Gateway and Web Application Firewall
The following table includes links to Azure Resource Manager templates for Azure
| Template | Description | | -- | -- |
-| [Application Gateway v2 with Web Application Firewall](https://azure.microsoft.com/resources/templates/ag-docs-wafv2/) | Creates an Application Gateway v2 with Web Application Firewall v2.|
+| [Application Gateway v2 with Web Application Firewall](https://azure.microsoft.com/resources/templates/ag-docs-wafv2/) | Creates an Application Gateway v2 with Web Application Firewall v2.|