Updates from: 02/24/2023 02:13:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Previously updated : 02/22/2023 Last updated : 02/23/2023
Use the general guidelines when implementing a SCIM endpoint to ensure compatibi
### /Schemas (Schema discovery): * [Sample request/response](#schema-discovery)
-* Schema discovery isn't currently supported on the custom non-gallery SCIM application, but it's being used on certain gallery applications. Going forward, schema discovery will be used as the sole method to add more attributes to the schema of an existing gallery SCIM application.
+* Schema discovery is being used on certain gallery applications. Schema discovery is the sole method to add more attributes to the schema of an existing gallery SCIM application. Schema discovery isn't currently supported on custom non-gallery SCIM applications.
* If a value isn't present, don't send null values. * Property values should be camel cased (for example, readWrite). * Must return a list response.
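For illustration, here's a minimal PowerShell sketch of a schema discovery call; the base URL and token are placeholders, and the comments restate the list-response, camel-case, and null-handling guidance above rather than any specific product's behavior.

```powershell
# Placeholders: substitute your SCIM endpoint's base URL and a valid bearer token.
$baseUrl = "https://scim.contoso.example/scim"
$headers = @{ Authorization = "Bearer $env:SCIM_TOKEN" }

# GET /Schemas must return a SCIM list response (schemas, totalResults, Resources[]).
# Property values are camel cased (for example, readWrite), and attributes without a
# value are omitted from the response rather than sent as null.
$schemas = Invoke-RestMethod -Method Get -Uri "$baseUrl/Schemas" -Headers $headers
$schemas.Resources | Select-Object -ExpandProperty id
```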
The SCIM spec doesn't define a SCIM-specific scheme for authentication and autho
|--|--|--|--| |Username and password (not recommended or supported by Azure AD)|Easy to implement|Insecure - [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984)|Not supported for new gallery or non-gallery apps.| |Long-lived bearer token|Long-lived tokens don't require a user to be present. They're easy for admins to use when setting up provisioning.|Long-lived tokens can be hard to share with an admin without using insecure methods such as email. |Supported for gallery and non-gallery apps. |
-|OAuth authorization code grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid, and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.|
-|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be automated, and new tokens can be silently requested without user interaction. ||Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth client credentials grant on non-gallery is in our backlog.|
+|OAuth authorization code grant|Access tokens have a shorter life than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid, and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.|
+|OAuth client credentials grant|Access tokens have a shorter life than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be automated, and new tokens can be silently requested without user interaction. ||Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth client credentials grant on non-gallery is in our backlog.|
> [!NOTE] > It's not recommended to leave the token field blank in the Azure AD provisioning configuration custom app UI. The token generated is primarily available for testing purposes.
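As a sketch of the client credentials grant described above, the following requests a token from the Azure AD v2.0 token endpoint; the tenant, client ID, secret, and scope values are placeholders, and the authority and scope your SCIM service expects may differ.

```powershell
# Placeholder app registration values; store the secret securely (here, an environment variable).
$tenantId     = "contoso.onmicrosoft.com"
$clientId     = "00001111-aaaa-2222-bbbb-3333cccc4444"
$clientSecret = $env:CLIENT_SECRET
$scope        = "https://scim.contoso.example/.default"   # assumed scope exposed by the SCIM API

$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    scope         = $scope
}

# OAuth 2.0 client credentials grant: no user needs to be present, and a new token
# can be requested silently whenever the current one expires.
$token = Invoke-RestMethod -Method Post -Body $body `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$token.access_token
```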
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
There are multiple scenarios that organizations can now enable using filter for
Filter for devices is an option when creating a Conditional Access policy in the Azure portal or using the Microsoft Graph API.
-> [!IMPORTANT]
-> Device state and filter for devices cannot be used together in Conditional Access policy.
- The following steps will help create two Conditional Access policies to support the first scenario under [Common scenarios](#common-scenarios). Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant.
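As a sketch of the Microsoft Graph API option mentioned above, the fragment below shows how a filter for devices condition might look in a Conditional Access policy body; the rule, display name, and report-only state are illustrative assumptions, not a recommended configuration.

```powershell
# Requires the Microsoft Graph PowerShell SDK and consent to Policy.ReadWrite.ConditionalAccess.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Illustrative policy: block access except from devices matched by the device filter rule.
$policy = @{
    displayName   = "Example - filter for devices"
    state         = "enabledForReportingButNotEnforced"     # start in report-only mode
    conditions    = @{
        users          = @{ includeUsers = @("All") }
        applications   = @{ includeApplications = @("All") }
        clientAppTypes = @("all")
        devices        = @{
            deviceFilter = @{
                mode = "exclude"                             # exclude matching devices from the block
                rule = 'device.isCompliant -eq True'
            }
        }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" -Body $policy
```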
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md
When user risk is detected, administrators can employ the user risk policy condi
When a user is prompted to change a password, they'll first be required to complete multifactor authentication. Make sure all users have registered for multifactor authentication, so they're prepared in case risk is detected for their account. > [!WARNING]
-> Users must have previously registered for self-service password reset before triggering the user risk policy.
+> Users must have previously registered for multifactor authentication before triggering the user risk policy.
The following restrictions apply when you configure a policy by using the password change control:
active-directory Concept Conditional Access Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policies.md
The software the user is employing to access the cloud app. For example, 'Browse
The behavior of the client apps condition was updated in August 2020. If you have existing Conditional Access policies, they'll remain unchanged. However, if you select an existing policy, the configure toggle has been removed and the client apps the policy applies to are selected.
-#### Device state
-
-This control is used to exclude devices that are hybrid Azure AD joined, or marked a compliant in Intune. This exclusion can be done to block unmanaged devices.
- #### Filter for devices This control allows targeting specific devices based on their attributes in a policy.
active-directory Howto Conditional Access Session Lifetime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md
Sign-in frequency previously applied only to the first factor authentication
### User sign-in frequency and device identities
-On Azure AD joined and hybrid Azure AD joined devices, unlocking the device, or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for PRT compared with the current timestamp must be within the time allotted in SIF policy for PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](/active-directory/devices/concept-azure-ad-register), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](../develop/scenario-desktop-acquire-token-wam.md) plugin can refresh a PRT during native application authentication using WAM.
+On Azure AD joined and hybrid Azure AD joined devices, unlocking the device, or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for PRT compared with the current timestamp must be within the time allotted in SIF policy for PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](/azure/active-directory/devices/concept-azure-ad-register), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](../develop/scenario-desktop-acquire-token-wam.md) plugin can refresh a PRT during native application authentication using WAM.
Note: The timestamp captured from user log-in is not necessarily the same as the last recorded timestamp of PRT refresh because of the 4-hour refresh cycle. The case when it is the same is when a PRT has expired and a user log-in refreshes it for 4 hours. In the following examples, assume SIF policy is set to 1 hour and PRT is refreshed at 00:00.
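To make the comparison concrete, here's a small PowerShell illustration of the check described above, using the assumed 1-hour SIF policy and a PRT refreshed at 00:00; it models only the timestamp arithmetic, not actual token service behavior.

```powershell
# Illustration only: Azure AD performs this check itself; this just models the arithmetic.
$sifWindow      = New-TimeSpan -Hours 1                     # SIF policy set to 1 hour
$prtLastRefresh = Get-Date -Hour 0 -Minute 0 -Second 0      # PRT refreshed at 00:00
$signInTime     = Get-Date -Hour 0 -Minute 30 -Second 0     # user opens an app at 00:30

# The PRT satisfies SIF when (sign-in time - last PRT refresh) is within the SIF window.
$satisfiesSif = ($signInTime - $prtLastRefresh) -le $sifWindow
"PRT satisfies SIF: $satisfiesSif"   # True at 00:30; at 01:30 it would be False until the PRT refreshes
```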
We factor for five minutes of clock skew, so that we don't prompt users more o
## Next steps
-* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
+* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md).
active-directory Resilience Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md
When resilience defaults are disabled, the Backup Authentication Service won't u
## Testing resilience defaults
-It isn't possible to conduct a dry run using the Backup Authentication Service or simulate the result of a policy with resilience defaults enabled or disabled at this time. Azure AD will conduct monthly exercises using the Backup Authentication Service. The sign-in logs will display if the Backup Authentication Service was used to issue the access token.
+It isn't possible to conduct a dry run using the Backup Authentication Service or simulate the result of a policy with resilience defaults enabled or disabled at this time. Azure AD will conduct monthly exercises using the Backup Authentication Service. The sign-in logs will display if the Backup Authentication Service was used to issue the access token. In the **Azure portal**, on the **Monitoring** > **Sign-in logs** blade, you can add the filter "Token issuer type == Azure AD Backup Auth" to display the logs processed by the Azure AD Backup Authentication service.
## Configuring resilience defaults
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
These differences make workload identities harder to manage and put them at high
> [!IMPORTANT] > Workload Identities Premium licenses are required to create or modify Conditional Access policies scoped to service principals.
-> In directories without appropriate licenses, Conditional Access policies created prior to the release of Workload Identities Premium will be available for deletion only.
+> In directories without appropriate licenses, existing Conditional Access policies for workload identities will continue to function, but can't be modified. For more information, see [Microsoft Entra Workload Identities](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-identities#office-StandaloneSKU-k3hubfz).
> [!NOTE] > Policy can be applied to single tenant service principals that have been registered in your tenant. Third party SaaS and multi-tenanted apps are out of scope. Managed identities are not covered by policy.
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Now that you've created the VM, you need to configure an Azure RBAC policy to de
To allow a user to log in to the VM over RDP, you must assign the Virtual Machine Administrator Login or Virtual Machine User Login role to the resource group that contains the VM and its associated virtual network, network interface, public IP address, or load balancer resources.
+> [!NOTE]
+> Manually elevating a user to become a local administrator on the VM, by adding the user to the local administrators group or by running the `net localgroup administrators /add "AzureAD\UserUpn"` command, is not supported. You need to use the Azure roles above to authorize VM login.
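For example, a hedged Az PowerShell sketch that assigns the sign-in role at the resource group scope; the user, role, and resource group names are placeholders.

```powershell
# Requires the Az PowerShell module; the user, role, and resource group names are placeholders.
Connect-AzAccount

# Grant admin-level sign-in to VMs in the resource group; use "Virtual Machine User Login"
# for standard user sign-in instead.
New-AzRoleAssignment `
    -SignInName "user@contoso.com" `
    -RoleDefinitionName "Virtual Machine Administrator Login" `
    -ResourceGroupName "myResourceGroup"
```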
+ An Azure user who has the Owner or Contributor role assigned for a VM does not automatically have privileges to log in to the VM over RDP. The reason is to provide audited separation between the set of people who control virtual machines and the set of people who can access virtual machines. There are two ways to configure role assignments for a VM:
active-directory B2b Government National Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-government-national-clouds.md
Previously updated : 05/17/2022 Last updated : 02/14/2023
# Azure AD B2B in government and national clouds
-Microsoft Azure [national clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration isn't enabled by default across national cloud boundaries, but you can use Microsoft cloud settings (preview) to establish mutual B2B collaboration between the following Microsoft Azure clouds:
+Microsoft Azure [national clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration isn't enabled by default across national cloud boundaries, but you can use Microsoft cloud settings to establish mutual B2B collaboration between the following Microsoft Azure clouds:
- Microsoft Azure global cloud and Microsoft Azure Government - Microsoft Azure global cloud and Microsoft Azure China 21Vianet ## B2B collaboration across Microsoft clouds
-To set up B2B collaboration between tenants in different clouds, both tenants need to configure their Microsoft cloud settings to enable collaboration with the other cloud. Then each tenant must configure inbound and outbound cross-tenant access with the tenant in the other cloud. For details, see [Microsoft cloud settings (preview)](cross-cloud-settings.md).
+To set up B2B collaboration between tenants in different clouds, both tenants need to configure their Microsoft cloud settings to enable collaboration with the other cloud. Then each tenant must configure inbound and outbound cross-tenant access with the tenant in the other cloud. For details, see [Microsoft cloud settings](cross-cloud-settings.md).
## B2B collaboration within the Microsoft Azure Government cloud
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
Previously updated : 06/30/2022 Last updated : 02/14/2023
-# Configure Microsoft cloud settings for B2B collaboration (Preview)
-
-> [!NOTE]
-> Microsoft cloud settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Configure Microsoft cloud settings for B2B collaboration
When Azure AD organizations in separate Microsoft Azure clouds need to collaborate, they can use Microsoft cloud settings to enable Azure AD B2B collaboration. B2B collaboration is available between the following global and sovereign Microsoft Azure clouds:
In your Microsoft cloud settings, enable the Microsoft Azure cloud you want to c
1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service. 1. Select **External Identities**, and then select **Cross-tenant access settings**.
-1. Select **Microsoft cloud settings (Preview)**.
+1. Select **Microsoft cloud settings**.
1. Select the checkboxes next to the external Microsoft Azure clouds you want to enable. ![Screenshot showing Microsoft cloud settings.](media/cross-cloud-settings/cross-cloud-settings.png)
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
You can configure organization-specific settings by adding an organization and m
### Automatic redemption setting > [!IMPORTANT]
-> Automatic redemption is currently in PREVIEW.
+> Automatic redemption is currently in preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. [!INCLUDE [automatic-redemption-include](../includes/automatic-redemption-include.md)]
For more information, see [Configure cross-tenant synchronization](../multi-tena
### Cross-tenant synchronization setting > [!IMPORTANT]
-> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW.
+> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. [!INCLUDE [cross-tenant-synchronization-include](../includes/cross-tenant-synchronization-include.md)]
To configure this setting using Microsoft Graph, see the [Update crossTenantIden
## Microsoft cloud settings
-> [!NOTE]
-> Microsoft cloud settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds: - Microsoft Azure commercial cloud and Microsoft Azure Government
To set up B2B collaboration, both organizations configure their Microsoft cloud
> [!NOTE] > B2B direct connect is not supported for collaboration with Azure AD tenants in a different Microsoft cloud.
-For configuration steps, see [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md).
+For configuration steps, see [Configure Microsoft cloud settings for B2B collaboration](cross-cloud-settings.md).
### Default settings in cross-cloud scenarios
To collaborate with a partner tenant in a different Microsoft Azure cloud, both
Several tools are available to help you identify the access your users and partners need before you set inbound and outbound access settings. To ensure you don't remove access that your users and partners need, you should examine current sign-in behavior. Taking this preliminary step will help prevent loss of desired access for your end users and partner users. However, in some cases these logs are only retained for 30 days, so we strongly recommend you speak with your business stakeholders to ensure required access isn't lost.
-> [!NOTE]
-> During the preview of Microsoft cloud settings, sign-in events for cross-cloud scenarios will be reported in the resource tenant, but not in the home tenant.
- ### Cross-tenant sign-in activity PowerShell script To review user sign-in activity associated with external tenants, use the [cross-tenant user sign-in activity](https://aka.ms/cross-tenant-signins-ps) PowerShell script. For example, to view all available sign-in events for inbound activity (external users accessing resources in the local tenant) and outbound activity (local users accessing resources in an external tenant), run the following command:
active-directory External Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md
Previously updated : 06/30/2022 Last updated : 02/14/2023
For more information, see [Cross-tenant access in Azure AD External Identities](
Azure AD has a new feature for multi-tenant organizations called cross-tenant synchronization (preview), which allows for a seamless collaboration experience across Azure AD tenants. Cross-tenant synchronization settings are configured under the **Organization-specific access settings**. To learn more about multi-tenant organizations and cross-tenant synchronization see the [Multi-tenant organizations documentation](../multi-tenant-organizations/index.yml).
-### Microsoft cloud settings for B2B collaboration (preview)
+### Microsoft cloud settings for B2B collaboration
Microsoft Azure cloud services are available in separate national clouds, which are physically isolated instances of Azure. Increasingly, organizations are finding the need to collaborate with organizations and users across global cloud and national cloud boundaries. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following Microsoft Azure clouds:
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
As of November 18, 2019, guest users in your directory (defined as user accounts
Within the Azure US Government cloud, B2B collaboration is enabled between tenants that are both within Azure US Government cloud and that both support B2B collaboration. If you invite a user in a tenant that doesn't yet support B2B collaboration, you'll get an error. For details and limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
-If you need to collaborate with an Azure AD organization that's outside of the Azure US Government cloud, you can use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to enable B2B collaboration.
+If you need to collaborate with an Azure AD organization that's outside of the Azure US Government cloud, you can use [Microsoft cloud settings](cross-cloud-settings.md) to enable B2B collaboration.
## Invitation is blocked due to cross-tenant access policies
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Previously updated : 08/30/2022 Last updated : 02/14/2023
B2B collaboration is enabled by default, but comprehensive admin settings let yo
- Use [external collaboration settings](external-collaboration-settings-configure.md) to define who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory. -- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](../../azure-government/index.yml) or [Microsoft Azure China 21Vianet](/azure/china).
+- Use [Microsoft cloud settings](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](../../azure-government/index.yml) or [Microsoft Azure China 21Vianet](/azure/china).
## Easily invite guest users from the Azure AD portal
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
Title: Convert local guests into Azure AD B2B guest accounts
-description: Learn how to convert local guests into Azure AD B2B guest accounts
+ Title: Convert local guest accounts to Azure AD B2B guest accounts
+description: Learn to convert local guests into Azure AD B2B guest accounts by identifying apps and local guest accounts, migration, and more.
Previously updated : 11/03/2022 Last updated : 02/22/2023
-# Convert local guests into Azure Active Directory B2B guest accounts
+# Convert local guest accounts to Azure Active Directory B2B guest accounts
-Azure Active Directory (Azure AD B2B) allows external users to collaborate using their own identities. However, it isn't uncommon for organizations to issue local usernames and passwords to external users. This approach isn't recommended as the bring-your-own-identity (BYOI) capabilities provided
-by Azure AD B2B to provide better security, lower cost, and reduce
-complexity when compared to local account creation. Learn more
-[here.](./secure-external-access-resources.md)
+With Azure Active Directory B2B (Azure AD B2B), external users collaborate using their own identities. Although organizations can issue local usernames and passwords to external users, this approach isn't recommended. Azure AD B2B offers improved security, lower cost, and less complexity compared to creating local accounts. If your organization issues local credentials that external users manage, you can use Azure AD B2B instead. Use the guidance in this document to make the transition.
-If your organization currently issues local credentials that external users have to manage and would like to migrate to using Azure AD B2B instead, this document provides a guide to make the transition as seamlessly as possible.
+Learn more: [Plan an Azure AD B2B collaboration deployment](secure-external-access-resources.md)
## Identify external-facing applications
-Before migrating local accounts to Azure AD B2B, admins should understand what applications and workloads these external users need to access. For example, if external users need access to an application that is hosted on-premises, admins will need to validate that the application is integrated with Azure AD and that a provisioning process is implemented to provision the user from Azure AD to the application.
-The existence and use of on-premises applications could be a reason why local accounts are created in the first place. Learn more about
-[provisioning B2B guests to on-premises
-applications.](../external-identities/hybrid-cloud-to-on-premises.md)
+Before migrating local accounts to Azure AD B2B, confirm the applications and workloads external users can access. For example, for applications hosted on-premises, validate that the application is integrated with Azure AD. On-premises applications can be a reason why local accounts were created in the first place.
-All external-facing applications should have single-sign on (SSO) and provisioning integrated with Azure AD for the best end user experience.
+Learn more: [Grant B2B users in Azure AD access to your on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md)
+
+We recommend that external-facing applications have single-sign on (SSO) and provisioning integrated with Azure AD for the best end user experience.
## Identify local guest accounts
-Admins will need to identify which accounts should be migrated to Azure AD B2B. External identities in Active Directory should be easily identifiable, which can be done with an attribute-value pair. For example, making ExtensionAttribute15 = `External` for all external users. If these users are being provisioned via Azure AD Connect or Cloud Sync, admins can optionally configure these synced external users
-to have the `UserType` attributes set to `Guest`. If these users are being
-provisioned as cloud-only accounts, admins can directly modify the
-users' attributes. What is most important is being able to identify the
-users who you want to convert to B2B.
+Identify the accounts to be migrated to Azure AD B2B. External identities in Active Directory are identifiable with an attribute-value pair, for example, ExtensionAttribute15 = `External` for external users. If these users are set up with Azure AD Connect or Cloud Sync, configure the synced external users to have the `UserType` attribute set to `Guest`. If the users are set up as cloud-only accounts, you can modify the user attributes directly. Most importantly, identify the users you want to convert to B2B.
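As an illustrative sketch (the Microsoft Graph PowerShell SDK and the attribute-value pair above are assumptions), you could enumerate the tagged accounts and, for cloud-only users, set UserType to Guest:

```powershell
# Assumes the Microsoft Graph PowerShell SDK and the ExtensionAttribute15 = "External" convention above.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Find local accounts tagged as external.
$externalUsers = Get-MgUser -All -Property Id,UserPrincipalName,UserType,OnPremisesExtensionAttributes |
    Where-Object { $_.OnPremisesExtensionAttributes.ExtensionAttribute15 -eq "External" }

# For cloud-only accounts, set UserType to Guest directly; synced accounts are
# governed by Azure AD Connect or Cloud Sync configuration instead.
foreach ($user in $externalUsers) {
    Update-MgUser -UserId $user.Id -UserType "Guest"
}
```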
## Map local guest accounts to external identities
-Once you've identified which external user accounts you want to
-convert to Azure AD B2B, you need to identify the BYOI identities or external emails for each user. For example, admins will need to identify that the local account (v-Jeff@Contoso.com) is a user whose home identity/email address is Jeff@Fabrikam.com. How to identify the home identities is up to the organization, but some examples include:
--- Asking the external user's sponsor to provide the information.--- Asking the external user to provide the information.
+For each external user, identify the home (bring-your-own) identity or external email address. For example, confirm that the local account (v-lakshmi@contoso.com) belongs to a user whose home identity and email address is lakshmi@fabrikam.com. To identify home identities:
-- Referring to an internal database if this information is already known and stored by the organization.
+- The external user's sponsor provides the information
+- The external user provides the information
+- Refer to an internal database, if the information is known and stored
-Once the mapping of each external local account to the BYOI identity is done, admins will need to add the external identity/email to the user.mail attribute on each local account.
+After mapping external local accounts to identities, add external identities or email to the user.mail attribute on local accounts.
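Continuing the example mapping above (v-lakshmi@contoso.com to lakshmi@fabrikam.com), a hedged Microsoft Graph PowerShell sketch to populate the mail attribute:

```powershell
# Assumes the Microsoft Graph PowerShell SDK; the addresses are the example mapping above.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# Populate the local account's mail attribute with the external (home) email address.
Update-MgUser -UserId "v-lakshmi@contoso.com" -Mail "lakshmi@fabrikam.com"
```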
## End user communications
-External users should be notified that the migration will be taking place and when it will happen. Ensure you communicate the expectation that external users will stop using their existing password and post-migration will authenticate with their own home/corporate credentials going forward. Communications can include email campaigns, posters, and announcements.
+Notify external users about migration timing. Communicate expectations, such as when external users must stop using their current password and start authenticating with their home or corporate credentials. Communications can include email campaigns and announcements.
## Migrate local guest accounts to Azure AD B2B
-Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](../external-identities/invite-internal-users.md)
-This can be done in the UX or programmatically via PowerShell or the Microsoft Graph API. Once complete, the users will no longer
-authenticate with their local password, but will instead authenticate with their home identity/email that was populated in the user.mail attribute. You've successfully migrated to Azure AD B2B.
+After local accounts have user.mail attributes populated with the external identity and email, convert local accounts to Azure AD B2B by inviting the local account. You can use PowerShell or the Microsoft Graph API.
-## Post-migration considerations
+Learn more: [Invite internal users to B2B collaboration](../external-identities/invite-internal-users.md)
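A hedged sketch of that invitation step against the Microsoft Graph invitations endpoint; the redirect URL is a placeholder, and passing the existing account as invitedUser (the approach described in the linked article) is assumed here.

```powershell
# Assumes the Microsoft Graph PowerShell SDK and that user.mail is already populated.
Connect-MgGraph -Scopes "User.Invite.All", "User.ReadWrite.All"

$user = Get-MgUser -UserId "v-lakshmi@contoso.com" -Property Id,Mail

# Send the invitation to the external email and reference the existing local account,
# so the account is converted instead of a new guest object being created.
$invitation = @{
    invitedUserEmailAddress = $user.Mail
    inviteRedirectUrl       = "https://myapps.microsoft.com"   # placeholder redirect URL
    sendInvitationMessage   = $false
    invitedUser             = @{ id = $user.Id }
}

Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/invitations" -Body $invitation
```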
-If local accounts for external users were being synced from on-premises, admins should take steps to reduce their on-premises footprint and use cloud-native B2B guest accounts moving forward. Some possible actions can include:
+## Post-migration considerations
-- Transition existing local accounts for external users to Azure AD B2B and stop creating local accounts. Post-migration, admins should invite external users natively in Azure AD.
+If external user local accounts were synced from on-premises, reduce their on-premises footprint and use B2B guest accounts. You can:
-- Randomize the passwords of existing local accounts for external users to ensure they can't authenticate locally to on-premises resources. This will increase security by ensuring that authentication and user lifecycle is tied to the external user's home identity.
+- Transition external user local accounts to Azure AD B2B and stop creating local accounts
+ - Invite external users in Azure AD
+- Randomize external users' local-account passwords to prevent authentication to on-premises resources
+ - This action ensures authentication and user lifecycle is connected to the external user home identity
## Next steps
See the following articles on securing external access to resources. We recommen
1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) 1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md) 1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
-1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here)
+1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here)
active-directory 7 Secure Access Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md
Title: Manage external access with Azure Active Directory Conditional Access
-description: How to use Azure Active Directory Conditional Access policies to secure external access to resources.
+ Title: Manage external access to resources with Conditional Access
+description: Learn to use Conditional Access policies to secure external access to resources.
Previously updated : 08/26/2022 Last updated : 02/22/2023
-# Manage external access with Conditional Access policies
-[Conditional Access](../conditional-access/overview.md) is the tool Azure AD uses to bring together signals, enforce policies, and determine whether a user should be allowed access to resources. For detailed information on how to create and use Conditional Access policies (Conditional Access policies), see [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md).
+# Manage external access to resources with Conditional Access policies
-![Diagram of Conditional Access signals and decisions](media/secure-external-access//7-conditional-access-signals.png)
+Conditional Access interprets signals, enforces policies, and determines if a user is granted access to resources. In this article, learn about applying Conditional Access policies to external users. The article assumes you might not have access to entitlement management, a feature you can use with Conditional Access.
-This article discusses applying Conditional Access policies to external users and assumes you don't have access to [Entitlement Management](../governance/entitlement-management-overview.md) functionality. Conditional Access policies can be and are used alongside Entitlement Management.
+Learn more:
-Earlier in this document set, you [created a security plan](3-secure-access-plan.md) that outlined:
+* [What is Conditional Access?](../conditional-access/overview.md)
+* [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
+* [What is entitlement management?](../governance/entitlement-management-overview.md)
-* Applications and resources have the same security requirements and can be grouped for access.
-* Sign-in requirements for external users.
+The following diagram illustrates signals to Conditional Access that trigger access processes.
-You'll use that plan to create your Conditional Access policies for external access.
+ ![Diagram of Conditional Access signal input and resulting access processes.](media/secure-external-access//7-conditional-access-signals.png)
+
+## Align a security plan with Conditional Access policies
+
+The third article in this set of 10 provides guidance on creating a security plan. Use that plan to help create Conditional Access policies for external access. The security plan includes:
+
+* Grouped applications and resources for simplified access
+* Sign-in requirements for external users
> [!IMPORTANT]
-> Create several internal and external user test accounts so that you can test the policies you create before applying them.
+> Create internal and external user test accounts to test policies before applying them.
+
+See article three: [Create a security plan for external access to resources](3-secure-access-plan.md).
## Conditional Access policies for external access
-The following are best practices related to governing external access with Conditional Access policies.
+The following sections are best practices for governing external access with Conditional Access policies.
+
+### Entitlement management or groups
+
+If you can't use connected organizations in entitlement management, create an Azure AD security group or Microsoft 365 group for partner organizations. Assign users from that partner to the group. You can use the groups in Conditional Access policies.
+
+Learn more:
+
+* [What is entitlement management?](../governance/entitlement-management-overview.md)
+* [Manage Azure Active Directory groups and group membership](how-to-manage-groups.md)
+* [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups?view=o365-worldwide&preserve-view=true)
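If you take the security-group route described above, a minimal sketch follows (the group name, mail nickname, and guest user principal name are placeholders):

```powershell
# Assumes the Microsoft Graph PowerShell SDK; names and the guest UPN are placeholders.
Connect-MgGraph -Scopes "Group.ReadWrite.All", "User.Read.All"

# Create a security group for one partner organization.
$group = New-MgGroup -DisplayName "ExternalAccess-Fabrikam" `
    -MailEnabled:$false -MailNickname "externalaccessfabrikam" -SecurityEnabled

# Add a guest user from that partner to the group.
$guest = Get-MgUser -UserId "lakshmi_fabrikam.com#EXT#@contoso.onmicrosoft.com"
New-MgGroupMember -GroupId $group.Id -DirectoryObjectId $guest.Id
```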
+
+### Conditional Access policy creation
+
+Create as few Conditional Access policies as possible. For applications that have the same access requirements, add them to the same policy.
+
+Conditional Access policies apply to a maximum of 250 applications. If more than 250 applications have the same access requirement, create duplicate policies. For instance, Policy A applies to apps 1-250, Policy B applies to apps 251-500, etc.
+
+### Naming convention
+
+Use a naming convention that clarifies policy purpose. External access examples are:
+
+* ExternalAccess_actiontaken_AppGroup
+* ExternalAccess_Block_FinanceApps
-* If you can't use connected organizations in Entitlement Management, create an Azure AD security group or Microsoft 365 group for each partner organization you work with. Assign all users from that partner to the group. You may then use those groups in Conditional Access policies.
+## Block external users from resources
-* Create as few Conditional Access policies as possible. For applications that have the same access needs, add them all to the same policy.
+You can block external users from accessing resources with Conditional Access policies.
- > [!NOTE]
- > Conditional Access policies can apply to a maximum of 250 applications. If more than 250 Apps have the same access needs, create duplicate policies. Policy A will apply to apps 1-250, policy B will apply to apps 251-500, etc.
+1. Sign in to the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+3. Select **New policy**.
+4. Enter a policy name.
+5. Under **Assignments**, select **Users or workload identities**.
+6. Under **Include**, select **All guests and external users**.
+7. Under **Exclude**, select **Users and groups**.
+8. Select emergency access accounts.
+9. Select **Done**.
+10. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+11. Under **Exclude**, select applications you want to exclude.
+12. Under **Access controls** > **Grant**, select **Block access**.
+13. Select **Select**.
+14. Set **Enable policy** to **Report-only**.
+15. Select **Create**.
-* Clearly name policies specific to external access with a naming convention. One naming convention is *ExternalAccess_actiontaken_AppGroup*. For example a policy for external access that blocks access to finance apps, called ExternalAccess_Block_FinanceApps.
+> [!NOTE]
+> You can confirm settings in **report-only** mode. See Configure a Conditional Access policy in report-only mode in [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
-## Block all external users from resources
+Learn more: [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md)
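The same block policy can also be expressed through the Microsoft Graph conditional access API; in this hedged sketch, the display name follows the naming convention above, and the excluded IDs are placeholders for your emergency access accounts and exempted apps.

```powershell
# Assumes the Microsoft Graph PowerShell SDK; the excluded IDs are placeholders.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName   = "ExternalAccess_Block_AllApps"
    state         = "enabledForReportingButNotEnforced"     # report-only, matching the steps above
    conditions    = @{
        users = @{
            includeUsers = @("GuestsOrExternalUsers")
            excludeUsers = @("<emergency-access-account-object-id>")
        }
        applications = @{
            includeApplications = @("All")
            excludeApplications = @("<application-id-to-exclude>")
        }
        clientAppTypes = @("all")
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" -Body $policy
```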
-You can block external users from accessing specific sets of resources with Conditional Access policies. Once you've determined the set of resources to which you want to block access, create a policy.
+### Allow external access to specific external users
-To create a policy that blocks access for external users to a set of applications:
+There are scenarios when it's necessary to allow access for a small, specific group.
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
-1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_FinanceApps.
-1. Under **Assignments**, select **Users or workload identities**.
- 1. Under **Include**, select **All guests and external users**.
- 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md).
- 1. Select **Done**.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
- 1. Under **Exclude**, select any applications that shouldn't be blocked.
-1. Under **Access controls** > **Grant**, select **Block access**, and choose **Select**.
-1. Confirm your settings and set **Enable policy** to **Report-only**.
-1. Select **Create** to create to enable your policy.
+Before you begin, we recommend you create a security group that contains the external users who access resources. See [Quickstart: Create a group with members and view all groups and members in Azure AD](active-directory-groups-view-azure-portal.md).
-After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+1. Sign in to the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+3. Select **New policy**.
+4. Enter a policy name.
+5. Under **Assignments**, select **Users or workload identities**.
+6. Under **Include**, select **All guests and external users**.
+7. Under **Exclude**, select **Users and groups**.
+8. Select emergency access accounts.
+9. Select the external users security group.
+10. Select **Done**.
+11. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
+12. Under **Exclude**, select applications you want to exclude.
+13. Under **Access controls** > **Grant**, select **Block access**.
+14. Select **Select**.
+15. Select **Create**.
-### Block external access to all except specific external users
+> [!NOTE]
+> You can confirm settings in **report-only** mode. See Configure a Conditional Access policy in report-only mode in [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md).
-There may be times you want to block external users except a specific group. For example, you may want to block all external users except those working for the finance team from the finance applications. To do this [Create a security group](active-directory-groups-create-azure-portal.md) to contain the external users who should access the finance applications:
+Learn more: [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md)
-1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
-1. Select **New policy**.
-1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_AllButFinance.
-1. Under **Assignments**, select **Users or workload identities**.
- 1. Under **Include**, select **All guests and external users**.
- 1. Under **Exclude**, select **Users and groups**,
- 1. Choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md).
- 1. Choose the security group of external users you want to exclude from being blocked from specific applications.
- 1. Select **Done**.
-1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
- 1. Under **Exclude**, select the finance applications that shouldn't be blocked.
-1. Under **Access controls** > **Grant**, select **Block access**, and choose **Select**.
-1. Confirm your settings and set **Enable policy** to **Report-only**.
-1. Select **Create** to create to enable your policy.
+### Service provider access
-After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
+Conditional Access policies for external users might interfere with service provider access, for example, granular delegated admin privileges (GDAP).
-### External partner access
+Learn more: [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction)
-Conditional Access policies that target external users may interfere with service provider access, for example granular delegated admin privileges [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction).
+## Conditional Access templates
-## Implement Conditional Access
+Conditional Access templates are a convenient method to deploy new policies aligned with Microsoft recommendations. These templates provide protection aligned with commonly used policies across various customer types and locations.
-Many common Conditional Access policies are documented. See the article [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md) for other common policies you may want to adapt for external users.
+Learn more: [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md)
## Next steps
active-directory Concept Workload Identity Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md
These differences make workload identities harder to manage and put them at high
> [!IMPORTANT] > Detections are visible only to [Workload Identities Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-identities#office-StandaloneSKU-k3hubfz) customers. Customers without Workload Identities Premium licenses still receive all detections but the reporting of details is limited.
+> [!NOTE]
+> Identity Protection detects risk on single tenant, third party SaaS, and multi-tenant apps. Managed Identities are not currently in scope.
+ ## Prerequisites To make use of workload identity risk, including the new **Risky workload identities** blade and the **Workload identity detections** tab in the **Risk detections** blade in the portal, you must have the following.
active-directory Howto Identity Protection Configure Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-notifications.md
Azure AD Identity Protection sends two types of automated notification emails to
This article provides you with an overview of both notification emails.
-We don't support sending emails to users in group-assigned roles.
+ > [!NOTE]
+ > **We don't support sending emails to users in group-assigned roles.**
## Users at risk detected email
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Configured trusted [network locations](../conditional-access/location-condition.
### Risk remediation
-Organizations can choose to block access when risk is detected. Blocking sometimes stops legitimate users from doing what they need to. A better solution is to allow self-remediation using Azure AD multifactor authentication (MFA) and secure self-service password reset (SSPR).
+Organizations can choose to block access when risk is detected. Blocking sometimes stops legitimate users from doing what they need to. A better solution is to allow self-remediation using Azure AD multifactor authentication (MFA) and secure password change.
> [!WARNING]
-> Users must register for Azure AD MFA and SSPR before they face a situation requiring remediation. Users not registered are blocked and require administrator intervention.
+> Users must register for Azure AD MFA before they face a situation requiring remediation. For hybrid users that are synced from on-premises to the cloud, password writeback must be enabled. Users not registered are blocked and require administrator intervention.
>
-> Password change (I know my password and want to change it to something new) outside of the risky user policy remediation flow does not meet the requirement for secure password reset.
+> Password change (I know my password and want to change it to something new) outside of the risky user policy remediation flow does not meet the requirement for secure password change.
### Microsoft's recommendation Microsoft recommends the below risk policy configurations to protect your organization: - User risk policy
- - Require a secure password reset when user risk level is **High**. Azure AD MFA is required before the user can create a new password with SSPR to remediate their risk.
+ - Require a secure password change when user risk level is **High**. Azure AD MFA is required before the user can create a new password with password writeback to remediate their risk.
- Sign-in risk policy - Require Azure AD MFA when sign-in risk level is **Medium** or **High**, allowing users to prove it's them by using one of their registered authentication methods, remediating the sign-in risk.
-Requiring access control when risk level is low will introduce more user interrupts. Choosing to block access rather than allowing self-remediation options, like secure password reset and multifactor authentication, will impact your users and administrators. Weigh these choices when configuring your policies.
+Requiring access control when risk level is low will introduce more user interrupts. Choosing to block access rather than allowing self-remediation options, like secure password change and multifactor authentication, will impact your users and administrators. Weigh these choices when configuring your policies.
## Exclusions
active-directory Application Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-properties.md
The homepage URL can't be edited within enterprise applications. The homepage UR
This is the application logo that users see on the My Apps portal and the Office 365 application launcher. Administrators also see the logo in the Azure AD gallery.
-Custom logos must be exactly 215x215 pixels in size and be in the PNG format. You should use a solid color background with no transparency in your application logo. The central image dimensions should be 94x94 pixels and the logo file size can't be over 100 KB.
+Custom logos must be exactly 215x215 pixels in size and be in the PNG format. You should use a solid color background with no transparency in your application logo. The logo file size can't be over 100 KB.
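To sanity-check a logo file against these requirements before uploading it, a small local sketch (Windows PowerShell with System.Drawing; the file path is a placeholder):

```powershell
# Placeholder path; System.Drawing is available in Windows PowerShell / .NET on Windows.
Add-Type -AssemblyName System.Drawing
$path = "C:\logos\app-logo.png"

$image  = [System.Drawing.Image]::FromFile($path)
$sizeKB = [math]::Round((Get-Item $path).Length / 1KB, 1)

# The portal expects exactly 215x215 pixels, PNG format, and a file no larger than 100 KB.
"{0}x{1} pixels, {2} KB" -f $image.Width, $image.Height, $sizeKB
$image.Dispose()
```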
## Application ID
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
To enable the admin consent workflow and choose reviewers:
1. Search for and select **Azure Active Directory**. 1. Select **Enterprise applications**. 1. Under **Security**, select **Consent and permissions**.
-1. Under **Manage**, select **Admin consent settings**.
-Under **Admin consent requests**, select **Yes** for **Users can request admin consent to apps they are unable to consent to** .
+1. Under **Manage**, select **Admin consent settings**. Under **Admin consent requests**, select **Yes** for **Users can request admin consent to apps they are unable to consent to**.
![Screenshot of configure admin consent workflow settings.](./media/configure-admin-consent-workflow/enable-admin-consent-workflow.png)
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
When granting tenant-wide admin consent using either method described above, a w
The tenant-wide admin consent URL follows the following format: ```http
-https://login.microsoftonline.com/{tenant-id}/adminconsent?client_id={client-id}
+https://login.microsoftonline.com/{organization}/adminconsent?client_id={client-id}
``` where: - `{client-id}` is the application's client ID (also known as app ID).-- `{tenant-id}` is your organization's tenant ID or any verified domain name.
+- `{organization}` is the tenant ID or any verified domain name of the tenant in which you want to consent to the application. You can use the value `common`, which will cause the consent to happen in the home tenant of the user you sign in with.
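For example, a quick PowerShell sketch that builds the URL from placeholder values (the client ID shown is hypothetical):

```powershell
# Placeholders: substitute your application's client ID and the target tenant
# (a tenant ID, a verified domain name, or 'common' for the signed-in user's home tenant).
$clientId     = "00001111-aaaa-2222-bbbb-3333cccc4444"
$organization = "contoso.onmicrosoft.com"

"https://login.microsoftonline.com/$organization/adminconsent?client_id=$clientId"
```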
As always, carefully review the permissions an application requests before granting consent.
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
$spOAuth2PermissionsGrants | ForEach-Object {
} # Get all application permissions for the service principal
-$spApplicationPermissions = Get-AzureADServicePrincipalAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
+$spApplicationPermissions = Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" }
# Remove all application permissions $spApplicationPermissions | ForEach-Object {
active-directory Atlassian Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Atlassian Cloud to support provisioning with Azure AD 1. Navigate to [Atlassian Admin Console](http://admin.atlassian.com/). Select your organization if you have more than one.
-1. Select **Settings > User provisioning**.
- ![Screenshot showing the User Provisioning tab.](media/atlassian-cloud-provisioning-tutorial/atlassian-select-settings.png)
-1. Select **Create a directory**.
-1. Enter a name to identify the user directory, for example Azure AD users, then select **Create**.
- ![Screenshot showing the Create directory page.](media/atlassian-cloud-provisioning-tutorial/atlassian-create-directory.png)
-1. Copy the values for **Directory base URL** and **API key**. You'll need those for your identity provider configuration later.
-
+1. Select **Security > Identity providers**.
+1. Select your Identity provider directory.
+1. Select **Set up user provisioning**.
+1. Copy the values for **SCIM base URL** and **API key**. You'll need them when you configure Azure.
+1. Save your **SCIM configuration**.
> [!NOTE] > Make sure you store these values in a safe place, as we won't show them to you again.-
- ![Screenshot showing the API key page.](media/atlassian-cloud-provisioning-tutorial/atlassian-apikey.png)
-
+
Users and groups will automatically be provisioned to your organization. See the [user provisioning](https://support.atlassian.com/provisioning-users/docs/understand-user-provisioning) page for more details on how your users and groups sync to your organization. ## Step 3. Add Atlassian Cloud from the Azure AD application gallery
active-directory Atmos Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atmos-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Atmos to support provisioning with Azure AD
-1. Log in to the [Management Console](https://auth.axissecurity.com/).
+1. Log in to the Axis Management Console.
1. Navigate to **Settings**-> **Identity Providers** screen. 1. Hover over the **Azure Identity Provider** and select **edit**. 1. Navigate to **Advanced Settings**.
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add Atmos from the Azure AD application gallery
-Add Atmos from the Azure AD application gallery to start managing provisioning to Atmos. If you have previously setup Atmos for SSO, you can use the same application. However the recommendation is to create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add Atmos from the Azure AD application gallery to start managing provisioning to Atmos. If you have previously set up Atmos for SSO, you can use the same application. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
active-directory Hawkeyebsb Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hawkeyebsb-tutorial.md
+
+ Title: Azure Active Directory SSO integration with HawkeyeBSB
+description: Learn how to configure single sign-on between Azure Active Directory and HawkeyeBSB.
++++++++ Last updated : 02/23/2023++++
+# Azure Active Directory SSO integration with HawkeyeBSB
+
+In this article, you'll learn how to integrate HawkeyeBSB with Azure Active Directory (Azure AD). HawkeyeBSB was developed by Redbridge Debt & Treasury Advisory to help clients manage their bank fees. When you integrate HawkeyeBSB with Azure AD, you can:
+
+* Control in Azure AD who has access to HawkeyeBSB.
+* Enable your users to be automatically signed-in to HawkeyeBSB with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for HawkeyeBSB in a test environment. HawkeyeBSB supports both **SP** and **IDP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with HawkeyeBSB, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* HawkeyeBSB single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the HawkeyeBSB application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add HawkeyeBSB from the Azure AD gallery
+
+Add HawkeyeBSB from the Azure AD application gallery to configure single sign-on with HawkeyeBSB. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **HawkeyeBSB** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://hawkeye.redbridgeanalytics.com/sso/saml/metadata/<uniqueSlugPerCustomer>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://hawkeye.redbridgeanalytics.com/sso/saml/acs/<uniqueSlugPerCustomer>`
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://hawkeye.redbridgeanalytics.com/sso/saml/login/<uniqueSlugPerCustomer>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [HawkeyeBSB Client support team](mailto:casemanagement@redbridgedta.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up HawkeyeBSB** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure HawkeyeBSB SSO
+
+To configure single sign-on on the **HawkeyeBSB** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [HawkeyeBSB support team](mailto:casemanagement@redbridgedta.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create HawkeyeBSB test user
+
+In this section, you create a user called Britta Simon at HawkeyeBSB. Work with [HawkeyeBSB support team](mailto:casemanagement@redbridgedta.com) to add the users in the HawkeyeBSB platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+1. Click on **Test this application** in the Azure portal. This redirects to the HawkeyeBSB Sign-on URL, where you can initiate the login flow.
+
+1. Go to the HawkeyeBSB Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+1. Click on **Test this application** in the Azure portal, and you should be automatically signed in to the HawkeyeBSB instance for which you set up SSO.
+
+1. You can also use Microsoft My Apps to test the application in any mode. When you click the HawkeyeBSB tile in My Apps, you're redirected to the application sign-on page to initiate the login flow if the app is configured in SP mode, and you should be automatically signed in to the HawkeyeBSB instance for which you set up SSO if it's configured in IDP mode. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure HawkeyeBSB, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Introdus Pre And Onboarding Platform Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* An introdus subscription, that includes Single Sign-On (SSO)
-* A valid introdus API Token. A guide on how to generate Token, can be found [here](https://api.introdus.dk/docs/#api-OpenAPI).
+* An introdus subscription that includes single sign-on (SSO)
+* A valid introdus API Token.
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
A subscription that allows SSO. No other configuration is necessary on introdus
## Step 3. Add introDus Pre and Onboarding Platform from the Azure AD application gallery
-Add introDus Pre and Onboarding Platform from the Azure AD application gallery to start managing provisioning to introDus Pre and Onboarding Platform. If you have previously setup introDus Pre and Onboarding Platform for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add introDus Pre and Onboarding Platform from the Azure AD application gallery to start managing provisioning to introDus Pre and Onboarding Platform. If you have previously set up introDus Pre and Onboarding Platform for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to introDus Pre and Onboarding Platform**.
-9. Review the user attributes that are synchronized from Azure AD to introDus Pre and Onboarding Platform in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in introDus Pre and Onboarding Platform for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the introDus Pre and Onboarding Platform API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+9. Review the user attributes that are synchronized from Azure AD to introDus Pre and Onboarding Platform in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in introDus Pre and Onboarding Platform for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the introDus Pre and Onboarding Platform API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|Supported for filtering| ||||
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning Scope](common/provisioning-scope.png)
-13. When you are ready to provision, click **Save**.
+13. When you're ready to provision, click **Save**.
![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
active-directory Parallels Desktop Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parallels-desktop-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Parallels Desktop
+description: Learn how to configure single sign-on between Azure Active Directory and Parallels Desktop.
++++++++ Last updated : 02/23/2023++++
+# Azure Active Directory SSO integration with Parallels Desktop
+
+In this article, you'll learn how to integrate Parallels Desktop with Azure Active Directory (Azure AD). Parallels Desktop provides SSO/SAML authentication so your employees can sign in and activate Parallels Desktop with a corporate account. When you integrate Parallels Desktop with Azure AD, you can:
+
+* Control in Azure AD who has access to Parallels Desktop.
+* Enable your users to be automatically signed-in to Parallels Desktop with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Parallels Desktop in a test environment. Parallels Desktop supports only **SP** initiated single sign-on.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Parallels Desktop, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Parallels Desktop single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Parallels Desktop application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Parallels Desktop from the Azure AD gallery
+
+Add Parallels Desktop from the Azure AD application gallery to configure single sign-on with Parallels Desktop. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Parallels Desktop** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using the following pattern:
+ `https://account.parallels.com/<ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://account.parallels.com/webapp/sso/acs/<ID>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. The Identifier and Reply URL values are customer specific; you can specify them manually by copying them from Parallels My Account to Azure, the identity provider. Contact [Parallels Desktop Client support team](mailto:parallels.desktop.sso@alludo.com) for any help. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+ c. In the **Sign on URL** textbox, type the URL:-
+ `https://my.parallels.com/login?sso=1`
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot of the Certificate download link.](common/certificate-base64-download.png)
+
+1. On the **Set up Parallels Desktop** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Parallels Desktop SSO
+
+To configure single sign-on on the **Parallels Desktop** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Parallels Desktop support team](mailto:parallels.desktop.sso@alludo.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create Parallels Desktop test user
+
+In this section, you create a user called Britta Simon at Parallels Desktop. Work with [Parallels Desktop support team](mailto:parallels.desktop.sso@alludo.com) to add the users in the Parallels Desktop platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This redirects to the Parallels Desktop Sign-on URL, where you can initiate the login flow.
+
+* Go to the Parallels Desktop Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Parallels Desktop tile in My Apps, you're redirected to the Parallels Desktop Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Parallels Desktop, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
aks Aks Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-diagnostics.md
Title: Azure Kubernetes Service (AKS) Diagnostics Overview description: Learn about self-diagnosing clusters in Azure Kubernetes Service.- Last updated 11/15/2022
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md
Title: Migrate to Azure Kubernetes Service (AKS) description: Migrate to Azure Kubernetes Service (AKS).- Last updated 03/25/2021
aks Aks Planned Maintenance Weekly Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-planned-maintenance-weekly-releases.md
Title: Use Planned Maintenance for your Azure Kubernetes Service (AKS) cluster weekly releases (preview) description: Learn how to use Planned Maintenance in Azure Kubernetes Service (AKS) for cluster weekly releases- Last updated 09/16/2021
aks Aks Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-support-help.md
Title: Azure Kubernetes Service support and help options description: How to obtain help and support for questions or problems when you create solutions using Azure Kubernetes Service. - Last updated 10/18/2022
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-authorized-ip-ranges.md
Title: API server authorized IP ranges in Azure Kubernetes Service (AKS) description: Learn how to secure your cluster using an IP address range for access to the API server in Azure Kubernetes Service (AKS)- Last updated 11/04/2022
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
Title: API Server VNet Integration in Azure Kubernetes Service (AKS)
description: Learn how to create an Azure Kubernetes Service (AKS) cluster with API Server VNet Integration - Last updated 09/09/2022
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
Title: Automatically upgrade an Azure Kubernetes Service (AKS) cluster description: Learn how to automatically upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.-
aks Auto Upgrade Node Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md
Title: Automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images description: Learn how to automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images.- Last updated 02/03/2023
-# Automatically upgrade Azure Kubernetes Service cluster node operating system images
+# Automatically upgrade Azure Kubernetes Service cluster node operating system images (preview)
AKS supports upgrading the images on a node so your cluster is up to date with the newest operating system (OS) and runtime updates. AKS regularly provides new node OS images with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest AKS features and to maintain security. Before learning about auto-upgrade, make sure you understand upgrade fundamentals by reading [Upgrade an AKS cluster][upgrade-aks-cluster]. The latest AKS node image information can be found by visiting the [AKS release tracker][release-tracker]. + ## Why use node OS auto-upgrade Node OS auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS.
The following upgrade channels are available:
| `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates|N/A| | `Unmanaged`|OS updates will be applied automatically through the OS built-in patching infrastructure. Newly allocated machines will be unpatched initially and will be patched at some point by the OS's infrastructure|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows and Mariner don't apply security patches automatically, so this option behaves equivalently to `None`| | `SecurityPatch`|AKS will update the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. Where possible, patches will also be applied without disruption to existing nodes. Some patches, such as kernel patches, can't be applied to existing nodes without disruption. For such patches, the VHD will be updated and existing machines will be upgraded to that VHD following maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group.|N/A|
-| `NodeImage`|AKS will update the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades] will be disabled by default.|
+| `NodeImage`|AKS will update the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.|
To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example.
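A minimal sketch, with placeholder resource names; the parameter name is taken from the sentence above, and because the feature is in preview it may also require the `aks-preview` CLI extension.

```azurecli-interactive
# Create a cluster whose node OS images are updated weekly from the NodeImage channel (placeholder names).
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-os-upgrade-channel NodeImage
```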
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
Title: Use availability zones in Azure Kubernetes Service (AKS) description: Learn how to create a cluster that distributes nodes across availability zones in Azure Kubernetes Service (AKS)- Last updated 02/22/2023
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
Title: Integrate Azure Active Directory with Azure Kubernetes Service (legacy) description: Learn how to use the Azure CLI to create and Azure Active Directory-enabled Azure Kubernetes Service (AKS) cluster (legacy)- Last updated 11/11/2021
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. -
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
Title: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (
description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Azure CNI Powered by Cilium. -
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Service (AKS) description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks.- Last updated 07/18/2022
aks Azure Hpc Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hpc-cache.md
Title: Integrate Azure HPC Cache with Azure Kubernetes Service description: Learn how to integrate HPC Cache with Azure Kubernetes Service-
aks Azure Nfs Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-nfs-volume.md
Title: Manually create a Linux NFS Server persistent volume for Azure Kubernetes Service description: Learn how to manually create an Ubuntu Linux NFS Server persistent volume for use with pods in Azure Kubernetes Service (AKS)- Last updated 06/13/2022
aks Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices.md
Title: Best practices for Azure Kubernetes Service (AKS) description: Collection of the cluster operator and developer best practices to build and manage applications in Azure Kubernetes Service (AKS)- Last updated 03/09/2021
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Title: Certificate Rotation in Azure Kubernetes Service (AKS) description: Learn certificate rotation in an Azure Kubernetes Service (AKS) cluster.- Last updated 01/19/2023
aks Cis Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-kubernetes.md
Title: Center for Internet Security (CIS) Kubernetes benchmark description: Learn how AKS applies the CIS Kubernetes benchmark- Last updated 12/20/2022
aks Cis Ubuntu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-ubuntu.md
Title: Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Internet Security (CIS) benchmark description: Learn how AKS applies the CIS benchmark- Last updated 04/20/2022
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
Title: Use the cluster autoscaler in Azure Kubernetes Service (AKS) description: Learn how to use the cluster autoscaler to automatically scale your cluster to meet application demands in an Azure Kubernetes Service (AKS) cluster.- Last updated 10/03/2022
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
Title: Integrate Azure Container Registry with Azure Kubernetes Service description: Learn how to integrate Azure Kubernetes Service (AKS) with Azure Container Registry (ACR)- Last updated 11/16/2022 ms.tool: azure-cli, azure-powershell
aks Cluster Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md
Title: Cluster extensions for Azure Kubernetes Service (AKS) description: Learn how to deploy and manage the lifecycle of extensions on Azure Kubernetes Service (AKS)- Last updated 09/29/2022
aks Command Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/command-invoke.md
Title: Use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster description: Learn how to use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster- Last updated 1/14/2022
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Title: Concepts - Kubernetes basics for Azure Kubernetes Services (AKS) description: Learn the basic cluster and workload components of Kubernetes and how they relate to features in Azure Kubernetes Service (AKS)- Last updated 10/31/2022
aks Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-identity.md
Title: Concepts - Access and identity in Azure Kubernetes Services (AKS) description: Learn about access and identity in Azure Kubernetes Service (AKS), including Azure Active Directory integration, Kubernetes role-based access control (Kubernetes RBAC), and roles and bindings.- Last updated 09/27/2022
aks Concepts Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-scale.md
Title: Concepts - Scale applications in Azure Kubernetes Services (AKS) description: Learn about scaling in Azure Kubernetes Service (AKS), including horizontal pod autoscaler, cluster autoscaler, and the Azure Container Instances connector.- Last updated 02/28/2019
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
Title: Concepts - Security in Azure Kubernetes Services (AKS) description: Learn about security in Azure Kubernetes Service (AKS), including master and node communication, network policies, and Kubernetes secrets.- Last updated 02/22/2023
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Title: Concepts - Sustainable software engineering in Azure Kubernetes Services (AKS) description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS).- Last updated 10/25/2022
aks Configure Azure Cni Dynamic Ip Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md
Title: Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS) description: Learn how to configure Azure CNI (advanced) networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)- Last updated 01/09/2023
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
description: Learn how to configure Azure CNI (advanced) networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. - Last updated 05/16/2022
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
Title: Configure kube-proxy (iptables/IPVS) (preview) description: Learn how to configure kube-proxy to utilize different load balancing configurations with Azure Kubernetes Service (AKS).- Last updated 10/25/2022
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
description: Learn how to configure dual-stack kubenet networking in Azure Kubernetes Service (AKS) - Last updated 12/15/2021
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
description: Learn how to configure kubenet (basic) network in Azure Kubernetes Service (AKS) to deploy an AKS cluster into an existing virtual network and subnet. - Last updated 10/26/2022
aks Control Kubeconfig Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-kubeconfig-access.md
Title: Limit access to kubeconfig in Azure Kubernetes Service (AKS) description: Learn how to control access to the Kubernetes configuration file (kubeconfig) for cluster administrators and cluster users- Last updated 05/06/2020
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
Title: Customize CoreDNS for Azure Kubernetes Service (AKS) description: Learn how to customize CoreDNS to add subdomains or extend custom DNS endpoints using Azure Kubernetes Service (AKS)-
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
Title: Use the Azure Key Vault Provider for Secrets Store CSI Driver for Azure K
description: Learn how to use the Azure Key Vault Provider for Secrets Store CSI Driver to integrate secrets stores with Azure Kubernetes Service (AKS). - Last updated 02/10/2023
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
Title: Provide an access identity to the Azure Key Vault Provider for Secrets St
description: Learn about the various methods that you can use to allow the Azure Key Vault Provider for Secrets Store CSI Driver to integrate with your Azure key vault. - Last updated 01/31/2023
aks Csi Secrets Store Nginx Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-nginx-tls.md
Title: Set up Secrets Store CSI Driver to enable NGINX Ingress Controller with T
description: How to configure Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS for Azure Kubernetes Service (AKS). - Last updated 05/26/2022
aks Custom Certificate Authority https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md
Title: Custom certificate authority (CA) in Azure Kubernetes Service (AKS) (preview) description: Learn how to use a custom certificate authority (CA) in an Azure Kubernetes Service (AKS) cluster.-
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
Title: Customize the node configuration for Azure Kubernetes Service (AKS) node pools description: Learn how to customize the configuration on Azure Kubernetes Service (AKS) cluster nodes and node pools.- Last updated 12/03/2020
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
internal-app LoadBalancer 10.1.15.188 10.0.0.35 80:31669/TCP 1m
> [!NOTE] >
-> You may need to give the *Network Contributor* role to the resource group in which your Azure virtual network resources are deployed. You can view the cluster identity with [az aks show][az-aks-show], such as `az aks show --resource-group myResourceGroup --name myAKSCluster --query "identity"`. To create a role assignment, use the [az role assignment create][az-role-assignment-create] command.
+> You may need to assign at least the *Microsoft.Network/virtualNetworks/subnets/read* and *Microsoft.Network/virtualNetworks/subnets/join/action* permissions to the AKS MSI on the Azure Virtual Network resources. You can view the cluster identity with [az aks show][az-aks-show], such as `az aks show --resource-group myResourceGroup --name myAKSCluster --query "identity"`. To create a role assignment, use the [az role assignment create][az-role-assignment-create] command.
-## Specify a different subnet
+### Specify a different subnet
Add the *azure-load-balancer-internal-subnet* annotation to your service to specify a subnet for your load balancer. The subnet specified must be in the same virtual network as your AKS cluster. When deployed, the load balancer *EXTERNAL-IP* address is part of the specified subnet.
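For example, the full annotation key is `service.beta.kubernetes.io/azure-load-balancer-internal-subnet`, and a value such as `"apps-subnet"` (a placeholder subnet name) selects that subnet within the cluster's virtual network for the internal load balancer's frontend IP.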
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
For more information about the latest images provided by AKS, see the [AKS relea
For information on upgrading the Kubernetes version for your cluster, see [Upgrade an AKS cluster][upgrade-cluster].
+Node image upgrades can also be performed automatically, and scheduled by using planned maintenance. For more details, see [Automatically upgrade node images][auto-upgrade-node-image].
+ > [!NOTE] > The AKS cluster must use virtual machine scale sets for the nodes.
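For reference, a node image upgrade can also be triggered manually per node pool. The following is a minimal sketch with placeholder names; `--node-image-only` limits the upgrade to the node image without changing the Kubernetes version.

```azurecli-interactive
# Upgrade only the node image of a single node pool (placeholder names).
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-image-only
```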
az aks nodepool show \
[max-surge]: upgrade-cluster.md#customize-node-surge-upgrade [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update
+[auto-upgrade-node-image]: auto-upgrade-node-image.md
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
This process is better than updating Linux-based kernels manually because Linux
This article shows you how you can automate the update process of AKS nodes. You'll use GitHub Actions and Azure CLI to create an update task based on `cron` that runs automatically.
+Node image upgrades can also be performed automatically, and scheduled by using planned maintenance. For more details, see [Automatically upgrade node images][auto-upgrade-node-image].
+ ## Before you begin This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
jobs:
[system-pools]: use-system-pools.md [spot-pools]: spot-node-pool.md [use-multiple-node-pools]: use-multiple-node-pools.md
+[auto-upgrade-node-image]: auto-upgrade-node-image.md
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
To check the expiration date of your service principal, use the [az ad sp creden
```azurecli SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \ --query servicePrincipalProfile.clientId -o tsv)
-az ad sp credential list --id "$SP_ID" --query "[].endDateTime" -o tsv
+az ad app credential list --id "$SP_ID" --query "[].endDateTime" -o tsv
``` ### Reset the existing service principal credential
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
With a variable set that contains the service principal ID, now reset the credentials using [az ad sp credential reset][az-ad-sp-credential-reset]. The following example lets the Azure platform generate a new secure secret for the service principal. This new secure secret is also stored as a variable. ```azurecli-interactive
-SP_SECRET=$(az ad sp credential reset --id "$SP_ID" --query password -o tsv)
+SP_SECRET=$(az ad app credential reset --id "$SP_ID" --query password -o tsv)
``` Now continue on to [update AKS cluster with new service principal credentials](#update-aks-cluster-with-new-service-principal-credentials). This step is necessary for the Service Principal changes to reflect on the AKS cluster.
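For context, the update step that this paragraph points to looks roughly like the following sketch (cluster and resource group names are placeholders):

```azurecli-interactive
# Update the AKS cluster to use the reset service principal credential (placeholder names).
az aks update-credentials \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --reset-service-principal \
    --service-principal "$SP_ID" \
    --client-secret "$SP_SECRET"
```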
aks Use Mariner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md
Mariner is available for use in the same regions as AKS.
Mariner currently has the following limitations:
-* Mariner doesn't yet have image SKUs for GPU, ARM64, SGX, or FIPS.
-* Mariner doesn't yet have FedRAMP, FIPS, or CIS certification.
+* Image SKUs for SGX and FIPS are not available.
+* Mariner doesn't meet the [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/3/final) compliance requirements, and it doesn't have [Center for Internet Security (CIS)](https://www.cisecurity.org/) certification.
* Mariner can't yet be deployed through the Azure portal. * Qualys, Trivy, and Microsoft Defender for Containers are the only vulnerability scanning tools that support Mariner today.
-* The Mariner container host is a Gen 2 image. Mariner doesn't plan to offer a Gen 1 SKU.
-* Node configurations aren't yet supported.
-* Mariner isn't yet supported in GitHub actions.
* Mariner doesn't support AppArmor. Support for SELinux can be manually configured. * Some addons, extensions, and open-source integrations may not be supported yet on Mariner. Azure Monitor, Grafana, Helm, Key Vault, and Container Insights are supported.
analysis-services Analysis Services Addservprinc Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-addservprinc-admins.md
Title: Learn how to add a service principal to Azure Analysis Services admin role | Microsoft Docs description: Learn how to add an automation service principal to the Azure Analysis Services server admin role -+ Last updated 01/24/2023
analysis-services Analysis Services Async Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-async-refresh.md
Title: Learn about asynchronous refresh for Azure Analysis Services models | Microsoft Docs description: Describes how to use the Azure Analysis Services REST API to code asynchronous refresh of model data. -+ Last updated 02/02/2022
analysis-services Analysis Services Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-backup.md
Title: Learn about Azure Analysis Services database backup and restore | Microsoft Docs description: This article describes how to backup and restore model metadata and data from an Azure Analysis Services database. -+ Last updated 01/24/2023
analysis-services Analysis Services Bcdr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-bcdr.md
Title: Learn about Azure Analysis Services high availability | Microsoft Docs description: This article describes how Azure Analysis Services provides high availability during service disruption. -+ Last updated 01/24/2023
analysis-services Analysis Services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-capacity-limits.md
Title: Learn about Azure Analysis Services resource and object limits | Microsoft Docs description: This article describes resource and object limits for an Azure Analysis Services server. -+ Last updated 01/24/2023
analysis-services Analysis Services Connect Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-excel.md
Title: Learn how to connect to Azure Analysis Services with Excel | Microsoft Docs description: Learn how to connect to an Azure Analysis Services server by using Excel. Once connected, users can create PivotTables to explore data. -+ Last updated 01/24/2023
analysis-services Analysis Services Connect Pbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-pbi.md
Title: Learn how to connect to Azure Analysis Services with Power BI | Microsoft Docs description: Learn how to connect to an Azure Analysis Services server by using Power BI. Once connected, users can explore model data. -+ Last updated 01/24/2023
analysis-services Analysis Services Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect.md
Title: Learn about connecting to Azure Analysis Services servers| Microsoft Docs description: Learn how to connect to and get data from an Analysis Services server in Azure. -+ Last updated 01/24/2023
analysis-services Analysis Services Create Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md
Title: Quickstart - Create an Azure Analysis Services server resource by using B
description: Quickstart showing how to an Azure Analysis Services server resource by using a Bicep file. Last updated 03/08/2022 -+ tags: azure-resource-manager, bicep
analysis-services Analysis Services Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-powershell.md
Last updated 01/26/2023 -+ #Customer intent: As a BI developer, I want to create an Azure Analysis Services server by using PowerShell.
analysis-services Analysis Services Create Sample Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-sample-model.md
Title: Tutorial - Add a sample model- Azure Analysis Services | Microsoft Docs description: In this tutorial, learn how to add a sample model in Azure Analysis Services. -+ Last updated 01/26/2023
analysis-services Analysis Services Create Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-server.md
Last updated 01/26/2023 -+ #Customer intent: As a BI developer, I want to create an Azure Analysis Services server by using the Azure portal.
analysis-services Analysis Services Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-template.md
Last updated 01/26/2023 -+ tags: azure-resource-manager #Customer intent: As a BI developer who is new to Azure, I want to use Azure Analysis Services to store and manage my organizations data models.
analysis-services Analysis Services Database Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-database-users.md
Title: Learn how to manage database roles and users in Azure Analysis Services | Microsoft Docs description: Learn how to manage database roles and users on an Analysis Services server in Azure. -+ Last updated 01/27/2023
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-datasource.md
Title: Learn about data sources supported in Azure Analysis Services | Microsoft Docs description: Describes data sources and connectors supported for tabular 1200 and higher data models in Azure Analysis Services. -+ Last updated 01/27/2023
analysis-services Analysis Services Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-deploy.md
Title: Learn how to deploy a model to Azure Analysis Services by using Visual Studio | Microsoft Docs description: Learn how to deploy a tabular model to an Azure Analysis Services server by using Visual Studio. -+ Last updated 01/27/2023
analysis-services Analysis Services Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway-install.md
Title: Learn how to install On-premises data gateway for Azure Analysis Services | Microsoft Docs description: Learn how to install and configure an On-premises data gateway to connect to on-premises data sources from an Azure Analysis Services server. -+ Last updated 01/27/2023
analysis-services Analysis Services Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway.md
Title: Learn about the On-premises data gateway for Azure Analysis Services | Microsoft Docs description: An On-premises gateway is necessary if your Analysis Services server in Azure will connect to on-premises data sources. -+ Last updated 01/27/2023
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
Title: Learn about diagnostic logging for Azure Analysis Services | Microsoft Docs description: Describes how to setup up logging to monitoring your Azure Analysis Services server. -+ Last updated 01/27/2023
analysis-services Analysis Services Long Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-long-operations.md
Title: Learn about best practices for long running operations in Azure Analysis Services | Microsoft Docs description: This article describes best practices for long running operations. -+ Last updated 01/27/2023
analysis-services Analysis Services Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage-users.md
Title: Azure Analysis Services authentication and user permissions| Microsoft Docs description: This article describes how Azure Analysis Services uses Azure Active Directory (Azure AD) for identity management and user authentication. -+ Last updated 02/02/2022
analysis-services Analysis Services Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage.md
Title: Manage Azure Analysis Services | Microsoft Docs description: This article describes the tools used to manage administration and management tasks for an Azure Analysis Services server. -+ Last updated 02/02/2022
analysis-services Analysis Services Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-monitor.md
Title: Monitor Azure Analysis Services server metrics | Microsoft Docs description: Learn how Analysis Services use Azure Metrics Explorer, a free tool in the portal, to help you monitor the performance and health of your servers. -+ Last updated 03/04/2020
analysis-services Analysis Services Odc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-odc.md
Title: Connect to Azure Analysis Services with an .odc file | Microsoft Docs description: Learn how to create an Office Data Connection file to connect to and get data from an Analysis Services server in Azure. -+ Last updated 04/27/2021
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Title: What is Azure Analysis Services? description: Learn about Azure Analysis Services, a fully managed platform as a service (PaaS) that provides enterprise-grade data models in the cloud. -+ Last updated 02/15/2022
analysis-services Analysis Services Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-powershell.md
Title: Manage Azure Analysis Services with PowerShell | Microsoft Docs description: Describes Azure Analysis Services PowerShell cmdlets for common administrative tasks such as creating servers, suspending operations, or changing service level. -+ Last updated 04/27/2021
analysis-services Analysis Services Qs Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-qs-firewall.md
Last updated 08/12/2020 -+ #Customer intent: As a BI developer, I want to secure my server by configuring a server firewall and create open IP address ranges for client computers in my organization.
analysis-services Analysis Services Refresh Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-azure-automation.md
Title: Refresh Azure Analysis Services models with Azure Automation | Microsoft Docs description: This article describes how to code model refreshes for Azure Analysis Services by using Azure Automation. -+ Last updated 12/01/2020
analysis-services Analysis Services Refresh Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-logic-app.md
Title: Refresh with Logic Apps for Azure Analysis Services models | Microsoft Docs description: This article describes how to code asynchronous refresh for Azure Analysis Services by using Azure Logic Apps. -+ Last updated 10/30/2019
analysis-services Analysis Services Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-samples.md
Title: Azure Analysis Services code, project, and database samples description: This article describes resources to learn about code, project, and database samples for Azure Analysis Services. -+ Last updated 04/27/2021
analysis-services Analysis Services Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-scale-out.md
Title: Azure Analysis Services scale-out| Microsoft Docs description: Replicate Azure Analysis Services servers with scale-out. Client queries can then be distributed among multiple query replicas in a scale-out query pool. -+ Last updated 04/27/2021
analysis-services Analysis Services Server Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-admins.md
Title: Manage server admins in Azure Analysis Services | Microsoft Docs description: This article describes how to manage server administrators for an Azure Analysis Services server by using the Azure portal, PowerShell, or REST APIs. -+ Last updated 02/02/2022
analysis-services Analysis Services Server Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-alias.md
Title: Azure Analysis Services alias server names | Microsoft Docs description: Learn how to create Azure Analysis Services server name aliases. Users can then connect to your server with a shorter alias name instead of the server name. -+ Last updated 12/07/2021
analysis-services Analysis Services Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-service-principal.md
Title: Automate Azure Analysis Services tasks with service principals | Microsoft Docs description: Learn how to create a service principal for automating Azure Analysis Services administrative tasks. -+ Last updated 02/02/2022
analysis-services Analysis Services Vnet Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-vnet-gateway.md
Title: Configure Azure Analysis Services for VNet data sources | Microsoft Docs description: Learn how to configure an Azure Analysis Services server to use a gateway for data sources on Azure Virtual Network (VNet). -+ Last updated 02/02/2022
analysis-services Move Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/move-between-regions.md
Title: Move Azure Analysis Services to a different region | Microsoft Docs description: Describes how to move an Azure Analysis Services resource to a different region. -+ Last updated 12/01/2020
analysis-services Analysis Services Tutorial Pbid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/tutorials/analysis-services-tutorial-pbid.md
Title: Tutorial - Connect Azure Analysis Services with Power BI Desktop | Microsoft Docs description: In this tutorial, learn how to get an Analysis Services server name from the Azure portal and then connect to the server by using Power BI Desktop.-+ Last updated 02/02/2022
analysis-services Analysis Services Tutorial Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/tutorials/analysis-services-tutorial-roles.md
Title: Tutorial - Configure Azure Analysis Services roles | Microsoft Docs description: In this tutorial, learn how to configure Azure Analysis Services administrator and user roles by using the Azure portal or SQL Server Management Studio. -+ Last updated 10/12/2021
app-service Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ftp.md
Check that you've entered the correct [hostname](#get-ftps-endpoint) and [creden
#### How can I connect to FTP in Azure App Service via passive mode? Azure App Service supports connecting via both Active and Passive mode. Passive mode is preferred because your deployment machines are usually behind a firewall (in the operating system or as part of a home or business network). See an [example from the WinSCP documentation](https://winscp.net/docs/ui_login_connection).
+### How can I determine the method that was used to deploy my Azure App Service?
+Suppose you take ownership of an app and want to find out how it was deployed so that you can make and deploy changes. You can determine how an Azure App Service app was deployed by checking its application settings. If the app was deployed using an external package URL, the WEBSITE_RUN_FROM_PACKAGE setting appears in the application settings with a URL value. If it was deployed using zip deploy, WEBSITE_RUN_FROM_PACKAGE appears with a value of 1. If the app was deployed using Azure DevOps, the deployment history is available in the Azure DevOps portal. If Azure Functions Core Tools was used, the deployment history is available in the Azure portal.
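+
+For example, one way to inspect these application settings from the command line is with the Azure CLI; the app and resource group names below are placeholders, so substitute your own:
+
+```azurecli
+# List the app settings and filter for WEBSITE_RUN_FROM_PACKAGE
+az webapp config appsettings list \
+  --name <app-name> \
+  --resource-group <resource-group> \
+  --query "[?name=='WEBSITE_RUN_FROM_PACKAGE'].{name:name, value:value}" \
+  --output table
+```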
+ ## More resources * [Local Git deployment to Azure App Service](deploy-local-git.md)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 02/16/2023 Last updated : 02/22/2023
At this time, the migration feature doesn't support migrations to App Service En
### Azure Government: - US DoD Central-- US Gov Arizona ### Azure China:
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 02/15/2023 Last updated : 02/22/2023
App Service Environment v3 is available in the following regions:
| US Gov Arizona | ✅ | | ✅ | | US Gov Iowa | | | ✅ | | US Gov Texas | ✅ | | ✅ |
-| US Gov Virginia | ✅ | | ✅ |
+| US Gov Virginia | ✅ |✅ | ✅ |
### Azure China:
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
Previously updated : 09/13/2022 Last updated : 02/23/2023
Subnet Size /24 = 256 IP addresses - 5 reserved from the platform = 251 availabl
> It is possible to change the subnet of an existing Application Gateway within the same virtual network. You can do this using Azure PowerShell or Azure CLI. For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway) ### Virtual network permission
+Since application gateway resources are deployed within a virtual network, Application Gateway performs a check to verify the permission on the provided virtual network resource. This validation is performed during both creation and management operations.
-Since application gateway resources are deployed within a virtual network resource, Application Gateway performs a check to verify the permission on the provided virtual network resource. This is verified during both create and manage operations.
+You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify that the users or service principals that operate application gateways have at least **Microsoft.Network/virtualNetworks/subnets/join/action** permission. Use built-in roles, such as [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor), which already support this permission. If a built-in role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md). Learn more about [managing subnet permissions](../virtual-network/virtual-network-manage-subnet.md#permissions). You may have to allow sufficient time for [Azure Resource Manager cache refresh](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after role assignment changes.
-You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify that users or Service Principals who operate application gateways have at least **Microsoft.Network/virtualNetworks/subnets/join/action** or some higher permission such as the built-in [Network contributor](../role-based-access-control/built-in-roles.md) role on the virtual network. Visit [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md) to know more on subnet permissions.
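+
+For example, a minimal way to grant this permission with the Azure CLI is to assign the built-in Network contributor role at the subnet scope; the principal and resource IDs below are placeholders:
+
+```azurecli
+# Assign Network contributor on the Application Gateway subnet to a user or service principal
+az role assignment create \
+  --assignee "<object-id-or-sign-in-name>" \
+  --role "Network Contributor" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"
+```
+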
+#### Identifying affected users or service principals for your subscription
+By visiting Azure Advisor for your account, you can check whether your subscription has any users or service principals with insufficient permissions. The details of that recommendation are as follows:
-If a [built-in](../role-based-access-control/built-in-roles.md) role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md) for this purpose. Also, [allow sufficient time](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after you make changes to a role assignments.
+**Title**: Update VNet permission of Application Gateway users </br>
+**Category**: Reliability </br>
+**Impact**: High </br>
+
+#### Using temporary Azure Feature Exposure Control (AFEC) flag
+
+As a temporary extension, we have introduced a subscription-level [Azure Feature Exposure Control (AFEC)](../azure-resource-manager/management/preview-features.md?tabs=azure-portal) that you can register for, until you fix the permissions for all your users and/or service principals. [Set up this flag](../azure-resource-manager/management/preview-features.md?#required-access) for your Azure subscription.
+
+**Name**: Microsoft.Network/DisableApplicationGatewaySubnetPermissionCheck </br>
+**Description**: Disable Application Gateway Subnet Permission Check </br>
+**ProviderNamespace**: Microsoft.Network </br>
+**EnrollmentType**: AutoApprove </br>
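+
+For instance, a subscription owner or contributor could register this flag with the Azure CLI, as sketched below; registration can take a few minutes to propagate:
+
+```azurecli
+# Register the temporary AFEC flag on the current subscription
+az feature register --namespace Microsoft.Network --name DisableApplicationGatewaySubnetPermissionCheck
+
+# Check the registration state
+az feature show --namespace Microsoft.Network --name DisableApplicationGatewaySubnetPermissionCheck --query properties.state
+```
+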
> [!NOTE]
-> As a temporary extension, we have introduced a subscription-level [Azure Feature Exposure Control (AFEC)](../azure-resource-manager/management/preview-features.md?tabs=azure-portal) flag to help you fix the permissions for all your users and/or service principals' permissions. Register for this interim feature on your own through a subscription owner, contributor, or custom role. </br>
->
-> "**name**": "Microsoft.Network/DisableApplicationGatewaySubnetPermissionCheck", </br>
-> "**description**": "Disable Application Gateway Subnet Permission Check", </br>
-> "**providerNamespace**": "Microsoft.Network", </br>
-> "**enrollmentType**": "AutoApprove" </br>
->
-> The provision to circumvent the virtual network permission check by using this feature control is **available only for a limited period, until 6th April 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions. [Set up this flag in your Azure subscription](../azure-resource-manager/management/preview-features.md?tabs=azure-portal).
+> The provision to circumvent the virtual network permission check by using this feature control (AFEC) is available only for a limited period, **until 6th April 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions. Set up this flag in your Azure subscription.
## Network security groups
automation Automation Solution Vm Management Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-config.md
Title: Configure Azure Automation Start/Stop VMs during off-hours
description: This article tells how to configure the Start/Stop VMs during off-hours feature to support different use cases or scenarios. Previously updated : 01/04/2023 Last updated : 02/23/2023
# Configure Start/Stop VMs during off-hours > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
This article describes how to configure the [Start/Stop VMs during off-hours](automation-solution-vm-management.md) feature to support the described scenarios. You can also learn how to:
automation Automation Solution Vm Management Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-remove.md
Title: Remove Azure Automation Start/Stop VMs during off-hours overview
description: This article describes how to remove the Start/Stop VMs during off-hours feature and unlink an Automation account from the Log Analytics workspace. Previously updated : 01/04/2023 Last updated : 02/23/2023
# Remove Start/Stop VMs during off-hours from Automation account > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
After you enable the Start/Stop VMs during off-hours feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done using one of the following methods based on the supported deployment models:
automation Overview Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md
Title: Azure Automation Change Tracking and Inventory overview using Azure Monit
description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent (Preview), which helps you identify software and Microsoft service changes in your environment. Previously updated : 12/14/2022 Last updated : 02/23/2023
This article explains on the latest version of change tracking support using Azu
## Key benefits -- **Compatibility with the unified monitoring agent** - Compatible with the [Azure Monitor Agent (Preview)](/azure/azure-monitor/agents/agents-overview) that enhances security, reliability, and facilitates multi-homing experience to store data.
+- **Compatibility with the unified monitoring agent** - Compatible with the [Azure Monitor Agent (Preview)](../../azure-monitor/agents/agents-overview.md) that enhances security, reliability, and facilitates multi-homing experience to store data.
- **Compatibility with tracking tool**- Compatible with the Change tracking (CT) extension deployed through the Azure Policy on the client's virtual machine. You can switch to Azure Monitor Agent (AMA), and then the CT extension pushes the software, files, and registry to AMA.-- **Multi-homing experience** - Provides standardization of management from one central workspace. You can [transition from Log Analytics (LA) to AMA](/azure/azure-monitor/agents/azure-monitor-agent-migration) so that all VMs point to a single workspace for data collection and maintenance.
+- **Multi-homing experience** - Provides standardization of management from one central workspace. You can [transition from Log Analytics (LA) to AMA](../../azure-monitor/agents/azure-monitor-agent-migration.md) so that all VMs point to a single workspace for data collection and maintenance.
- **Rules management** - Uses [Data Collection Rules](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-public-preview/) to configure or customize various aspects of data collection. For example, you can change the frequency of file collection. ## Current limitations
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
# Supported regions for linked Log Analytics workspace > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
+> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon.
In Azure Automation, you can enable the Update Management, Change Tracking and Inventory, and Start/Stop VMs during off-hours features for your servers and virtual machines. These features have a dependency on a Log Analytics workspace, and therefore require linking the workspace with an Automation account. However, only certain regions are supported to link them together. In general, the mapping is *not* applicable if you plan to link an Automation account to a workspace that won't have these features enabled.
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
For at-scale migration of multiple Agent based Hybrid Workers, you can also use
#### [Bicep template](#tab/bicep-template)
-You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/azure/azure-resource-manager/bicep/overview)
+You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](../azure-resource-manager/bicep/overview.md).
```Bicep param automationAccount string
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md
Title: Troubleshoot Azure Automation Update Management issues
description: This article tells how to troubleshoot and resolve issues with Azure Automation Update Management. Previously updated : 06/10/2021 Last updated : 02/23/2023
Deploying updates to Linux by classification ("Critical and security updates") h
### KB2267602 is consistently missing
-KB2267602 is the [Windows Defender definition update](https://www.microsoft.com/wdsi/definitions). It's updated daily.
+KB2267602 is the [Windows Defender definition update](https://www.microsoft.com/en-us/wdsi/defenderupdates). It's updated daily.
## Next steps
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md
Title: Quickstart to learn how to use Azure App Configuration description: In this quickstart, create a Java Spring app with Azure App Configuration to centralize storage and management of application settings separate from your code. ms.devlang: java Previously updated : 05/02/2022 Last updated : 02/22/2023 #Customer intent: As a Java Spring developer, I want to manage all my app settings in one place. + # Quickstart: Create a Java Spring app with Azure App Configuration In this quickstart, you incorporate Azure App Configuration into a Java Spring app to centralize storage and management of application settings separate from your code.
In this quickstart, you incorporate Azure App Configuration into a Java Spring a
- Azure subscription - [create one for free](https://azure.microsoft.com/free/) - A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11. - [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.
+- A Spring Boot application. If you don't have one, create a Maven project with the [Spring Initializr](https://start.spring.io/). Be sure to select **Maven Project** and, under **Dependencies**, add the **Spring Web** dependency, and then select Java version 8 or higher.
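+
+  If you prefer the command line, the Spring Initializr also exposes a REST API. The following is a sketch that assumes the quickstart defaults; the artifact name is a placeholder:
+
+  ```bash
+  # Generate a Maven project with the Spring Web dependency and Java 11
+  curl https://start.spring.io/starter.zip \
+    -d type=maven-project \
+    -d dependencies=web \
+    -d javaVersion=11 \
+    -d artifactId=demo \
+    -o demo.zip && unzip demo.zip -d demo
+  ```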
## Create an App Configuration store [!INCLUDE [azure-app-configuration-create](../../includes/azure-app-configuration-create.md)]
-7. Select **Configuration Explorer** > **+ Create** > **Key-value** to add the following key-value pairs:
-
- | Key | Value |
- |||
- | /application/config.message | Hello |
-
- Leave **Label** and **Content Type** empty for now.
-
-8. Select **Apply**.
-
-## Create a Spring Boot app
+9. Select **Configuration Explorer** > **+ Create** > **Key-value** to add the following key-value pairs:
-To create a new Spring Boot project:
+ | Key | Value |
+ |||
+ | /application/config.message | Hello |
-1. Browse to the [Spring Initializr](https://start.spring.io).
+ Leave **Label** and **Content Type** empty for now.
-1. Specify the following options:
-
- - Generate a **Maven** project with **Java**.
- - Specify a **Spring Boot** version that's equal to or greater than 2.0.
- - Specify the **Group** and **Artifact** names for your application.
- - Add the **Spring Web** dependency.
-
-1. After you specify the previous options, select **Generate Project**. When prompted, download the project to a path on your local computer.
+10. Select **Apply**.
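+
+As an alternative to steps 9 and 10, you can create the same key-value from the command line with the Azure CLI; the store name below is a placeholder:
+
+```azurecli
+az appconfig kv set --name <app-configuration-store-name> --key /application/config.message --value Hello --yes
+```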
## Connect to an App Configuration store
-1. After you extract the files on your local system, your simple Spring Boot application is ready for editing. Locate the *pom.xml* file in the root directory of your app.
-
-1. Open the *pom.xml* file in a text editor, and add the Spring Cloud Azure Config starter to the list of `<dependencies>`:
-
- **Spring Boot 2.6**
+Now that you have an App Configuration store, you can use the Spring Cloud Azure Config starter to have your application communicate with the store that you created.
- ```xml
- <dependency>
- <groupId>com.azure.spring</groupId>
- <artifactId>azure-spring-cloud-appconfiguration-config</artifactId>
- <version>2.6.0</version>
- </dependency>
- ```
+To install the Spring Cloud Azure Config starter module, add the following dependency to your *pom.xml* file:
- > [!NOTE]
- > If you need to support an older version of Spring Boot see our [old library](https://github.com/Azure/azure-sdk-for-jav).
+```xml
+<dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-appconfiguration-config</artifactId>
+ <version>2.11.0</version>
+</dependency>
+```
-1. Create a new Java file named *MessageProperties.java* in the package directory of your app. Add the following lines:
+> [!NOTE]
+> If you need to support an older version of Spring Boot, see our [old library](https://github.com/Azure/azure-sdk-for-jav).
- ```java
- package com.example.demo;
+### Code the application
- import org.springframework.boot.context.properties.ConfigurationProperties;
+To have your application communicate with the App Configuration store that you created, configure the application by using the following steps.
- @ConfigurationProperties(prefix = "config")
- public class MessageProperties {
- private String message;
+1. Create a new Java file named *MessageProperties.java*, and add the following lines:
- public String getMessage() {
- return message;
- }
+ ```java
+ import org.springframework.boot.context.properties.ConfigurationProperties;
- public void setMessage(String message) {
- this.message = message;
- }
- }
- ```
+ @ConfigurationProperties(prefix = "config")
+ public class MessageProperties {
+ private String message;
-1. Create a new Java file named *HelloController.java* in the package directory of your app. Add the following lines:
+ public String getMessage() {
+ return message;
+ }
- ```java
- package com.example.demo;
+ public void setMessage(String message) {
+ this.message = message;
+ }
+ }
+ ```
- import org.springframework.web.bind.annotation.GetMapping;
- import org.springframework.web.bind.annotation.RestController;
+1. Create a new Java file named *HelloController.java*, and add the following lines:
- @RestController
- public class HelloController {
- private final MessageProperties properties;
+ ```java
+ import org.springframework.web.bind.annotation.GetMapping;
+ import org.springframework.web.bind.annotation.RestController;
- public HelloController(MessageProperties properties) {
- this.properties = properties;
- }
+ @RestController
+ public class HelloController {
+ private final MessageProperties properties;
- @GetMapping
- public String getMessage() {
- return "Message: " + properties.getMessage();
- }
- }
- ```
+ public HelloController(MessageProperties properties) {
+ this.properties = properties;
+ }
-1. Open the main application Java file, and add `@EnableConfigurationProperties` to enable this feature.
+ @GetMapping
+ public String getMessage() {
+ return "Message: " + properties.getMessage();
+ }
+ }
+ ```
- ```java
- import org.springframework.boot.context.properties.EnableConfigurationProperties;
+1. In the main application Java file, add `@EnableConfigurationProperties` to enable the *MessageProperties.java* configuration properties class to take effect and register it with the Spring container.
- @SpringBootApplication
- @EnableConfigurationProperties(MessageProperties.class)
- public class DemoApplication {
- public static void main(String[] args) {
- SpringApplication.run(DemoApplication.class, args);
- }
- }
- ```
+ ```java
+ import org.springframework.boot.context.properties.EnableConfigurationProperties;
-1. Open the auto-generated unit test and update to disable Azure App Configuration, or it will try to load from the service when runnings unit tests.
+ @SpringBootApplication
+ @EnableConfigurationProperties(MessageProperties.class)
+ public class DemoApplication {
+ public static void main(String[] args) {
+ SpringApplication.run(DemoApplication.class, args);
+ }
+ }
+ ```
- ```java
- package com.example.demo;
+1. Open the auto-generated unit test and update it to disable Azure App Configuration; otherwise, it will try to load from the service when running unit tests.
- import org.junit.jupiter.api.Test;
- import org.springframework.boot.test.context.SpringBootTest;
+ ```java
+ import org.junit.jupiter.api.Test;
+ import org.springframework.boot.test.context.SpringBootTest;
- @SpringBootTest(properties = "spring.cloud.azure.appconfiguration.enabled=false")
- class DemoApplicationTests {
+ @SpringBootTest(properties = "spring.cloud.azure.appconfiguration.enabled=false")
+ class DemoApplicationTests {
- @Test
- void contextLoads() {
- }
+ @Test
+ void contextLoads() {
+ }
- }
- ```
+ }
+ ```
-1. Create a new file named `bootstrap.properties` under the resources directory of your app, and add the following line to the file.
+1. Create a new file named *bootstrap.properties* under the resources directory of your app, and add the following line to the file.
- ```CLI
- spring.cloud.azure.appconfiguration.stores[0].connection-string= ${APP_CONFIGURATION_CONNECTION_STRING}
- ```
+ ```properties
+ spring.cloud.azure.appconfiguration.stores[0].connection-string= ${APP_CONFIGURATION_CONNECTION_STRING}
+ ```
1. Set an environment variable named **APP_CONFIGURATION_CONNECTION_STRING**, and set it to the access key to your App Configuration store. At the command line, run the following command and restart the command prompt to allow the change to take effect:
- ```cmd
- setx APP_CONFIGURATION_CONNECTION_STRING "connection-string-of-your-app-configuration-store"
- ```
+ ```cmd
+ setx APP_CONFIGURATION_CONNECTION_STRING "connection-string-of-your-app-configuration-store"
+ ```
- If you use Windows PowerShell, run the following command:
+ If you use Windows PowerShell, run the following command:
- ```azurepowershell
- $Env:APP_CONFIGURATION_CONNECTION_STRING = "connection-string-of-your-app-configuration-store"
- ```
+ ```azurepowershell
+ $Env:APP_CONFIGURATION_CONNECTION_STRING = "connection-string-of-your-app-configuration-store"
+ ```
- If you use macOS or Linux, run the following command:
+ If you use macOS or Linux, run the following command:
- ```cmd
- export APP_CONFIGURATION_CONNECTION_STRING='connection-string-of-your-app-configuration-store'
- ```
+ ```cmd
+ export APP_CONFIGURATION_CONNECTION_STRING='connection-string-of-your-app-configuration-store'
+ ```
-## Build and run the app locally
+### Build and run the app locally
1. Open a command prompt, go to the root directory of your app, and run the following commands to build your Spring Boot application with Maven and run it.
- ```cmd
- mvn clean package
- mvn spring-boot:run
- ```
+ ```cmd
+ mvn clean package
+ mvn spring-boot:run
+ ```
-2. After your application is running, use *curl* to test your application, for example:
+1. After your application is running, use *curl* to test your application, for example:
- ```cmd
- curl -X GET http://localhost:8080/
- ```
+ ```cmd
+ curl -X GET http://localhost:8080/
+ ```
- You see the message that you entered in the App Configuration store.
+ You see the message that you entered in the App Configuration store.
## Clean up resources
azure-arc Tutorial Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-workload-management.md
+
+ Title: 'Tutorial: Workload management in a multi-cluster environment with GitOps'
+description: This tutorial walks through typical use-cases that Platform and Application teams face on a daily basis working with Kubernetes workloads in a multi-cluster environment.
+keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, ci/cd, devops"
++++ Last updated : 02/23/2023+++
+# Tutorial: Workload management in a multi-cluster environment with GitOps
+
+Enterprise organizations that develop cloud-native applications face challenges in deploying, configuring, and promoting a great variety of applications and services across a fleet of Kubernetes clusters at scale. This fleet may include Azure Kubernetes Service (AKS) clusters as well as clusters running on other public cloud providers or in on-premises data centers that are connected to Azure through Azure Arc.
+
+This tutorial walks you through typical scenarios of the workload deployment and configuration in a multi-cluster Kubernetes environment. First, you deploy a sample infrastructure with a few GitHub repositories and AKS clusters. Next, you work through a set of use cases where you act as different personas working in the same environment: the Platform Team and the Application Team.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Onboard a new application
+> * Schedule an application on the cluster types
+> * Promote an application across rollout environments
+> * Build and deploy an application
+> * Provide platform configurations
+> * Add a new cluster type to your environment
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+In order to successfully deploy the sample, you need:
+
+- [Azure CLI](/cli/azure/install-azure-cli).
+- [GitHub CLI](https://cli.github.com)
+- [Helm](https://helm.sh/docs/helm/helm_install/)
+- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl)
+
+## 1 - Deploy the sample
+
+To deploy the sample, run the following script:
+
+```bash
+mkdir kalypso && cd kalypso
+curl -fsSL -o deploy.sh https://raw.githubusercontent.com/microsoft/kalypso/main/deploy/deploy.sh
+chmod 700 deploy.sh
+./deploy.sh -c -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+```
+
+This script may take 10-15 minutes to complete. After it's done, it reports the execution result in the output like this:
+
+```output
+Deployment is complete!
+
+Created repositories:
+ - https://github.com/eedorenko/kalypso-control-plane
+ - https://github.com/eedorenko/kalypso-gitops
+ - https://github.com/eedorenko/kalypso-app-src
+ - https://github.com/eedorenko/kalypso-app-gitops
+
+Created AKS clusters in kalypso-rg resource group:
+ - control-plane
+ - drone (Flux based workload cluster)
+ - large (ArgoCD based workload cluster)
+
+```
+
+> [!NOTE]
+> If something goes wrong with the deployment, you can delete the created resources with the following command:
+>
+> ```bash
+> ./deploy.sh -d -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+> ```
+
+### Sample overview
+
+This deployment script created an infrastructure, shown in the following diagram:
++
+There are a few Platform Team repositories:
+
+- [Control Plane](https://github.com/microsoft/kalypso-control-plane): Contains a platform model defined with high level abstractions such as environments, cluster types, applications and services, mapping rules and configurations, and promotion workflows.
+- [Platform GitOps](https://github.com/microsoft/kalypso-gitops): Contains final manifests that represent the topology of the fleet, such as which cluster types are available in each environment, what workloads are scheduled on them, and what platform configuration values are set.
+- [Services Source](https://github.com/microsoft/kalypso-svc-src): Contains high-level manifest templates of sample dial-tone platform services.
+- [Services GitOps](https://github.com/microsoft/kalypso-svc-gitops): Contains final manifests of sample dial-tone platform services to be deployed across the clusters.
+
+The infrastructure also includes a couple of the Application Team repositories:
+
+- [Application Source](https://github.com/microsoft/kalypso-app-src): Contains a sample application source code, including Docker files, manifest templates and CI/CD workflows.
+- [Application GitOps](https://github.com/microsoft/kalypso-app-gitops): Contains final sample application manifests to be deployed to the deployment targets.
+
+The script created the following Azure Kubernetes Service (AKS) clusters:
+
+- `control-plane` - This cluster is a management cluster that doesn't run any workloads. The `control-plane` cluster hosts the [Kalypso Scheduler](https://github.com/microsoft/kalypso-scheduler) operator, which transforms high-level abstractions from the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repository into the raw Kubernetes manifests in the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository.
+- `drone` - A sample workload cluster. This cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed and it uses `Flux` to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository. For this sample, the `drone` cluster can represent an Azure Arc-enabled cluster or an AKS cluster with the Flux/GitOps extension.
+- `large` - A sample workload cluster. This cluster has `ArgoCD` installed on it to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository.
+
+### Explore Control Plane
+
+The `control plane` repository contains three branches: `main`, `dev`, and `stage`. The `dev` and `stage` branches contain configurations that are specific to the `Dev` and `Stage` environments. The `main` branch, on the other hand, doesn't represent any specific environment; its content is common to all environments in the fleet. Any change to the `main` branch is subject to promotion across environments. For example, a new application or a new template can be promoted to the `Stage` environment only after successful testing in the `Dev` environment.
+
+The `main` branch:
+
+|Folder|Description|
+||--|
+|.github/workflows| Contains GitHub workflows that implement the promotional flow.|
+|.environments| Contains a list of environments with pointers to the branches with the environment configurations.|
+|templates| Contains manifest templates for various reconcilers (for example, Flux and ArgoCD) and a template for the workload namespace.|
+|workloads| Contains a list of onboarded applications and services with pointers to the corresponding GitOps repositories.|
+
+The `dev` and `stage` branches:
+
+|Item|Description|
+|-|--|
+|cluster-types| Contains a list of available cluster types in the environment. The cluster types are grouped in custom subfolders. Each cluster type is marked with a set of labels and specifies the reconciler type that it uses to fetch manifests from GitOps repositories. The subfolders also contain a number of config maps with the platform configuration values available on the cluster types.|
+|configs/dev-config.yaml| Contains config maps with the platform configuration values applicable for all cluster types in the environment.|
+|scheduling| Contains scheduling policies that map workload deployment targets to the cluster types in the environment.|
+|base-repo.yaml| A pointer to the place in the `Control Plane` repository (`main`) from where the scheduler should take templates and workload registrations.|
+|gitops-repo.yaml| A pointer to the place in the `Platform GitOps` repository to where the scheduler should PR generated manifests.|
+
+> [!TIP]
+> The folder structure in the `Control Plane` repository doesn't really matter. This tutorial provides a sample of how you can organize files in the repository, but feel free to do it in your own preferred way. The scheduler is interested in the content of the files, rather than where the files are located.
+
+## 2 - Platform Team: Onboard a new application
+
+The Application Team runs their software development lifecycle. They build their application and promote it across environments. They're not aware of what cluster types are available in the fleet and where their application will be deployed. But they do know that they want to deploy their application in `Dev` environment for functional and performance testing and in `Stage` environment for UAT testing.
+
+The Application Team describes this intention in the [workload](https://github.com/microsoft/kalypso-app-src/blob/main/workload/workload.yaml) file in the [Application Source](https://github.com/microsoft/kalypso-app-src) repository:
+
+```yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: Workload
+metadata:
+ name: hello-world-app
+ labels:
+ type: application
+ family: force
+spec:
+ deploymentTargets:
+ - name: functional-test
+ labels:
+ purpose: functional-test
+ edge: "true"
+ environment: dev
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: dev
+ path: ./functional-test
+ - name: performance-test
+ labels:
+ purpose: performance-test
+ edge: "false"
+ environment: dev
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: dev
+ path: ./performance-test
+ - name: uat-test
+ labels:
+ purpose: uat-test
+ environment: stage
+ manifests:
+ repo: https://github.com/microsoft/kalypso-app-gitops
+ branch: stage
+ path: ./uat-test
+```
+
+This file contains a list of three deployment targets. These targets are marked with custom labels and point to the folders in [Application GitOps](https://github.com/microsoft/kalypso-app-gitops) repository where the Application Team generates application manifests for each deployment target.
+
+With this file, the Application Team requests Kubernetes compute resources from the Platform Team. In response, the Platform Team must register the application in the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repo.
+
+To register the application, open a terminal and use the following script:
+
+```bash
+export org=<github org>
+export prefix=<prefix>
+
+# clone the control-plane repo
+git clone https://github.com/$org/$prefix-control-plane control-plane
+cd control-plane
+
+# create workload registration file
+
+cat <<EOF >workloads/hello-world-app.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: WorkloadRegistration
+metadata:
+ name: hello-world-app
+ labels:
+ type: application
+spec:
+ workload:
+ repo: https://github.com/$org/$prefix-app-src
+ branch: main
+ path: workload/
+ workspace: kaizen-app-team
+EOF
+
+git add .
+git commit -m 'workload registration'
+git push
+```
+
+> [!NOTE]
+> For simplicity, this tutorial pushes changes directly to `main`. In practice, you'd create a pull request to submit the changes.
+
+With that in place, the application is onboarded in the control plane. But the control plane still doesn't know how to map the application deployment targets to the cluster types in the fleet.
+
+### Define application scheduling policy on Dev
+
+The Platform Team must define how the application deployment targets will be scheduled on cluster types in the `Dev` environment. To do this, submit scheduling policies for the `functional-test` and `performance-test` deployment targets with the following script:
+
+```bash
+# Switch to dev branch (representing Dev environment) in the control-plane folder
+git checkout dev
+mkdir -p scheduling/kaizen
+
+# Create a scheduling policy for the functional-test deployment target
+cat <<EOF >scheduling/kaizen/functional-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: functional-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: functional-test
+ edge: "true"
+ clusterTypeSelector:
+ labelSelector:
+ matchLabels:
+ restricted: "true"
+ edge: "true"
+EOF
+
+# Create a scheduling policy for the performance-test deployment target
+cat <<EOF >scheduling/kaizen/performance-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: performance-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: performance-test
+ edge: "false"
+ clusterTypeSelector:
+ labelSelector:
+ matchLabels:
+ size: large
+EOF
+
+git add .
+git commit -m 'application scheduling policies'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The first policy states that all deployment targets from the `kaizen-app-team` workspace that are marked with the labels `purpose: functional-test` and `edge: "true"` should be scheduled on all environment cluster types that are marked with the label `restricted: "true"`. You can treat a workspace as a group of applications produced by an application team.
+
+The second policy states that all deployment targets from the `kaizen-app-team` workspace that are marked with the labels `purpose: performance-test` and `edge: "false"` should be scheduled on all environment cluster types that are marked with the label `size: "large"`.
+
+This push to the `dev` branch triggers the scheduling process and creates a PR to the `dev` branch in the `Platform GitOps` repository:
++
+Besides `Promoted_Commit_id`, which is just tracking information for the promotion CD flow, the PR contains assignment manifests. The `functional-test` deployment target is assigned to the `drone` cluster type, and the `performance-test` deployment target is assigned to the `large` cluster type. Those manifests will land in `drone` and `large` folders that contain all assignments to these cluster types in the `Dev` environment.
+
+The `Dev` environment also includes `command-center` and `small` cluster types:
+
+ :::image type="content" source="media/tutorial-workload-management/dev-cluster-types.png" alt-text="Screenshot showing cluster types in the Dev environment.":::
+
+However, only the `drone` and `large` cluster types were selected by the scheduling policies that you defined.
+
+### Understand deployment target assignment manifests
+
+Before you continue, take a closer look at the generated assignment manifests for the `functional-test` deployment target. There are `namespace.yaml`, `config.yaml` and `reconciler.yaml` manifest files.
+
+`namespace.yaml` defines a namespace that will be created on any `drone` cluster where the `hello-world` application runs.
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+ labels:
+ deploymentTarget: hello-world-app-functional-test
+ environment: dev
+ someLabel: some-value
+ workload: hello-world-app
+ workspace: kaizen-app-team
+ name: dev-kaizen-app-team-hello-world-app-functional-test
+```
+
+`config.yaml` contains all platform configuration values available on any `drone` cluster that the application can use in the `Dev` environment.
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: platform-config
+ namespace: dev-kaizen-app-team-hello-world-app-functional-test
+data:
+ CLUSTER_NAME: Drone
+ DATABASE_URL: mysql://restricted-host:3306/mysqlrty123
+ ENVIRONMENT: Dev
+ REGION: East US
+ SOME_COMMON_ENVIRONMENT_VARIABLE: "false"
+```
+
+`reconciler.yaml` contains Flux resources that a `drone` cluster uses to fetch application manifests, prepared by the Application Team for the `functional-test` deployment target.
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1beta2
+kind: GitRepository
+metadata:
+ name: hello-world-app-functional-test
+ namespace: flux-system
+spec:
+ interval: 30s
+ ref:
+ branch: dev
+ secretRef:
+ name: repo-secret
+ url: https://github.com/<github org>/<prefix>-app-gitops
+
+---
+apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
+kind: Kustomization
+metadata:
+ name: hello-world-app-functional-test
+ namespace: flux-system
+spec:
+ interval: 30s
+ path: ./functional-test
+ prune: true
+ sourceRef:
+ kind: GitRepository
+ name: hello-world-app-functional-test
+ targetNamespace: dev-kaizen-app-team-hello-world-app-functional-test
+```
+
+> [!NOTE]
+> The `control plane` defines that the `drone` cluster type uses `Flux` to reconcile manifests from the application GitOps repositories. The `large` cluster type, on the other hand, reconciles manifests with `ArgoCD`. Therefore, the `reconciler.yaml` file for the `performance-test` deployment target looks different and contains `ArgoCD` resources.
+
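+For illustration only, here's a minimal sketch of what an ArgoCD `Application` resource for such a deployment target generally looks like. The repository URL, path, and namespace are placeholders, not the exact manifests generated by the scheduler:
+
+```yaml
+apiVersion: argoproj.io/v1alpha1
+kind: Application
+metadata:
+  name: hello-world-app-performance-test
+  namespace: argocd
+spec:
+  project: default
+  source:
+    # Application GitOps repository and folder prepared for this deployment target
+    repoURL: https://github.com/<github org>/<prefix>-app-gitops
+    targetRevision: dev
+    path: ./performance-test
+  destination:
+    server: https://kubernetes.default.svc
+    namespace: dev-kaizen-app-team-hello-world-app-performance-test
+  syncPolicy:
+    automated:
+      prune: true
+```
+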
+### Promote application to Stage
+
+Once you approve and merge the PR to the `Platform GitOps` repository, the `drone` and `large` AKS clusters that represent corresponding cluster types start fetching the assignment manifests. The `drone` cluster has [GitOps extension](conceptual-gitops-flux2.md) installed, pointing to the `Platform GitOps` repository. It reports its `compliance` status to Azure Resource Graph:
++
+The PR merging event starts a GitHub workflow `checkpromote` in the `control plane` repository. This workflow waits until all clusters with the [GitOps extension](conceptual-gitops-flux2.md) installed that are looking at the `dev` branch in the `Platform GitOps` repository are compliant with the PR commit. In this tutorial, the only such cluster is `drone`.
++
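+If you want to check the reconciliation state of the Flux configuration on the `drone` cluster yourself, one option is the Azure CLI `k8s-configuration` extension. This is a sketch; the resource group name depends on the prefix you chose when deploying the sample:
+
+```azurecli
+# List Flux configurations and their compliance state on the drone AKS cluster
+az k8s-configuration flux list \
+  --resource-group <prefix>-rg \
+  --cluster-name drone \
+  --cluster-type managedClusters \
+  --output table
+```
+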
+Once the `checkpromote` workflow is successful, it starts the `cd` workflow, which promotes the change (application registration) to the `Stage` environment. For better visibility, it also updates the git commit status in the `control plane` repository:
+
+ :::image type="content" source="media/tutorial-workload-management/dev-git-commit-status.png" alt-text="Screenshot showing git commit status deploying to dev.":::
+
+> [!NOTE]
+> If the `drone` cluster fails to reconcile the assignment manifests for any reason, the promotion flow will fail. The commit status will be marked as failed, and the application registration will not be promoted to the `Stage` environment.
+
+Next, configure a scheduling policy for the `uat-test` deployment target in the stage environment:
+
+```bash
+# Switch to stage branch (representing Stage environment) in the control-plane folder
+git checkout stage
+mkdir -p scheduling/kaizen
+
+# Create a scheduling policy for the uat-test deployment target
+cat <<EOF >scheduling/kaizen/uat-test-policy.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: SchedulingPolicy
+metadata:
+ name: uat-test-policy
+spec:
+ deploymentTargetSelector:
+ workspace: kaizen-app-team
+ labelSelector:
+ matchLabels:
+ purpose: uat-test
+ clusterTypeSelector:
+ labelSelector: {}
+EOF
+
+git add .
+git commit -m 'application scheduling policies'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The policy states that all deployment targets from the `kaizen-app-team` workspace that are marked with the label `purpose: uat-test` should be scheduled on all cluster types defined in the environment.
+
+Pushing this policy to the `stage` branch triggers the scheduling process, which creates a PR with the assignment manifests to the `Platform GitOps` repository, similar to those for the `Dev` environment.
+
+As in the case with the `Dev` environment, after reviewing and merging the PR to the `Platform GitOps` repository, the `checkpromote` workflow in the `control plane` repository waits until clusters with the [GitOps extension](conceptual-gitops-flux2.md) (`drone`) reconcile the assignment manifests.
+
+ :::image type="content" source="media/tutorial-workload-management/check-promote-to-stage.png" alt-text="Screenshot showing promotion to stage.":::
+
+On successful execution, the commit status is updated.
++
+## 3 - Application Dev Team: Build and deploy application
+
+The Application Team regularly submits pull requests to the `main` branch in the `Application Source` repository. Once a PR is merged to `main`, it starts a CI/CD workflow. In this tutorial, the workflow will be started manually.
+
+ Go to the `Application Source` repository in GitHub. On the `Actions` tab, select `Run workflow`.
++
+The workflow performs the following actions:
+
+- Builds the application Docker image and pushes it to the GitHub repository package.
+- Generates manifests for the `functional-test` and `performance-test` deployment targets. It uses configuration values from the `dev-configs` branch. The generated manifests are added to a pull request and auto-merged into the `dev` branch.
+- Generates manifests for the `uat-test` deployment target. It uses configuration values from the `stage-configs` branch.
++
+The generated manifests are added to a pull request to the `stage` branch waiting for approval:
++
+To test the application manually on the `Dev` environment before approving the PR to the `Stage` environment, first verify how the `functional-test` application instance works on the `drone` cluster:
+
+```bash
+kubectl port-forward svc/hello-world-service -n dev-kaizen-app-team-hello-world-app-functional-test 9090:9090 --context=drone
+
+# output:
+# Forwarding from 127.0.0.1:9090 -> 9090
+# Forwarding from [::1]:9090 -> 9090
+
+```
+
+While this command is running, open `localhost:9090` in your browser. You'll see the following greeting page:
++
+The next step is to check how the `performance-test` instance works on the `large` cluster:
+
+```bash
+kubectl port-forward svc/hello-world-service -n dev-kaizen-app-team-hello-world-app-performance-test 8080:8080 --context=large
+
+# output:
+# Forwarding from 127.0.0.1:8080 -> 8080
+# Forwarding from [::1]:8080 -> 8080
+
+```
+
+This time, use port `8080` and open `localhost:8080` in your browser.
+
+Once you're satisfied with the `Dev` environment, approve and merge the PR to the `Stage` environment. After that, test the `uat-test` application instance in the `Stage` environment on both clusters.
+
+Run the following command for the `drone` cluster and open `localhost:8001` in your browser:
+
+```bash
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8001:8000 --context=drone
+```
+
+Run the following command for the `large` cluster and open `localhost:8002` in your browser:
+
+ ```bash
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8002:8000 --context=large
+```
+
+> [!NOTE]
+> It may take up to three minutes to reconcile the changes from the application GitOps repository on the `large` cluster.
+
+The application instance on the `large` cluster shows the following greeting page:
+
+ :::image type="content" source="media/tutorial-workload-management/stage-greeting-page.png" alt-text="Screenshot showing the greeting page on stage.":::
+
+## 4 - Platform Team: Provide platform configurations
+
+Applications in the fleet currently use the same database in both the `Dev` and `Stage` environments. Let's change that and configure the `west-us` clusters to provide a different database URL for applications running in the `Stage` environment:
+
+```bash
+# Switch to stage branch (representing Stage environment) in the control-plane folder
+git checkout stage
+
+# Update a config map with the configurations for west-us clusters
+cat <<EOF >cluster-types/west-us/west-us-config.yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: west-us-config
+ labels:
+ platform-config: "true"
+ region: west-us
+data:
+ REGION: West US
+ DATABASE_URL: mysql://west-stage:8806/mysql2
+EOF
+
+git add .
+git commit -m 'database url configuration'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+The scheduler scans all config maps in the environment and collects values for each cluster type based on label matching. Then, it puts a `platform-config` config map in every deployment target folder in the `Platform GitOps` repository. The `platform-config` config map contains all of the platform configuration values that the workload can use on this cluster type in this environment.
+
+In a few seconds, a new PR to the `stage` branch in the `Platform GitOps` repository appears:
++
+Approve the PR and merge it.
+
+The `large` cluster is handled by ArgoCD, which, by default, is configured to reconcile every three minutes. Unlike clusters such as `drone` that have the [GitOps extension](conceptual-gitops-flux2.md) installed, this cluster doesn't report its compliance state to Azure. However, you can still monitor the reconciliation state on the cluster with the ArgoCD UI.
+
+To access the ArgoCD UI on the `large` cluster, run the following command:
+
+```bash
+# Get ArgoCD username and password
+echo "ArgoCD username: admin, password: $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" --context large| base64 -d)"
+# output:
+# ArgoCD username: admin, password: eCllTELZdIZfApPL
+
+kubectl port-forward svc/argocd-server 8080:80 -n argocd --context large
+```
+
+Next, open `localhost:8080` in your browser and provide the username and password printed by the script. You'll see a web page similar to this one:
+
+ :::image type="content" source="media/tutorial-workload-management/argocd-ui.png" alt-text="Screenshot showing the Argo CD user interface web page." lightbox="media/tutorial-workload-management/argocd-ui.png":::
+
+Select the `stage` tile to see more details on the reconciliation state from the `stage` branch to this cluster. You can select the `SYNC` buttons to force the reconciliation and speed up the process.
+
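+To confirm that the new configuration has arrived, you can optionally inspect the config map in the workload namespace directly (a spot check, not a required step):
+
+```bash
+# Show the platform configuration the uat-test deployment target sees on the large cluster
+kubectl get configmap platform-config \
+  -n stage-kaizen-app-team-hello-world-app-uat-test \
+  -o yaml --context=large
+```
+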
+Once the new configuration has arrived on the cluster, check the `uat-test` application instance at `localhost:8002` after running the following commands:
+
+```bash
+kubectl rollout restart deployment hello-world-deployment -n stage-kaizen-app-team-hello-world-app-uat-test --context=large
+kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8002:8000 --context=large
+```
+
+You'll see the updated database URL:
++
+## 5 - Platform Team: Add cluster type to environment
+
+Currently, only the `drone` and `large` cluster types are included in the `Stage` environment. Let's add the `small` cluster type to `Stage` as well. Even though there's no physical cluster representing this cluster type, you can see how the scheduler reacts to this change.
+
+```bash
+# Switch to stage branch (representing Stage environment) in the control-plane folder
+git checkout stage
+
+# Add "small" cluster type in west-us region
+mkdir -p cluster-types/west-us/small
+cat <<EOF >cluster-types/west-us/small/small-cluster-type.yaml
+apiVersion: scheduler.kalypso.io/v1alpha1
+kind: ClusterType
+metadata:
+ name: small
+ labels:
+ region: west-us
+ size: small
+spec:
+ reconciler: argocd
+ namespaceService: default
+EOF
+
+git add .
+git commit -m 'add new cluster type'
+git config pull.rebase false
+git pull --no-edit
+git push
+```
+
+In a few seconds, the scheduler submits a PR to the `Platform GitOps` repository. According to the `uat-test-policy` that you created, it assigns the `uat-test` deployment target to the new cluster type, as it's supposed to work on all available cluster types in the environment.
++
+## Clean up resources
+When no longer needed, delete the resources that you created for this tutorial. To do so, run the following command:
+
+```bash
+# In kalypso folder
+./deploy.sh -d -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2>
+```
+
+## Next steps
+
+In this tutorial, you have performed tasks for a few of the most common workload management scenarios in a multi-cluster Kubernetes environment. There are many other scenarios you may want to explore. Continue to use the sample and see how you can implement use cases that are most common in your daily activities.
+
+To gain a deeper understanding of the underlying concepts and mechanics, refer to the following resources:
+
+> [!div class="nextstepaction"]
+> - [Workload Management in Multi-cluster environment with GitOps](https://github.com/microsoft/kalypso)
+
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
### Fixed -- The extension service now correctly restarts when the Azure Connected Machine agent is being upgraded by Update Management Center
+- The extension service now correctly restarts when the Azure Connected Machine agent is upgraded by Update Management Center
- Resolved issues with the hybrid connectivity component that could result in the "himds" service crashing, the server showing as "disconnected" in Azure, and connectivity issues with Windows Admin Center and SSH - Improved handling of resource move scenarios that could impact Windows Admin Center and SSH connectivity - Improved reliability when changing the [agent configuration mode](security-overview.md#local-agent-security-controls) from "monitor" mode to "full" mode. - Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Sentinel DNS extension to improve log collection reliability-- Tenant IDs are now validated during onboarding for correctness
+- Tenant IDs are better validated when connecting the server
## Version 1.26 - January 2023 > [!NOTE]
-> Version 1.26 is only available for Linux operating systems. The most recent Windows agent version is 1.25.
+> Version 1.26 is only available for Linux operating systems.
### Fixed
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 11/18/2022 Last updated : 01/25/2023 # Connected Machine agent prerequisites
-This topic describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have additional requirements.
+This topic describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have more requirements.
## Supported environments
Azure Arc-enabled servers support the installation of the Connected Machine agen
* Azure Stack HCI * Other cloud environments
-Azure Arc-enabled servers do not support installing the agent on virtual machines running in Azure, or on virtual machines running on Azure Stack Hub or Azure Stack Edge, as they are already modeled as Azure VMs and able to be managed directly in Azure.
+You shouldn't install Azure Arc on virtual machines hosted in Azure, Azure Stack Hub, or Azure Stack Edge, as they already have similar capabilities. You can, however, [use an Azure VM to simulate an on-premises environment](plan-evaluate-on-azure-virtual-machine.md) for testing purposes only.
+
+Take extra care when using Azure Arc on systems that are:
+
+* Cloned
+* Restored from backup as a second instance of the server
+* Used to create a "golden image" from which other virtual machines are created
+
+If two agents use the same configuration, you will encounter inconsistent behaviors when both agents try to act as one Azure resource. The best practice for these situations is to use an automation tool or script to onboard the server to Azure Arc after it has been cloned, restored from backup, or created from a golden image.
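
For example, a minimal sketch of onboarding a cloned or re-imaged server with a script, using the `azcmagent` CLI and a service principal (all values shown are placeholders):

```bash
# Run on the cloned server after it has been given a unique hostname
azcmagent connect \
  --service-principal-id "<app-id>" \
  --service-principal-secret "<app-secret>" \
  --tenant-id "<tenant-id>" \
  --subscription-id "<subscription-id>" \
  --resource-group "<resource-group-name>" \
  --location "<azure-region>"
```
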
> [!NOTE]
-> For additional information on using Arc-enabled servers in VMware environments, see the [VMware FAQ](vmware-faq.md).
+> For additional information on using Azure Arc-enabled servers in VMware environments, see the [VMware FAQ](vmware-faq.md).
## Supported operating systems
-The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent. Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, are not supported operating environments.
+Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. Azure Arc does not run on x86 (32-bit) or ARM-based architectures.
* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022 * Both Desktop and Server Core experiences are supported
- * Azure Editions are supported when running as a virtual machine on Azure Stack HCI
+ * Azure Editions are supported on Azure Stack HCI
+* Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance))
* Windows IoT Enterprise * Azure Stack HCI * Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS
The following versions of the Windows and Linux operating system are officially
* Amazon Linux 2 * Oracle Linux 7 and 8
-> [!NOTE]
-> On Linux, Azure Arc-enabled servers install several daemon processes. We only support using systemd to manage these processes. In some environments, systemd may not be installed or available, in which case Arc-enabled servers are not supported, even if the distribution is otherwise supported. These environments include **Windows Subsystem for Linux** (WSL) and most container-based systems, such as Kubernetes or Docker. The Azure Connected Machine agent can be installed on the node that runs the containers but not inside the containers themselves.
+### Client operating system guidance
-> [!WARNING]
-> If the Linux hostname or Windows computer name uses a reserved word or trademark, attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
+The Azure Arc service and Azure Connected Machine Agent are supported on Windows 10 and 11 client operating systems only when using those computers in a server-like environment. That is, the computer should always be:
-> [!NOTE]
-> While Azure Arc-enabled servers support Amazon Linux, the following features are not supported by this distribution:
->
-> * The Dependency agent used by Azure Monitor VM insights
-> * Azure Automation Update Management
+* Connected to the internet
+* Connected to a power source
+* Powered on
+
+For example, a computer running Windows 11 that's responsible for digital signage, point-of-sale solutions, and general back office management tasks is a good candidate for Azure Arc. End-user productivity machines, such as laptops, that may go offline for long periods shouldn't use Azure Arc; consider [Microsoft Intune](/mem/intune) or [Microsoft Endpoint Configuration Manager](/mem/configmgr) for those devices instead.
+
+### Short-lived servers and virtual desktop infrastructure
+
+Microsoft doesn't recommend running Azure Arc on short-lived (ephemeral) servers or virtual desktop infrastructure (VDI) VMs. Azure Arc is designed for long-term management of servers and isn't optimized for scenarios where you are regularly creating and deleting servers. For example, Azure Arc doesn't know if the agent is offline due to planned system maintenance or if the VM was deleted, so it won't automatically clean up server resources that stopped sending heartbeats. As a result, you could encounter a conflict if you re-create the VM with the same name and there's an existing Azure Arc resource with the same name.
+
+[Azure Virtual Desktop on Azure Stack HCI](../../virtual-desktop/azure-stack-hci-overview.md) doesn't use short-lived VMs and supports running Azure Arc in the desktop VMs.
## Software requirements Windows operating systems:
-* NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
-* Windows PowerShell 4.0 or later is required. No action is required for Windows Server 2012 R2 and above. For Windows Server 2008 R2 SP1, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).
+* .NET Framework 4.6 or later. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
+* Windows PowerShell 4.0 or later (already included with Windows Server 2012 R2 and later). For Windows Server 2008 R2 SP1, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616).
Linux operating systems:
Linux operating systems:
## Required permissions
-The following Azure built-in roles are required for different aspects of managing connected machines:
+You'll need the following Azure built-in roles for different aspects of managing connected machines:
-* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group in which the machines will be managed.
+* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group where you're managing the servers.
* To read, modify, and delete a machine, you must have the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) role for the resource group.
-* To select a resource group from the drop-down list when using the **Generate script** method, as well as the permissions needed to onboard machines, listed above, you must additionally have the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role which includes **Reader** access).
+* To select a resource group from the drop-down list when using the **Generate script** method, you'll also need the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role that includes **Reader** access).
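
For example, a sketch of granting the onboarding role at resource-group scope with the Azure CLI (the assignee and scope values are placeholders):

```bash
# Grant the Azure Connected Machine Onboarding role on the target resource group
az role assignment create \
  --assignee "<user-or-service-principal-object-id>" \
  --role "Azure Connected Machine Onboarding" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```
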
## Azure subscription and service limits There are no limits to the number of Azure Arc-enabled servers you can register in any single resource group, subscription or tenant.
-Each Azure Arc-enabled server is associated with an Azure Active Directory object and will count against your directory quota. See [Azure AD service limits and restrictions](../../active-directory/enterprise-users/directory-service-limits-restrictions.md) for information about the maximum number of objects you can have in an Azure AD directory.
+Each Azure Arc-enabled server is associated with an Azure Active Directory object and counts against your directory quota. See [Azure AD service limits and restrictions](../../active-directory/enterprise-users/directory-service-limits-restrictions.md) for information about the maximum number of objects you can have in an Azure AD directory.
## Azure resource providers
To use Azure Arc-enabled servers, the following [Azure resource providers](../..
* **Microsoft.HybridConnectivity** * **Microsoft.AzureArcData** (if you plan to Arc-enable SQL Servers)
-If these resource providers are not already registered, you can register them using the following commands:
+You can register the resource providers using the following commands:
Azure PowerShell:
Set-AzContext -SubscriptionId [subscription you want to onboard]
Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity
+Register-AzResourceProvider -ProviderNamespace Microsoft.AzureArcData
``` Azure CLI:
az account set --subscription "{Your Subscription Name}"
az provider register --namespace 'Microsoft.HybridCompute' az provider register --namespace 'Microsoft.GuestConfiguration' az provider register --namespace 'Microsoft.HybridConnectivity'
+az provider register --namespace 'Microsoft.AzureArcData'
``` You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal).
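
To confirm that registration has finished, you can check each provider's state. A quick sketch with the Azure CLI:

```bash
# "Registered" indicates the resource provider is ready to use
az provider show --namespace Microsoft.HybridCompute --query registrationState --output tsv
az provider show --namespace Microsoft.GuestConfiguration --query registrationState --output tsv
az provider show --namespace Microsoft.HybridConnectivity --query registrationState --output tsv
# Only needed if you plan to Arc-enable SQL Servers:
az provider show --namespace Microsoft.AzureArcData --query registrationState --output tsv
```
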
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
public HttpResponseMessage httpStartAndWait(
try { String timeoutString = req.getQueryParameters().get("timeout"); Integer timeoutInSeconds = Integer.parseInt(timeoutString);
- OrchestrationMetadata orchestration = client.waitForInstanceStart(
+ OrchestrationMetadata orchestration = client.waitForInstanceCompletion(
instanceId, Duration.ofSeconds(timeoutInSeconds), true /* getInputsAndOutputs */);
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 02/13/2023 Last updated : 02/23/2023 # Compare Azure Government and global Azure
The following Azure Database for PostgreSQL **features aren't currently availabl
- Advanced Threat Protection - Backup with long-term retention
-### [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview)
-
-The following Azure SQL Managed Instance **features aren't currently available** in Azure Government:
--- Long-term backup retention- ## Developer tools This section outlines variations and considerations when using Developer tools in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=load-testing,app-configuration,devtest-lab,lab-services,azure-devops&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
azure-maps How To Create Data Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md
To create a data registry:
1. Once you have the body of your HTTP request ready, execute the following **HTTP PUT request**: ```http
- https://us.atlas.microsoft.com/dataRegistries/{udid}?api-version=2022-12-01-preview&subscription-key={Azure-Maps-Subscription-key}
+ https://us.atlas.microsoft.com/dataRegistries/{udid}?api-version=2022-12-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
```
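
If you're running the request from a shell rather than a REST client, a curl sketch might look like the following. The `registry-body.json` file name is a placeholder for whatever request body you prepared in the previous step.

```bash
# PUT the data registry definition (replace the udid and subscription key placeholders)
curl -X PUT \
  "https://us.atlas.microsoft.com/dataRegistries/{udid}?api-version=2022-12-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}" \
  -H "Content-Type: application/json" \
  -d @registry-body.json
```
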
Once you've uploaded one or more files to an Azure storage account, created and
Use the `udid` to get the content of a file registered in an Azure Maps account: ```http
-https://us.atlas.microsoft.com/dataRegistries/{udid}/content?api-version=2022-12-01-preview&subscription-key={Azure-Maps-Subscription-key}
+https://us.atlas.microsoft.com/dataRegistries/{udid}/content?api-version=2022-12-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}
``` The contents of the file will appear in the body of the response. For example, a text based GeoJSON file will appear similar to the following example:
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
The Azure Maps Creator [wayfinding service][wayfinding service] allows you to na
> > - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services][how to manage access to creator services]. > - In the URL examples in this article you will need to:
-> - Replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+> - Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
> - Replace `{datasetId`} with your `datasetId`. For more information, see the [Check the dataset creation status][check dataset creation status] section of the *Use Creator to create indoor maps* tutorial. ## Create a routeset
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Azure Maps Creator enables users to import their indoor map data in GeoJSON form
>[!IMPORTANT] > > - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services).
-> - In the URL examples in this article you will need to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+> - In the URL examples in this article you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
## Create dataset using the GeoJSON package
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
To get a static image with custom pins and labels:
4. Select the **GET** HTTP method.
-5. Enter the following URL (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```HTTP https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FAzureMapsCodeSamples%2Fmaster%2FAzureMapsCodeSamples%2FCommon%2Fimages%2Ficons%2Fylw-pushpin.png
To upload pins and path data:
4. Select the **POST** HTTP method.
-5. Enter the following URL (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson
To check the status of the data upload and retrieve its unique ID (`udid`):
4. Select the **GET** HTTP method.
-5. Enter the `status URL` you copied in [Upload pins and path data](#upload-pins-and-path-data). The request should look like the following URL (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `status URL` you copied in [Upload pins and path data](#upload-pins-and-path-data). The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```HTTP https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
To render the uploaded pins and path data on the map:
4. Select the **GET** HTTP method.
-5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `udid` with the `udid` of the uploaded data):
+5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded data):
```HTTP https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12&center=-73.96682739257812%2C40.78119135317995&pins=default|la-35+50|ls12|lc003C62|co9B2F15||'Times Square'-73.98516297340393 40.758781646381024|'Central Park'-73.96682739257812 40.78119135317995&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.30||udid-{udId}
To render a polygon with color and opacity:
4. Select the **GET** HTTP method.
-5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500&center=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063
To render a circle and pushpins with custom labels:
4. Select the **GET** HTTP method.
-5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700&center=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key}
azure-maps How To Request Elevation Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-elevation-data.md
To request elevation data in raster tile format using the Postman app:
``` >[!Important]
- >For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+ >For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
5. Select the **Send** button.
To create the request:
3. Enter a **Request name** for the request.
-4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```http https://atlas.microsoft.com/elevation/point/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&points=-73.998672,40.714728|150.644,-34.397
To create the request:
} ```
-6. Now, we'll call the [Post Data for Points API](/rest/api/maps/elevation/postdataforpoints) to get elevation data for the same two points. On the **Builder** tab, select the **POST** HTTP method and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+6. Now, we'll call the [Post Data for Points API](/rest/api/maps/elevation/postdataforpoints) to get elevation data for the same two points. On the **Builder** tab, select the **POST** HTTP method and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```http https://atlas.microsoft.com/elevation/point/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0
To create the request:
3. Enter a **Request name**.
-4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```http https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&lines=-73.998672,40.714728|150.644,-34.397&samples=5
To create the request:
} ```
-9. Now, we'll call the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. On the **Builder** tab, select the **POST** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+9. Now, we'll call the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. On the **Builder** tab, select the **POST** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```http https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&samples=5
To create the request:
3. Enter a **Request name**.
-4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```http https://atlas.microsoft.com/elevation/lattice/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&bounds=-121.66853362143818, 46.84646479863713,-121.65853362143818, 46.85646479863713&rows=2&columns=3
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md
In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weat
1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
```http https://atlas.microsoft.com/weather/currentConditions/json?api-version=1.0&query=47.60357,-122.32945&subscription-key={Your-Azure-Maps-Subscription-key}
In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/w
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
```http https://atlas.microsoft.com/weather/severe/alerts/json?api-version=1.0&query=41.161079,-104.805450&subscription-key={Your-Azure-Maps-Subscription-key}
In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
```http https://atlas.microsoft.com/weather/forecast/daily/json?api-version=1.0&query=47.60357,-122.32945&duration=5&subscription-key={Your-Azure-Maps-Subscription-key}
In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
```http https://atlas.microsoft.com/weather/forecast/hourly/json?api-version=1.0&query=47.60357,-122.32945&duration=12&subscription-key={Your-Azure-Maps-Subscription-key}
In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
```http https://atlas.microsoft.com/weather/forecast/minute/json?api-version=1.0&query=47.60357,-122.32945&interval=15&subscription-key={Your-Azure-Maps-Subscription-key}
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md
In this example, we'll use the Azure Maps [Get Search Address API](/rest/api/map
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. In this request, we're searching for a specific address: `400 Braod St, Seattle, WA 98109`. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. In this request, we're searching for a specific address: `400 Broad St, Seattle, WA 98109`. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
```http https://atlas.microsoft.com/search/address/json?&subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&language=en-US&query=400 Broad St, Seattle, WA 98109
In this example, we'll use Fuzzy Search to search the entire world for `pizza`.
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
```http https://atlas.microsoft.com/search/fuzzy/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza
In this example, we'll be making reverse searches using a few of the optional pa
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. The request should look like the following URL:
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. The request should look like the following URL:
```http https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700&number=1
In this example, we'll search for a cross street based on the coordinates of an
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. The request should look like the following URL:
+2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. The request should look like the following URL:
```http https://atlas.microsoft.com/search/address/reverse/crossstreet/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
In the next section, we'll set the occupancy *state* of office `UNIT26` to `true
3. Enter a **Request name** for the request, such as *POST Data Upload*.
-4. Enter the following URL to the [Feature Update States API](/rest/api/maps/v2/feature-state/update-states) (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `statesetId` with the `statesetId`):
+4. Enter the following URL to the [Feature Update States API](/rest/api/maps/v2/feature-state/update-states) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `statesetId` with the `statesetId`):
```http https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
To add the JavaScript:
```
-3. Add the following initialization code. Make sure to replace `<Your Azure Maps Key>` with your primary subscription key.
+3. Add the following initialization code. Make sure to replace `<Your Azure Maps Key>` with your Azure Maps subscription key.
> [!Tip] > When you use pop-up windows, it's best to create a single `Popup` instance and reuse the instance by updating its content and position. For every `Popup`instance you add to your code, multiple DOM elements are added to the page. The more DOM elements there are on a page, the more things the browser has to keep track of. If there are too many items, the browser might become slow.
azure-maps Tutorial Creator Feature Stateset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-feature-stateset.md
This tutorial uses the [Postman](https://www.postman.com/) application, but you
> > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). > * In the URL examples in this article you will need to replace:
-> * `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+> * `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
> * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](tutorial-creator-indoor-maps.md#check-the-dataset-creation-status) section of the *Use Creator to create indoor maps* tutorial ## Create a feature stateset
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
This tutorial uses the [Postman](https://www.postman.com/) application, but you
>[!IMPORTANT] > > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services).
-> * In the URL examples in this article you will need to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+> * In the URL examples in this article you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
## Upload a Drawing package
To convert a drawing package:
4. Select the **POST** HTTP method.
-5. Enter the following URL to the [Conversion Service](/rest/api/maps/v2/conversion/convert) (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `udid` with the `udid` of the uploaded package):
+5. Enter the following URL to the [Conversion Service](/rest/api/maps/v2/conversion/convert) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded package):
```http https://us.atlas.microsoft.com/conversions?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&udid={udid}&inputType=DWG&outputOntology=facility-2.0
azure-maps Tutorial Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-wfs.md
This tutorial uses the [Postman](https://www.postman.com/) application, but you
> > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). > * In the URL examples in this article you will need to replace:
-> * `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+> * `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
> * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](tutorial-creator-indoor-maps.md#check-the-dataset-creation-status) section of the *Use Creator to create indoor maps* tutorial ## Query for feature collections
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
To upload the geofencing GeoJSON data:
4. Select the **POST** HTTP method.
-5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson
To check the status of the GeoJSON data and retrieve its unique ID (`udid`):
4. Select the **GET** HTTP method.
-5. Enter the `status URL` you copied in [Upload Geofencing GeoJSON data](#upload-geofencing-geojson-data). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `status URL` you copied in [Upload Geofencing GeoJSON data](#upload-geofencing-geojson-data). The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```HTTP https://us.atlas.microsoft.com/mapData/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
To retrieve content metadata:
4. Select the **GET** HTTP method.
-5. Enter the `resource Location URL` you copied in [Check the GeoJSON data upload status](#check-the-geojson-data-upload-status). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key):
+5. Enter the `resource Location URL` you copied in [Check the GeoJSON data upload status](#check-the-geojson-data-upload-status). The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key):
```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
Each of the following sections makes API requests by using the five different lo
4. Select the **GET** HTTP method.
-5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
+5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit
In the preceding GeoJSON response, the negative distance from the main site geof
4. Select the **GET** HTTP method.
-5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
+5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udId={udId}&lat=47.63800&lon=-122.132531&searchBuffer=5&isAsync=True&mode=EnterAndExit
In the preceding GeoJSON response, the equipment has remained in the main site g
4. Select the **GET** HTTP method.
-5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
+5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63810783315048&lon=-122.13336020708084&searchBuffer=5&isAsync=True&mode=EnterAndExit
In the preceding GeoJSON response, the equipment has remained in the main site g
4. Select the **GET** HTTP method.
-5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
+5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.637988&userTime=2023-01-16&lon=-122.1338344&searchBuffer=5&isAsync=True&mode=EnterAndExit
In the preceding GeoJSON response, the equipment has remained in the main site g
4. Select the **GET** HTTP method.
-5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
+5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)).
```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63799&lon=-122.134505&searchBuffer=5&isAsync=True&mode=EnterAndExit
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md
Follow these steps to upload the geofence by using the Azure Maps Data Upload AP
1. Open the Postman app, select **New** again. In the **Create New** window, select **HTTP Request**, and enter a request name for the request.
-2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API. Make sure to replace `{Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key.
+2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API. Make sure to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
Previously updated : 2/21/2023 Last updated : 2/22/2023 # Customer intent: As an IT manager, I want to understand how I should move from using legacy agents to Azure Monitor Agent.
In addition to consolidating and improving upon legacy Log Analytics agents, Azu
3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly.
-4. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents** as applicable
- - If you have migrated to Azure Monitor agent for selected features/solutions and you need to continue using the legacy Log Analytics for others, you can selectively disable or "turn off" legacy agent collection by editing the Log Analytics workspace configurations directly
- - If you've migrated to Azure Monitor agent for all your requirements, you may [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent.
- - Don't uninstall the legacy agent if you need to use it for uploading data to System Center Operations Manager.
+4. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents**:
+ 1. If you have migrated to Azure Monitor agent for selected features/solutions and you need to continue using the legacy Log Analytics agent for others, you can selectively disable or "turn off" legacy agent collection by editing the Log Analytics workspace configurations directly.
+ 2. If you've migrated to Azure Monitor agent for all your requirements, you may [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent.
+ 3. Don't uninstall the legacy agent if you need to use it for uploading data to System Center Operations Manager.
<sup>1</sup> The DCR generator only converts the configurations for Windows event logs, Linux syslog and performance counters. Support for additional features and solutions will be available soon
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md
# Release annotations for Application Insights
-Annotations show where you deployed a new build, or other significant events. Annotations make it easy to see whether your changes had any effect on your application's performance. They can be automatically created by the [Azure Pipelines](/azure/devops/pipelines/tasks/) build system. You can also create annotations to flag any event you like by creating them from PowerShell.
+Annotations show where you deployed a new build or other significant events. Annotations make it easy to see whether your changes had any effect on your application's performance. They can be automatically created by the [Azure Pipelines](/azure/devops/pipelines/tasks/) build system. You can also create annotations to flag any event you want by creating them from PowerShell.
## Release annotations with Azure Pipelines build
Release annotations are a feature of the cloud-based Azure Pipelines service of
If all the following criteria are met, the deployment task creates the release annotation automatically: -- The resource you're deploying to is linked to Application Insights (via the `APPINSIGHTS_INSTRUMENTATIONKEY` app setting).-- The Application Insights resource is in the same subscription as the resource you're deploying to.
+- The resource to which you're deploying is linked to Application Insights via the `APPINSIGHTS_INSTRUMENTATIONKEY` app setting.
+- The Application Insights resource is in the same subscription as the resource to which you're deploying.
- You're using one of the following Azure DevOps pipeline tasks: | Task code | Task name | Versions |
If all the following criteria are met, the deployment task creates the release a
| AzureWebApp | Azure Web App | Any | > [!NOTE]
-> If youΓÇÖre still using the Application Insights annotation deployment task, you should delete it.
+> If you're still using the Application Insights annotation deployment task, you should delete it.
### Configure release annotations
-If you can't use one the deployment tasks in the previous section, then you need to add an inline script task in your deployment pipeline.
+If you can't use one of the deployment tasks in the previous section, you need to add an inline script task in your deployment pipeline.
-1. Navigate to a new or existing pipeline and select a task.
- :::image type="content" source="./media/annotations/task.png" alt-text="Screenshot of task in stages selected." lightbox="./media/annotations/task.png":::
+1. Go to a new or existing pipeline and select a task.
+
+ :::image type="content" source="./media/annotations/task.png" alt-text="Screenshot that shows a task selected under Stages." lightbox="./media/annotations/task.png":::
1. Add a new task and select **Azure CLI**.
- :::image type="content" source="./media/annotations/add-azure-cli.png" alt-text="Screenshot of adding a new task and selecting Azure CLI." lightbox="./media/annotations/add-azure-cli.png":::
-1. Specify the relevant Azure subscription. Change the **Script Type** to *PowerShell* and **Script Location** to *Inline*.
-1. Add the [PowerShell script from step 2 in the next section](#create-release-annotations-with-azure-cli) to **Inline Script**.
-1. Add the arguments below, replacing the angle-bracketed placeholders with your values to **Script Arguments**. The -releaseProperties are optional.
+
+ :::image type="content" source="./media/annotations/add-azure-cli.png" alt-text="Screenshot that shows adding a new task and selecting Azure CLI." lightbox="./media/annotations/add-azure-cli.png":::
+1. Specify the relevant Azure subscription. Change **Script Type** to **PowerShell** and **Script Location** to **Inline**.
+1. Add the [PowerShell script from step 2 in the next section](#create-release-annotations-with-the-azure-cli) to **Inline Script**.
+1. Add the following arguments to **Script Arguments**. Replace the angle-bracketed placeholders with your values. The `-releaseProperties` argument is optional.
```powershell -aiResourceId "<aiResourceId>" `
If you can't use one the deployment tasks in the previous section, then you need
:::image type="content" source="./media/annotations/inline-script.png" alt-text="Screenshot of Azure CLI task settings with Script Type, Script Location, Inline Script, and Script Arguments highlighted." lightbox="./media/annotations/inline-script.png":::
- Below is an example of metadata you can set in the optional releaseProperties argument using [build](/azure/devops/pipelines/build/variables#build-variables-devops-services) and [release](/azure/devops/pipelines/release/variables#default-variablesrelease) variables.
-
+ The following example shows metadata you can set in the optional `releaseProperties` argument by using [build](/azure/devops/pipelines/build/variables#build-variables-devops-services) and [release](/azure/devops/pipelines/release/variables#default-variablesrelease) variables.
```powershell -releaseProperties @{
If you can't use one the deployment tasks in the previous section, then you need
"TeamFoundationCollectionUri"="$(System.TeamFoundationCollectionUri)" } ```
-1. Save.
+1. Select **Save**.
-## Create release annotations with Azure CLI
+## Create release annotations with the Azure CLI
-You can use the CreateReleaseAnnotation PowerShell script to create annotations from any process you like, without using Azure DevOps.
+You can use the `CreateReleaseAnnotation` PowerShell script to create annotations from any process you want without using Azure DevOps.
-1. Sign into [Azure CLI](/cli/azure/authenticate-azure-cli).
+1. Sign in to the [Azure CLI](/cli/azure/authenticate-azure-cli).
-2. Make a local copy of the script below and call it CreateReleaseAnnotation.ps1.
+1. Make a local copy of the following script and call it `CreateReleaseAnnotation.ps1`.
```powershell param(
You can use the CreateReleaseAnnotation PowerShell script to create annotations
# Invoke-AzRestMethod -Path "$aiResourceId/Annotations?api-version=2015-05-01" -Method PUT -Payload $body ```
- [!NOTE]
- Your annotations must have **Category** set to **Deployment** in order to be displayed in the Azure Portal.
+ > [!NOTE]
+ > Your annotations must have **Category** set to **Deployment** to appear in the Azure portal.
-3. Call the PowerShell script with the following code, replacing the angle-bracketed placeholders with your values. The -releaseProperties are optional.
+1. Call the PowerShell script with the following code. Replace the angle-bracketed placeholders with your values. The `-releaseProperties` argument is optional.
```powershell .\CreateReleaseAnnotation.ps1 `
You can use the CreateReleaseAnnotation PowerShell script to create annotations
"TriggerBy"="<Your name>" } ```
-|Argument | Definition | Note|
-|--|--|--|
-|aiResourceId | The Resource ID to the target Application Insights resource. | Example:<br> /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRGName/providers/microsoft.insights/components/MyResourceName|
-|releaseName | The name to give the created release annotation. | |
-|releaseProperties | Used to attach custom metadata to the annotation. | Optional|
--
+ |Argument | Definition | Note|
+ |--|--|--|
+ |`aiResourceId` | The resource ID of the target Application Insights resource. | Example:<br> /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRGName/providers/microsoft.insights/components/MyResourceName|
+ |`releaseName` | The name to give the created release annotation. | |
+ |`releaseProperties` | Used to attach custom metadata to the annotation. | Optional|
+
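For illustration, here's a minimal sketch of a complete invocation. It assumes the script is saved in the current directory and that you're already signed in; the resource ID, release name, and property values are placeholders that you replace with your own.

```powershell
# A hedged sketch of calling the script; every value below is a placeholder.
.\CreateReleaseAnnotation.ps1 `
    -aiResourceId "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/microsoft.insights/components/<appInsightsResourceName>" `
    -releaseName "Release 1.2.3" `
    -releaseProperties @{
        "ReleaseDescription" = "<a description of the release>";
        "TriggerBy"          = "<your name>"
    }
```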
## View annotations > [!NOTE]
-> Release annotations are not currently available in the Metrics pane of Application Insights
+> Release annotations aren't currently available in the **Metrics** pane of Application Insights.
-Now, whenever you use the release template to deploy a new release, an annotation is sent to Application Insights. The annotations can be viewed in the following locations:
+Whenever you use the release template to deploy a new release, an annotation is sent to Application Insights. You can view annotations in the following locations:
-- Performance
+- **Performance:**
- :::image type="content" source="./media/annotations/performance.png" alt-text="Screenshot of the Performance tab with a release annotation selected(blue arrow) to show the Release Properties tab." lightbox="./media/annotations/performance.png":::
+ :::image type="content" source="./media/annotations/performance.png" alt-text="Screenshot that shows the Performance tab with a release annotation selected to show the Release Properties tab." lightbox="./media/annotations/performance.png":::
-- Failures
+- **Failures:**
- :::image type="content" source="./media/annotations/failures.png" alt-text="Screenshot of the Failures tab with a release annotation (blue arrow) selected to show the Release Properties tab." lightbox="./media/annotations/failures.png":::
-- Usage
+ :::image type="content" source="./media/annotations/failures.png" alt-text="Screenshot that shows the Failures tab with a release annotation selected to show the Release Properties tab." lightbox="./media/annotations/failures.png":::
+- **Usage:**
- :::image type="content" source="./media/annotations/usage-pane.png" alt-text="Screenshot of the Users tab bar with release annotations selected. Release annotations appear as blue arrows above the chart indicating the moment in time that a release occurred." lightbox="./media/annotations/usage-pane.png":::
+ :::image type="content" source="./media/annotations/usage-pane.png" alt-text="Screenshot that shows the Users tab bar with release annotations selected. Release annotations appear as blue arrows above the chart indicating the moment in time that a release occurred." lightbox="./media/annotations/usage-pane.png":::
-- Workbooks
+- **Workbooks:**
- In any log-based workbook query where the visualization displays time along the x-axis.
+ In any log-based workbook query where the visualization displays time along the x-axis:
- :::image type="content" source="./media/annotations/workbooks-annotations.png" alt-text="Screenshot of workbooks pane with time series log-based query with annotations displayed." lightbox="./media/annotations/workbooks-annotations.png":::
+ :::image type="content" source="./media/annotations/workbooks-annotations.png" alt-text="Screenshot that shows the Workbooks pane with a time series log-based query with annotations displayed." lightbox="./media/annotations/workbooks-annotations.png":::
- To enable annotations in your workbook, go to **Advanced Settings** and select **Show annotations**.
+To enable annotations in your workbook, go to **Advanced Settings** and select **Show annotations**.
- :::image type="content" source="./media/annotations/workbook-show-annotations.png" alt-text="Screenshot of Advanced Settings menu with the show annotations checkbox highlighted.":::
Select any annotation marker to open details about the release, including requestor, source control branch, release pipeline, and environment.
-## Release annotations using API keys
+## Release annotations by using API keys
Release annotations are a feature of the cloud-based Azure Pipelines service of Azure DevOps. > [!IMPORTANT]
-> Annotations using API keys is deprecated. We recommend using [Azure CLI](#create-release-annotations-with-azure-cli) instead.
+> Creating annotations by using API keys is deprecated. We recommend that you use the [Azure CLI](#create-release-annotations-with-the-azure-cli) instead.
### Install the annotations extension (one time)
-To be able to create release annotations, you'll need to install one of the many Azure DevOps extensions available in the Visual Studio Marketplace.
+To create release annotations, install one of the many Azure DevOps extensions available in Visual Studio Marketplace.
1. Sign in to your [Azure DevOps](https://azure.microsoft.com/services/devops/) project.
-
-1. On the Visual Studio Marketplace [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization, and then select **Install** to add the extension to your Azure DevOps organization.
-
- ![Select an Azure DevOps organization and then select Install.](./media/annotations/1-install.png)
-
+
+1. On the **Visual Studio Marketplace** [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization. Select **Install** to add the extension to your Azure DevOps organization.
+
+ ![Screenshot that shows selecting an Azure DevOps organization and selecting Install.](./media/annotations/1-install.png)
+ You only need to install the extension once for your Azure DevOps organization. You can now configure release annotations for any project in your organization.
-### Configure release annotations using API keys
+### Configure release annotations by using API keys
Create a separate API key for each of your Azure Pipelines release templates. 1. Sign in to the [Azure portal](https://portal.azure.com) and open the Application Insights resource that monitors your application. Or if you don't have one, [create a new Application Insights resource](create-workspace-resource.md).
-
+ 1. Open the **API Access** tab and copy the **Application Insights ID**.
-
- ![Under API Access, copy the Application ID.](./media/annotations/2-app-id.png)
+
+ ![Screenshot that shows under API Access, copying the Application ID.](./media/annotations/2-app-id.png)
1. In a separate browser window, open or create the release template that manages your Azure Pipelines deployments.+
+1. Select **Add task** and then select the **Application Insights Release Annotation** task from the menu.
-1. Select **Add task**, and then select the **Application Insights Release Annotation** task from the menu.
-
- ![Select Add Task and select Application Insights Release Annotation.](./media/annotations/3-add-task.png)
+ ![Screenshot that shows selecting Add Task and Application Insights Release Annotation.](./media/annotations/3-add-task.png)
> [!NOTE]
- > The Release Annotation task currently supports only Windows-based agents; it won't run on Linux, macOS, or other types of agents.
-
+ > The Release Annotation task currently supports only Windows-based agents. It won't run on Linux, macOS, or other types of agents.
+ 1. Under **Application ID**, paste the Application Insights ID you copied from the **API Access** tab.
-
- ![Paste the Application Insights ID](./media/annotations/4-paste-app-id.png)
-
-1. Back in the Application Insights **API Access** window, select **Create API Key**.
-
- ![In the API Access tab, select Create API Key.](./media/annotations/5-create-api-key.png)
-
-1. In the **Create API key** window, type a description, select **Write annotations**, and then select **Generate key**. Copy the new key.
-
- ![In the Create API key window, type a description, select Write annotations, and then select Generate key.](./media/annotations/6-create-api-key.png)
-
+
+ ![Screenshot that shows pasting the Application Insights ID.](./media/annotations/4-paste-app-id.png)
+
+1. Back in the Application Insights **API Access** window, select **Create API Key**.
+
+ ![Screenshot that shows selecting the Create API Key on the API Access tab.](./media/annotations/5-create-api-key.png)
+
+1. In the **Create API key** window, enter a description, select **Write annotations**, and then select **Generate key**. Copy the new key.
+
+ ![Screenshot that shows in the Create API key window, entering a description, selecting Write annotations, and then selecting the Generate key.](./media/annotations/6-create-api-key.png)
+ 1. In the release template window, on the **Variables** tab, select **Add** to create a variable definition for the new API key.
-1. Under **Name**, enter `ApiKey`, and under **Value**, paste the API key you copied from the **API Access** tab.
-
- ![In the Azure DevOps Variables tab, select Add, name the variable ApiKey, and paste the API key under Value.](./media/annotations/7-paste-api-key.png)
-
-1. Select **Save** in the main release template window to save the template.
+1. Under **Name**, enter **ApiKey**. Under **Value**, paste the API key you copied from the **API Access** tab.
+ ![Screenshot that shows in the Azure DevOps Variables tab, selecting Add, naming the variable ApiKey, and pasting the API key under Value.](./media/annotations/7-paste-api-key.png)
+
+1. Select **Save** in the main release template window to save the template.
> [!NOTE] > Limits for API keys are described in the [REST API rate limits documentation](/rest/api/yammer/rest-api-rate-limits). ### Transition to the new release annotation
-To use the new release annotations:
+To use the new release annotations:
1. [Remove the Release Annotations extension](/azure/devops/marketplace/uninstall-disable-extensions).
-1. Remove the Application Insights Release Annotation task in your Azure Pipelines deployment.
-1. Create new release annotations with [Azure Pipelines](#release-annotations-with-azure-pipelines-build) or [Azure CLI](#create-release-annotations-with-azure-cli).
+1. Remove the Application Insights Release Annotation task in your Azure Pipelines deployment.
+1. Create new release annotations with [Azure Pipelines](#release-annotations-with-azure-pipelines-build) or the [Azure CLI](#create-release-annotations-with-the-azure-cli).
## Next steps
azure-monitor Azure Web Apps Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md
To manually update, follow these steps:
2. Disable Application Insights via the Application Insights tab in the Azure portal.
-3. Once the agent jar file is uploaded, go to App Service configurations and add a new environment variable, `JAVA_OPTS`, with the value `-javaagent:{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar`.
+3. Once the agent jar file is uploaded, go to App Service configurations. If you
+   need to use **Startup Command** for Linux, include the JVM arguments:
-4. Restart the app, leaving the **Startup Command** field blank, to apply the changes.
+   :::image type="content" source="./media/azure-web-apps/startup-command.png" alt-text="Screenshot of startup command.":::
+
+ **Startup Command** won't honor `JAVA_OPTS`.
+
+ If you don't use **Startup Command**, create a new environment variable, `JAVA_OPTS`, with the value
+   `-javaagent:{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar` (see the CLI sketch after these steps).
+
+4. Restart the app to apply the changes.
> [!NOTE] > If you set the JAVA_OPTS environment variable, you will have to disable Application Insights in the portal. Alternatively, if you prefer to enable Application Insights from the portal, make sure that you don't set the `JAVA_OPTS` variable in App Service configurations settings.
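If you'd rather set the `JAVA_OPTS` app setting from a script than through the portal, the following Azure CLI sketch (run from PowerShell) is one way to do it. The resource group and app names are placeholders, and the jar path reuses the placeholder from step 3; the change takes effect after the restart.

```powershell
# A hedged sketch: set JAVA_OPTS as an App Service app setting, then restart the app.
# <resourceGroup>, <appName>, and the agent jar path are placeholders for your own values.
az webapp config appsettings set `
    --resource-group "<resourceGroup>" `
    --name "<appName>" `
    --settings "JAVA_OPTS=-javaagent:{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar"

az webapp restart --resource-group "<resourceGroup>" --name "<appName>"
```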
azure-monitor Data Model Event Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-event-telemetry.md
Title: Azure Application Insights Telemetry Data Model - Event Telemetry | Microsoft Docs
-description: Application Insights data model for event telemetry
+ Title: Application Insights telemetry data model - Event telemetry | Microsoft Docs
+description: Learn about the Application Insights data model for event telemetry.
Last updated 04/25/2017
# Event telemetry: Application Insights data model
-You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically it is a user interaction such as button click or order checkout. It can also be an application life cycle event like initialization or configuration update.
+You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically, it's a user interaction such as a button click or order checkout. It can also be an application lifecycle event like initialization or a configuration update.
-Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be a subject to separate, less aggressive [sampling](./api-filtering-sampling.md).
+Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be subject to separate, less aggressive [sampling](./api-filtering-sampling.md).
## Name
-Event name. To allow proper grouping and useful metrics, restrict your application so that it generates a small number of separate event names. For example, don't use a separate name for each generated instance of an event.
+Event name: To allow proper grouping and useful metrics, restrict your application so that it generates a few separate event names. For example, don't use a separate name for each generated instance of an event.
-Max length: 512 characters
+**Maximum length:** 512 characters
## Custom properties
Max length: 512 characters
## Next steps -- See [data model](data-model.md) for Application Insights types and data model.-- [Write custom event telemetry](./api-custom-events-metrics.md#trackevent)-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.-
+- See [Data model](data-model.md) for Application Insights types and data models.
+- [Write custom event telemetry](./api-custom-events-metrics.md#trackevent).
+- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 02/14/2023 Last updated : 02/22/2023 ms.devlang: java
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.9.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.10.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.9.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.10.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.9.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.10.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.9.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.10.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.9.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.10.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.9.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.10.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.9</version>
+ <version>3.4.10</version>
</dependency> ```
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Title: Add the JVM arg - Application Insights for Java description: Learn how to add the JVM arg that enables Application Insights for Java. Previously updated : 02/14/2023 Last updated : 02/22/2023 ms.devlang: java
If you're using a third-party container image that you can't modify, mount the A
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.9.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.10.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.9.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.9.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.10.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.9.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.10.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.9.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.10.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to `CATALINA_OPTS`.
### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.9.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.10.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.9.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.10.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.9.jar
+-javaagent:path/to/applicationinsights-agent-3.4.10.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.9.jar>
+    -javaagent:path/to/applicationinsights-agent-3.4.10.jar
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jv
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.9.jar
+ -javaagent:path/to/applicationinsights-agent-3.4.10.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jv
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.9.jar
+-javaagent:path/to/applicationinsights-agent-3.4.10.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 02/14/2023 Last updated : 02/22/2023 ms.devlang: java
You'll find more information and configuration options in the following sections
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.9.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.10.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.9.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.10.jar` is located.
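For example, a minimal PowerShell sketch of both options might look like the following; the configuration file path and application jar name are placeholders, and the agent jar path reuses the placeholder used elsewhere in this article.

```powershell
# Option 1 (hedged sketch): point the agent at a configuration file through the environment variable.
$env:APPLICATIONINSIGHTS_CONFIGURATION_FILE = "C:\config\applicationinsights.json"
java -javaagent:"path/to/applicationinsights-agent-3.4.10.jar" -jar myapp.jar

# Option 2 (hedged sketch): pass the equivalent Java system property instead.
java "-Dapplicationinsights.configuration.file=C:\config\applicationinsights.json" `
    -javaagent:"path/to/applicationinsights-agent-3.4.10.jar" -jar myapp.jar
```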
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.9.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.10.jar` is located.
```json {
and add `applicationinsights-core` to your application:
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.9</version>
+ <version>3.4.10</version>
</dependency> ```
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.9.jar` is located.
+`applicationinsights-agent-3.4.10.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
azure-monitor Java Standalone Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md
The ApplicationInsights Java Agent monitors CPU and memory consumption and if it
Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button will immediately request a profile in all agents that are attached to the Application Insights instance.
+> [!WARNING]
+> Selecting **Profile now** enables the profiler feature, and Application Insights applies default CPU and memory SLA triggers. When your application breaches those SLAs, Application Insights gathers Java profiles. If you want to disable profiling later, you can do so from the trigger menu shown in [Installation](#installation).
+ #### CPU CPU threshold is a percentage of the usage of all available cores on the system.
The following steps will guide you through enabling the profiling component on t
3. Configure the required CPU and Memory thresholds and select Apply. :::image type="content" source="./media/java-standalone-profiler/cpu-memory-trigger-settings.png" alt-text="Screenshot of trigger settings pane for CPU and Memory triggers.":::
-
-1. Inside the `applicationinsights.json` configuration of your process, enable profiler with the `preview.profiler.enabled` setting:
- ```json
- {
- "connectionString" : "...",
- "preview" : {
- "profiler" : {
- "enabled" : true
- }
- }
- }
- ```
- Alternatively, set the `APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED` environment variable to true.
-
-1. Restart your process with the updated configuration.
> [!WARNING] > The Java profiler does not support the "Sampling" trigger. Configuring this will have no effect.
Profiles can be generated/edited in the JDK Mission Control (JMC) user interface
### Environment variables -- `APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED`: boolean (default: `false`)
- Enables/disables the profiling feature.
+- `APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED`: boolean (default: `true`)
+  Enables/disables the profiling feature. By default, the feature is enabled within the agent (since agent 3.4.9). However, even though the feature is enabled within the agent, profiles aren't gathered unless profiling is also enabled in the Azure portal, as described in [Installation](#installation).
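For example, one way to turn the feature off at the agent level is to set the environment variable before the JVM starts; a minimal PowerShell sketch:

```powershell
# A minimal sketch: disable the agent-side profiler feature for processes started from this session.
$env:APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED = "false"
```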
### Configuration file
Azure Monitor Application Insights Java profiler uses Java Flight Recorder (JFR)
Java Flight Recorder is a tool for collecting profiling data of a running Java application. It's integrated into the Java Virtual Machine (JVM) and is used for troubleshooting performance issues. Learn more about [Java SE JFR Runtime](https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-guide/about.htm#JFRUH170). ### What is the price and/or licensing fee implications for enabling App Insights Java Profiling?
-Java Profiling enablement is a free feature with Application Insights. [Azure Monitor Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/) is based on ingestion cost.
+Java Profiling is a free feature with Application Insights. [Azure Monitor Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/) is based on ingestion cost.
### Which Java profiling information is collected? Profiling data collected by the JFR includes: method and execution profiling data, garbage collection data, and lock profiles.
Review the [Pre-requisites](#prerequisites) at the top of this article.
### Can I use Java Profiling for microservices application?
-Yes, you can profile a JVM running microservices using the JFR.
+Yes, you can profile a JVM running microservices using the JFR.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 01/18/2023 Last updated : 02/22/2023 ms.devlang: java
auto-instrumentation which is provided by the 3.x Java agent.
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.9.jar
+-javaagent:path/to/applicationinsights-agent-3.4.10.jar
``` If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 01/10/2023 Last updated : 02/22/2023 ms.devlang: csharp, javascript, typescript, python
dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter -s https://
#### [Java](#tab/java)
-Download the [applicationinsights-agent-3.4.8.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.8/applicationinsights-agent-3.4.8.jar) file.
+Download the [applicationinsights-agent-3.4.10.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.10/applicationinsights-agent-3.4.10.jar) file.
> [!WARNING] >
public class Program
Java auto-instrumentation is enabled through configuration changes; no code changes are required.
-Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.8.jar"` to your application's JVM args.
+Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.10.jar"` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
Use one of the following two ways to point the jar file to your Application Insi
APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> ``` -- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.8.jar` with the following content:
+- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.10.jar` with the following content:
```json {
This is not available in .NET.
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.8</version>
+ <version>3.4.10</version>
</dependency> ```
azure-monitor Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sharepoint.md
Title: Monitor a SharePoint site with Application Insights
-description: Start monitoring a new application with a new instrumentation key
+description: Start monitoring a new application with a new instrumentation key.
Last updated 09/08/2020
# Monitor a SharePoint site with Application Insights
-Azure Application Insights monitors the availability, performance and usage of your apps. Here you'll learn how to set it up for a SharePoint site.
+Application Insights monitors the availability, performance, and usage of your apps. This article shows you how to set it up for a SharePoint site.
> [!NOTE]
-> Due to security concerns, you can't directly add the script that's described in this article to your webpages in the SharePoint modern UX. As an alternative, you can use [SharePoint Framework (SPFx)](/sharepoint/dev/spfx/extensions/overview-extensions) to build a custom extension that you can use to install Application Insights on your SharePoint sites.
+> Because of security concerns, you can't directly add the script that's described in this article to your webpages in the SharePoint modern UX. As an alternative, you can use [SharePoint Framework (SPFx)](/sharepoint/dev/spfx/extensions/overview-extensions) to build a custom extension that you can use to install Application Insights on your SharePoint sites.
## Create an Application Insights resource
-In the [Azure portal](https://portal.azure.com), create a new Application Insights resource. Choose ASP.NET as the application type.
+In the [Azure portal](https://portal.azure.com), create a new Application Insights resource. For **Application Type**, select **ASP.NET**.
-![Click Properties, select the key, and press ctrl+C](./media/sharepoint/001.png)
+![Screenshot that shows selecting Properties, selecting the key, and selecting Ctrl+C.](./media/sharepoint/001.png)
-The window that opens is the place where you'll see performance and usage data about your app. To get back to it next time you sign in to Azure, you should find a tile for it on the start screen. Alternatively select Browse to find it.
+The window that opens is the place where you see performance and usage data about your app. The next time you sign in to Azure, a tile for it appears on the **Start** screen. Alternatively, select **Browse** to find it.
-## Add the script to your web pages
+## Add the script to your webpages
-The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
+The following current snippet is version `"5"`. The version is encoded in the snippet as `sv:"#"`. The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
```HTML <!--
cfg: { // Application Insights Configuration
``` > [!NOTE]
-> The Url for SharePoint uses a different module format "...\ai.2.gbl.min.js" (note the additional **.gbl.**) this alternate module format is required to avoid an issue caused by the order that scripts are loaded, which will cause the SDK to fail to initialize and will result in the loss of telemetry events.
+> The URL for SharePoint uses a different module format `"...\ai.2.gbl.min.js"` (note the extra `.gbl`). This alternate module format is required to avoid an issue caused by the order in which scripts are loaded. The issue causes the SDK to fail to initialize and results in the loss of telemetry events.
>
-> The issue is caused by requireJS being loaded and initialized before the SDK.
+> The issue is caused by `requireJS` being loaded and initialized before the SDK.
-Insert the script just before the &lt;/head&gt; tag of every page you want to track. If your website has a master page, you can put the script there. For example, in an ASP.NET MVC project, you'd put it in View\Shared\_Layout.cshtml
+Insert the script before the &lt;/head&gt; tag of every page you want to track. If your website has a main page, you can put the script there. For example, in an ASP.NET MVC project, you'd put it in `View\Shared\_Layout.cshtml`.
The script contains the instrumentation key that directs the telemetry to your Application Insights resource. ### Add the code to your site pages
-#### On the master page
-If you can edit the site's master page, that will provide monitoring for every page in the site.
-Check out the master page and edit it using SharePoint Designer or any other editor.
+You can add the code to your main page or individual pages.
-![Screenshot that shows how to edit the master page using Sharepoing Designer or another editor.](./media/sharepoint/03-master.png)
+#### Main page
+If you can edit the site's main page, you can provide monitoring for every page in the site.
-Add the code just before the </head> tag.
+Check out the main page and edit it by using SharePoint Designer or any other editor.
+
+![Screenshot that shows how to edit the main page by using SharePoint Designer or another editor.](./media/sharepoint/03-master.png)
+
+Add the code before the </head> tag.
![Screenshot that shows where to add the code to your site page.](./media/sharepoint/04-code.png) [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-#### Or on individual pages
-To monitor a limited set of pages, add the script separately to each page.
+#### Individual pages
+To monitor a limited set of pages, add the script separately to each page.
Insert a web part and embed the code snippet in it.
Redeploy your app.
Return to your application pane in the [Azure portal](https://portal.azure.com).
-The first events appear in Search.
+The first events appear in **Search**.
![Screenshot that shows the new data that you can view in the app.](./media/sharepoint/09-search.png)
-Select Refresh after a few seconds if you're expecting more data.
+Select **Refresh** after a few seconds if you're expecting more data.
-## Capturing User Id
-The standard web page code snippet doesn't capture the user ID from SharePoint, but you can do that with a small modification.
+## Capture the user ID
+The standard webpage code snippet doesn't capture the user ID from SharePoint, but you can do that with a small modification.
-1. Copy your app's instrumentation key from the Essentials drop-down in Application Insights.
+1. Copy your app's instrumentation key from the **Essentials** dropdown in Application Insights.
![Screenshot that shows copying the app's instrumentation from the Essentials dropdown in Application Insights.](./media/sharepoint/02-props.png)
-1. Substitute the instrumentation key for 'XXXX' in the snippet below.
-2. Embed the script in your SharePoint app instead of the snippet you get from the portal.
-
-```
--
-<SharePoint:ScriptLink ID="ScriptLink1" name="SP.js" runat="server" localizable="false" loadafterui="true" />
-<SharePoint:ScriptLink ID="ScriptLink2" name="SP.UserProfiles.js" runat="server" localizable="false" loadafterui="true" />
-
-<script type="text/javascript">
-var personProperties;
-
-// Ensure that the SP.UserProfiles.js file is loaded before the custom code runs.
-SP.SOD.executeOrDelayUntilScriptLoaded(getUserProperties, 'SP.UserProfiles.js');
-
-function getUserProperties() {
- // Get the current client context and PeopleManager instance.
- var clientContext = new SP.ClientContext.get_current();
- var peopleManager = new SP.UserProfiles.PeopleManager(clientContext);
-
- // Get user properties for the target user.
- // To get the PersonProperties object for the current user, use the
- // getMyProperties method.
-
- personProperties = peopleManager.getMyProperties();
-
- // Load the PersonProperties object and send the request.
- clientContext.load(personProperties);
- clientContext.executeQueryAsync(onRequestSuccess, onRequestFail);
-}
-
-// This function runs if the executeQueryAsync call succeeds.
-function onRequestSuccess() {
-var appInsights=window.appInsights||function(config){
-function s(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},r=document,f=window,e="script",o=r.createElement(e),i,u;for(o.src=config.url||"//az416426.vo.msecnd.net/scripts/a/ai.0.js",r.getElementsByTagName(e)[0].parentNode.appendChild(o),t.cookie=r.cookie,t.queue=[],i=["Event","Exception","Metric","PageView","Trace"];i.length;)s("track"+i.pop());return config.disableExceptionTracking||(i="onerror",s("_"+i),u=f[i],f[i]=function(config,r,f,e,o){var s=u&&u(config,r,f,e,o);return s!==!0&&t["_"+i](config,r,f,e,o),s}),t
- }({
- instrumentationKey:"XXXX"
- });
- window.appInsights=appInsights;
- appInsights.trackPageView(document.title,window.location.href, {User: personProperties.get_displayName()});
-}
-
-// This function runs if the executeQueryAsync call fails.
-function onRequestFail(sender, args) {
-}
-</script>
--
-```
---
-## Next Steps
-* [Availability overview](./availability-overview.md) to monitor the availability of your site.
-* [Application Insights](./app-insights-overview.md) for other types of app.
+1. Substitute the instrumentation key for `XXXX` in the following snippet.
+1. Embed the script in your SharePoint app instead of the snippet you get from the portal.
+
+ ```
+
+
+ <SharePoint:ScriptLink ID="ScriptLink1" name="SP.js" runat="server" localizable="false" loadafterui="true" />
+ <SharePoint:ScriptLink ID="ScriptLink2" name="SP.UserProfiles.js" runat="server" localizable="false" loadafterui="true" />
+
+ <script type="text/javascript">
+ var personProperties;
+
+ // Ensure that the SP.UserProfiles.js file is loaded before the custom code runs.
+ SP.SOD.executeOrDelayUntilScriptLoaded(getUserProperties, 'SP.UserProfiles.js');
+
+ function getUserProperties() {
+ // Get the current client context and PeopleManager instance.
+ var clientContext = new SP.ClientContext.get_current();
+ var peopleManager = new SP.UserProfiles.PeopleManager(clientContext);
+
+ // Get user properties for the target user.
+ // To get the PersonProperties object for the current user, use the
+ // getMyProperties method.
+
+ personProperties = peopleManager.getMyProperties();
+
+ // Load the PersonProperties object and send the request.
+ clientContext.load(personProperties);
+ clientContext.executeQueryAsync(onRequestSuccess, onRequestFail);
+ }
+
+ // This function runs if the executeQueryAsync call succeeds.
+ function onRequestSuccess() {
+ var appInsights=window.appInsights||function(config){
+ function s(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},r=document,f=window,e="script",o=r.createElement(e),i,u;for(o.src=config.url||"//az416426.vo.msecnd.net/scripts/a/ai.0.js",r.getElementsByTagName(e)[0].parentNode.appendChild(o),t.cookie=r.cookie,t.queue=[],i=["Event","Exception","Metric","PageView","Trace"];i.length;)s("track"+i.pop());return config.disableExceptionTracking||(i="onerror",s("_"+i),u=f[i],f[i]=function(config,r,f,e,o){var s=u&&u(config,r,f,e,o);return s!==!0&&t["_"+i](config,r,f,e,o),s}),t
+ }({
+ instrumentationKey:"XXXX"
+ });
+ window.appInsights=appInsights;
+ appInsights.trackPageView(document.title,window.location.href, {User: personProperties.get_displayName()});
+ }
+
+ // This function runs if the executeQueryAsync call fails.
+ function onRequestFail(sender, args) {
+ }
+ </script>
+
+
+ ```
+
+## Next steps
+* See the [Availability overview](./availability-overview.md) to monitor the availability of your site.
+* See [Application Insights](./app-insights-overview.md) for other types of apps.
<!--Link references-->
azure-monitor Source Map Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/source-map-support.md
Title: Source map support for JavaScript applications - Azure Monitor Application Insights
-description: Learn how to upload source maps to your own storage account Blob container using Application Insights.
+description: Learn how to upload source maps to your Azure Storage account blob container by using Application Insights.
Last updated 06/23/2020
# Source map support for JavaScript applications
-Application Insights supports the uploading of source maps to your own Storage Account Blob Container.
-Source maps can be used to unminify call stacks found on the end to end transaction details page. Any exception sent by the [JavaScript SDK][ApplicationInsights-JS] or the [Node.js SDK][ApplicationInsights-Node.js] can be unminified with source maps.
+Application Insights supports the uploading of source maps to your Azure Storage account blob container. You can use source maps to unminify call stacks found on the **End-to-end transaction details** page. You can also use source maps to unminify any exception sent by the [JavaScript SDK][ApplicationInsights-JS] or the [Node.js SDK][ApplicationInsights-Node.js].
-![Unminify a Call Stack by linking with a Storage Account](./media/source-map-support/details-unminify.gif)
+![Screenshot that shows selecting the option to unminify a call stack by linking with a storage account.](./media/source-map-support/details-unminify.gif)
-## Create a new storage account and Blob container
+## Create a new storage account and blob container
If you already have an existing storage account or blob container, you can skip this step.
-1. [Create a new storage account][create storage account]
-2. [Create a blob container][create blob container] inside your storage account. Be sure to set the "Public access level" to `Private`, to ensure that your source maps are not publicly accessible.
+1. [Create a new storage account][create storage account].
+1. [Create a blob container][create blob container] inside your storage account. Set **Public access level** to **Private** to ensure that your source maps aren't publicly accessible.
-> [!div class="mx-imgBorder"]
->![Your container access level must be set to Private](./media/source-map-support/container-access-level.png)
+ > [!div class="mx-imgBorder"]
+ >![Screenshot that shows setting the container access level to Private.](./media/source-map-support/container-access-level.png)
-## Push your source maps to your Blob container
+## Push your source maps to your blob container
-You should integrate your continuous deployment pipeline with your storage account by configuring it to automatically upload your source maps to the configured Blob container.
+Integrate your continuous deployment pipeline with your storage account by configuring it to automatically upload your source maps to the configured blob container.
-Source maps can be uploaded to your Blob Storage Container with the same folder structure they were compiled & deployed with. A common use case is to prefix a deployment folder with its version, e.g. `1.2.3/static/js/main.js`. When unminifying via an Azure Blob container called `sourcemaps`, it will try to fetch a source map located at `sourcemaps/1.2.3/static/js/main.js.map`.
+You can upload source maps to your Azure Blob Storage container with the same folder structure they were compiled and deployed with. A common use case is to prefix a deployment folder with its version, for example, `1.2.3/static/js/main.js`. When you unminify via an Azure blob container called `sourcemaps`, the pipeline tries to fetch a source map located at `sourcemaps/1.2.3/static/js/main.js.map`.
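For example, the following Azure CLI sketch copies the source maps for a hypothetical build `1.2.3` into a container named `sourcemaps`, preserving that folder structure; the storage account name and local build path are placeholders.

```powershell
# A hedged sketch: upload the source maps for build 1.2.3 to the "sourcemaps" container.
# The account name and local path are placeholders for your own values.
az storage blob upload-batch `
    --account-name "<storageAccountName>" `
    --destination "sourcemaps" `
    --destination-path "1.2.3" `
    --source "./build" `
    --pattern "*.js.map"
```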
### Upload source maps via Azure Pipelines (recommended)
-If you are using Azure Pipelines to continuously build and deploy your application, add an [Azure File Copy][azure file copy] task to your pipeline to automatically upload your source maps.
+If you're using Azure Pipelines to continuously build and deploy your application, add an [Azure file copy][azure file copy] task to your pipeline to automatically upload your source maps.
> [!div class="mx-imgBorder"]
-> ![Add an Azure File Copy task to your Pipeline to upload your source maps to Azure Blob Storage](./media/source-map-support/azure-file-copy.png)
+> ![Screenshot that shows adding an Azure file copy task to your pipeline to upload your source maps to Azure Blob Storage.](./media/source-map-support/azure-file-copy.png)
+
+## Configure your Application Insights resource with a source map storage account
-## Configure your Application Insights resource with a Source Map storage account
+You have two options for configuring your Application Insights resource with a source map storage account.
-### From the end-to-end transaction details page
+### End-to-end transaction details tab
-From the end-to-end transaction details tab, you can click on *Unminify* and it will display a prompt to configure if your resource is unconfigured.
+From the **End-to-end transaction details** tab, select **Unminify**. If your resource isn't configured yet, a prompt appears so that you can configure it.
-1. In the Portal, view the details of an exception that is minified.
-2. Select *Unminify*.
-3. If your resource has not been configured, a message will appear, prompting you to configure.
+1. In the Azure portal, view the details of an exception that's minified.
+1. Select **Unminify**.
+1. If your resource isn't configured, a message appears and prompts you to configure it.
-### From the properties page
+### Properties tab
-If you would like to configure or change the storage account or Blob container that is linked to your Application Insights Resource, you can do it by viewing the Application Insights resource's *Properties* tab.
+To configure or change the storage account or blob container that's linked to your Application Insights resource:
-1. Navigate to the *Properties* tab of your Application Insights resource.
-2. Select *Change source map blob container*.
-3. Select a different Blob container as your source maps container.
-4. Select `Apply`.
+1. Go to the **Properties** tab of your Application Insights resource.
+1. Select **Change source map Blob Container**.
+1. Select a different blob container as your source map container.
+1. Select **Apply**.
> [!div class="mx-imgBorder"]
-> ![Reconfigure your selected Azure Blob Container by navigating to the Properties pane](./media/source-map-support/reconfigure.png)
+> ![Screenshot that shows reconfiguring your selected Azure blob container on the Properties pane.](./media/source-map-support/reconfigure.png)
## Troubleshooting
-### Required Azure role-based access control (Azure RBAC) settings on your Blob container
+This section offers troubleshooting tips for common issues.
-Any user on the Portal using this feature must be at least assigned as a [Storage Blob Data Reader][storage blob data reader] to your Blob container. You must assign this role to anyone else that will be using the source maps through this feature.
+### Required Azure role-based access control settings on your blob container
+
+Any user on the portal who uses this feature must be assigned at least the [Storage Blob Data Reader][storage blob data reader] role on your blob container. Assign this role to anyone who might use the source maps through this feature.
> [!NOTE]
-> Depending on how the container was created, this may not have been automatically assigned to you or your team.
+> Depending on how the container was created, this role might not have been automatically assigned to you or your team.
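If you manage access from the command line, the following Azure CLI sketch shows one way to grant the role at container scope; the assignee and every placeholder segment in the scope are values you supply.

```powershell
# A hedged sketch: assign Storage Blob Data Reader on a specific blob container.
# The assignee and each <placeholder> in the scope are your own values.
az role assignment create `
    --assignee "<user@example.com>" `
    --role "Storage Blob Data Reader" `
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Storage/storageAccounts/<storageAccountName>/blobServices/default/containers/<containerName>"
```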
### Source map not found
-1. Verify that the corresponding source map is uploaded to the correct blob container
-2. Verify that the source map file is named after the JavaScript file it maps to, suffixed with `.map`.
- - For example, `/static/js/main.4e2ca5fa.chunk.js` will search for the blob named `main.4e2ca5fa.chunk.js.map`
-3. Check your browser's console to see if any errors are being logged. Include this in any support ticket.
-
-## Next Steps
+1. Verify that the corresponding source map is uploaded to the correct blob container.
+1. Verify that the source map file is named after the JavaScript file it maps to and uses the suffix `.map`.
+
+ For example, `/static/js/main.4e2ca5fa.chunk.js` searches for the blob named `main.4e2ca5fa.chunk.js.map`.
+1. Check your browser's console to see if any errors were logged. Include this information in any support ticket.
-* [Azure File Copy task](/azure/devops/pipelines/tasks/deploy/azure-file-copy)
+## Next steps
+[Azure file copy task](/azure/devops/pipelines/tasks/deploy/azure-file-copy)
<!-- Remote URLs --> [create storage account]: ../../storage/common/storage-account-create.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal
azure-monitor Usage Cohorts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-cohorts.md
Title: Application Insights usage cohorts | Microsoft Docs
-description: Analyze different sets or users, sessions, events, or operations that have something in common
+description: Analyze different sets of users, sessions, events, or operations that have something in common.
Last updated 07/30/2021 # Application Insights cohorts
-A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set youΓÇÖre interested in.
+A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set you're interested in.
-## Cohorts versus basic filters
+## Cohorts vs. basic filters
-Cohorts are used in ways similar to filters. But cohorts' definitions are built from custom analytics queries, so they're much more adaptable and complex. Unlike filters, you can save cohorts so other members of your team can reuse them.
+You can use cohorts in ways similar to filters. But cohorts' definitions are built from custom analytics queries, so they're much more adaptable and complex. Unlike filters, you can save cohorts so that other members of your team can reuse them.
You might define a cohort of users who have all tried a new feature in your app. You can save this cohort in your Application Insights resource. It's easy to analyze this saved group of specific users in the future.- > [!NOTE]
-> After they're created, cohorts are available from the Users, Sessions, Events, and User Flows tools.
+> After cohorts are created, they're available from the Users, Sessions, Events, and User Flows tools.
## Example: Engaged users Your team defines an engaged user as anyone who uses your app five or more times in a given month. In this section, you define a cohort of these engaged users.
-1. Select **Create a Cohort**
-
-2. Select the **Template Gallery** tab. You see a collection of templates for various cohorts.
-
-3. Select **Engaged Users -- by Days Used**.
+1. Select **Create a Cohort**.
+1. Select the **Template Gallery** tab to see a collection of templates for various cohorts.
+1. Select **Engaged Users -- by Days Used**.
There are three parameters for this cohort:
- * **Activities**, where you choose which events and page views count as ΓÇ£usage.ΓÇ¥
- * **Period**, the definition of a month.
- * **UsedAtLeastCustom**, the number of times users need to use something within a period to count as engaged.
+ * **Activities**: Where you choose which events and page views count as usage.
+ * **Period**: The definition of a month.
+ * **UsedAtLeastCustom**: The number of times users need to use something within a period to count as engaged.
-4. Change **UsedAtLeastCustom** to **5+ days**, and leave **Period** on the default of 28 days.
+1. Change **UsedAtLeastCustom** to **5+ days**. Leave **Period** set as the default of 28 days.
-
- Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28.
+ Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28 days.
-5. Select **Save**.
+1. Select **Save**.
> [!TIP]
- > Give your cohort a name, like ΓÇ£Engaged Users (5+ Days).ΓÇ¥ Save it to ΓÇ£My reportsΓÇ¥ or ΓÇ£Shared reports,ΓÇ¥ depending on whether you want other people who have access to this Application Insights resource to see this cohort.
+ > Give your cohort a name, like *Engaged Users (5+ Days)*. Save it to *My reports* or *Shared reports*, depending on whether you want other people who have access to this Application Insights resource to see this cohort.
-6. Select **Back to Gallery**.
+1. Select **Back to Gallery**.
### What can you do by using this cohort?
-Open the Users tool. In the **Show** drop-down box, choose the cohort you created under **Users who belong to**.
+Open the Users tool. In the **Show** dropdown box, choose the cohort you created under **Users who belong to**.
-
-A few important things to notice:
+Important points to notice:
* You can't create this set through normal filters. The date logic is more advanced.
-* You can further filter this cohort by using the normal filters in the Users tool. So although the cohort is defined on 28-day windows, you can still adjust the time range in the Users tool to be 30, 60, or 90 days.
+* You can further filter this cohort by using the normal filters in the Users tool. Although the cohort is defined on 28-day windows, you can still adjust the time range in the Users tool to be 30, 60, or 90 days.
These filters support more sophisticated questions that are impossible to express through the query builder. An example is _people who were engaged in the past 28 days. How did those same people behave over the past 60 days?_ ## Example: Events cohort
-You can also make cohorts of events. In this section, you define a cohort of the events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers _active usage_ or a set related to a certain new feature.
-
-1. Select **Create a Cohort**
+You can also make cohorts of events. In this section, you define a cohort of events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers _active usage_ or a set related to a certain new feature.
-2. Select the **Template Gallery** tab. You'll see a collection of templates for various cohorts.
-
-3. Select **Events Picker**.
-
-4. In the **Activities** drop-down box, select the events you want to be in the cohort.
-
-5. Save the cohort and give it a name.
+1. Select **Create a Cohort**.
+1. Select the **Template Gallery** tab to see a collection of templates for various cohorts.
+1. Select **Events Picker**.
+1. In the **Activities** dropdown box, select the events you want to be in the cohort.
+1. Save the cohort and give it a name.
## Example: Active users where you modify a query
-The previous two cohorts were defined by using drop-down boxes. But you can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom.
-
+The previous two cohorts were defined by using dropdown boxes. You can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom.
1. Open the Cohorts tool, select the **Template Gallery** tab, and select **Blank Users cohort**.
- :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot of the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png":::
+ :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot that shows the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png":::
There are three sections:
- * A Markdown text section, where you describe the cohort in more detail for others on your team.
-
- * A parameters section, where you make your own parameters, like **Activities** and other drop-down boxes from the previous two examples.
- * A query section, where you define the cohort by using an analytics query.
+ * **Markdown text**: Where you describe the cohort in more detail for other members of your team.
+ * **Parameters**: Where you make your own parameters, like **Activities**, and other dropdown boxes from the previous two examples.
+ * **Query**: Where you define the cohort by using an analytics query.
- In the query section, you [write an analytics query](/azure/kusto/query). The query selects the certain set of rows that describe the cohort you want to define. The Cohorts tool then implicitly adds a "| summarize by user_Id" clause to the query. This data is previewed below the query in a table, so you can make sure your query is returning results.
+ In the query section, you [write an analytics query](/azure/kusto/query). The query selects the set of rows that describes the cohort you want to define. The Cohorts tool then implicitly adds a `| summarize by user_Id` clause to the query. This data appears as a preview underneath the query in a table, so you can make sure your query is returning results. (A sketch of the effective query appears after these steps.)
> [!NOTE]
 - > If you don't see the query, try resizing the section to make it taller and reveal the query.
+ > If you don't see the query, resize the section to make it taller and reveal the query.
-2. Copy and paste the following text into the query editor:
+1. Copy and paste the following text into the query editor:
```KQL
union customEvents, pageViews
| where client_CountryOrRegion == "United Kingdom"
```
-3. Select **Run Query**. If you don't see user IDs appear in the table, change to a country/region where your application has users.
+1. Select **Run Query**. If you don't see user IDs appear in the table, change to a country/region where your application has users.
-4. Save and name the cohort.
+1. Save and name the cohort.
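With the implicit `| summarize by user_Id` clause that the Cohorts tool appends, the effective query for this cohort is roughly the following sketch (shown for illustration only):

```KQL
union customEvents, pageViews
| where client_CountryOrRegion == "United Kingdom"
| summarize by user_Id
```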
-## Frequently asked questions
+## Frequently asked question
-_I've defined a cohort of users from a certain country/region. When I compare this cohort in the Users tool to just setting a filter on that country/region, I see different results. Why?_
+### I defined a cohort of users from a certain country/region. When I compare this cohort in the Users tool to setting a filter on that country/region, why do I see different results?
-Cohorts and filters are different. Suppose you have a cohort of users from the United Kingdom (defined like the previous example), and you compare its results to setting the filter "Country or region = United Kingdom."
+Cohorts and filters are different. Suppose you have a cohort of users from the United Kingdom (defined like the previous example), and you compare its results to setting the filter `Country or region = United Kingdom`:
* The cohort version shows all events from users who sent one or more events from the United Kingdom in the current time range. If you split by country or region, you likely see many countries and regions.
-* The filters version only shows events from the United Kingdom. But if you split by country or region, you see only the United Kingdom.
+* The filters version only shows events from the United Kingdom. If you split by country or region, you see only the United Kingdom.
## Learn more
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Host OS metrics *are* available and listed in the tables. Host OS metrics relate
> [!TIP] > A best practice is to use and configure the Azure Monitor agent to send guest OS performance metrics into the same Azure Monitor metric database where platform metrics are stored. The agent routes guest OS metrics through the [custom metrics](../essentials/metrics-custom-overview.md) API. You can then chart, alert, and otherwise use guest OS metrics like platform metrics. >
-> Alternatively or in addition, you can send the guest OS metrics to Azure Monitor Logs by using the same agent. There you can query on those metrics in combination with non-metric data by using Log Analytics.
+> Alternatively or in addition, you can send the guest OS metrics to Azure Monitor Logs by using the same agent. There you can query on those metrics in combination with non-metric data by using Log Analytics. Standard [Log Analytics workspace costs](https://azure.microsoft.com/pricing/details/monitor/) would then apply.
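For example, if the agent's data collection rule routes guest OS performance counters to the `Perf` table (an assumption; your workspace might use a different table such as `InsightsMetrics`), a Log Analytics query like this sketch summarizes guest CPU alongside other log data:

```KQL
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCpu = avg(CounterValue) by bin(TimeGenerated, 5m), Computer
```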
The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analytics agent, which were previously used for guest OS routing. For important additional information, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
This latest update adds a new column and reorders the metrics to be alphabetical
- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
-<!--Gen Date: Wed Feb 01 2023 09:43:49 GMT+0200 (Israel Standard Time)-->
+<!--Gen Date: Wed Feb 01 2023 09:43:49 GMT+0200 (Israel Standard Time)-->
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md
Use `az aks update` with the `-enable-azuremonitormetrics` option to install the
**Create a new default Azure Monitor workspace.**<br> If no Azure Monitor Workspace is specified, then a default Azure Monitor Workspace will be created in the `DefaultRG-<cluster_region>` resource group, with a name following the format `DefaultAzureMonitorWorkspace-<mapped_region>`.
-This Azure Monitor Workspace will be in the region specific in [Region mappings](#region-mappings).
+This Azure Monitor Workspace is in the region specified in [Region mappings](#region-mappings).
```azurecli
az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group>
```
**Use an existing Azure Monitor workspace.**<br>
-If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data will be available in Grafana.
+If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data is available in Grafana.
```azurecli az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id>
This creates a link between the Azure Monitor workspace and the Grafana workspac
az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id> ```
-The output for each command will look similar to the following:
+The output for each command looks similar to the following:
```json "azureMonitorProfile": {
The output for each command will look similar to the following:
#### Optional parameters Following are optional parameters that you can use with the previous commands. -- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotations keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include additional annotations provide a list of resource names in their plural form and Kubernetes annotation keys, you would like to allow for them. A single `*` can be provided per resource instead to allow any annotations, but that has severe performance implications.-- `--ksm-metric-labels-allow-list` is a comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include additional labels provide a list of resource names in their plural form and Kubernetes label keys you would like to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications.
+- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include more annotations, provide a list of resource names in their plural form and the Kubernetes annotation keys that you want to allow for them. A single `*` can be provided per resource instead to allow any annotations, but that has severe performance implications.
+- `--ksm-metric-labels-allow-list` is a comma-separated list of more Kubernetes label keys that will be used in the resource's labels metric. By default, the metric contains only name and namespace labels. To include more labels, provide a list of resource names in their plural form and the Kubernetes label keys that you want to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications.
**Use annotations and labels.**
Following are optional parameters that you can use with the previous commands.
az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]" ```
-The output will be similar to the following:
+The output is similar to the following:
```json "azureMonitorProfile": {
The output will be similar to the following:
### Retrieve required values for Grafana resource From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
+If you're using an existing Azure Managed Grafana instance that has already been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
```json "properties": {
If you're using an existing Azure Managed Grafana instance that already has been
| `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys that will be used in the resource's labels metric. |
- | `metricAnnotationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric. |
| `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
-4. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will be similar to the following:
+4. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This is similar to the following:
```json {
Currently in bicep, there is no way to explicitly "scope" the Monitoring Data Re
From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
-If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
+If you're using an existing Azure Managed Grafana instance that has already been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace.
```json "properties": {
If you're using an existing Azure Managed Grafana instance that already has been
2. Download the parameter file from [here](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main bicep template. 3. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files in the same directory as the main bicep template. 4. Edit the values in the parameter file.
-5. The main bicep template creates all the required resources and uses 2 modules for creating the dcra and monitormetrics profile resources from the other two bicep files.
+5. The main bicep template creates all the required resources and uses two modules for creating the dcra and monitormetrics profile resources from the other two bicep files.
| Parameter | Value | |:|:|
If you're using an existing Azure Managed Grafana instance that already has been
| `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys that will be used in the resource's labels metric. |
- | `metricAnnotationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric. |
| `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
-6. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will be similar to the following:
+6. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This is similar to the following:
```json {
In this json, `full_resource_id_1` and `full_resource_id_2` were already in the
The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor Workspace resource ID provided in the parameters file.
+## [Azure Policy](#tab/azurepolicy)
+
+### Prerequisites
+
+- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`. (A sketch for verifying the registration state follows this list.)
+- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created.
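Feature registration can take several minutes. The following sketch, which assumes the standard Azure CLI commands for preview features, checks the registration state and then refreshes the resource provider so the flag takes effect:

```azurecli
# Check whether the feature flag is in the "Registered" state
az feature show --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview --query properties.state

# After it shows "Registered", refresh the resource provider
az provider register --namespace Microsoft.ContainerService
```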
+
+### Download Azure policy rules and parameters and deploy
+
+1. Download the main Azure policy rules template from [here](https://aka.ms/AddonPolicyMetricsProfile) and save it as **AddonPolicyMetricsProfile.rules.json**.
+2. Download the parameter file from [here](https://aka.ms/AddonPolicyMetricsProfile.parameters) and save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template.
+3. Create the policy definition with a command like: `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules .\AddonPolicyMetricsProfile.rules.json --params .\AddonPolicyMetricsProfile.parameters.json`.
+4. After creating the policy definition, in the Azure portal, go to **Policy** > **Definitions** and select the policy definition you created.
+5. Select **Assign**, go to the **Parameters** tab, and fill in the details. Then select **Review + Create**.
+6. After the policy is assigned to the subscription, whenever you create a new cluster that doesn't have Prometheus enabled, the policy runs and deploys the resources. If you want to apply the policy to an existing AKS cluster, create a **Remediation task** for that AKS cluster resource from the policy assignment. (A CLI sketch for the remediation step appears after these steps.)
+7. Metrics should now flow into the existing Grafana resource that's linked to the corresponding Azure Monitor workspace.
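If you prefer the CLI over the portal for the remediation step, here's a hedged sketch (the assignment and remediation names are hypothetical; adjust the scope to where the policy was assigned):

```azurecli
# Remediate existing non-compliant AKS clusters in a resource group
az policy remediation create \
  --name prometheus-addon-remediation \
  --resource-group <cluster-resource-group> \
  --policy-assignment <assignment-name-or-id>
```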
+
+If you create a new Managed Grafana resource from the Azure portal, link it to the corresponding Azure Monitor workspace from the **Linked Grafana Workspaces** tab of the relevant Azure Monitor workspace page. Assign the **Monitoring Data Reader** role to the Grafana managed identity on the Azure Monitor workspace resource so that it can read data for displaying the charts, by using the following instructions.
+
+1. From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**.
+
+2. Copy the value of the `principalId` field for the `SystemAssigned` identity.
+
+```json
+"identity": {
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "type": "SystemAssigned"
+ },
+```
+3. From the **Access control (IAM)** page for the Azure Managed Grafana instance in the Azure portal, select **Add** and then **Add role assignment**.
+4. Select `Monitoring Data Reader`.
+5. Select **Managed identity** and then **Select members**.
+6. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource.
+7. Choose **Select**, and then select **Review + assign**. (An equivalent Azure CLI command is sketched after these steps.)
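As an alternative to the preceding portal steps, the same role assignment can be made from the CLI. This sketch assumes the `principalId` you copied earlier and the Azure Monitor workspace resource ID:

```azurecli
az role assignment create \
  --assignee <grafana-principal-id> \
  --role "Monitoring Data Reader" \
  --scope <azure-monitor-workspace-resource-id>
```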
### Deploy template
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md
Azure Monitor stores data in data stores for each of the pillars of observabilit
|Pillar of Observability/<br>Data Store|Description| |||
-|[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus based metrics](/articles/azure-monitor/essentials/prometheus-metrics-overview.md).|
+|[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus based metrics](essentials/prometheus-metrics-overview.md).|
|[Azure Monitor Logs](logs/data-platform-logs.md)|Logs are recorded system events. Logs can contain different types of data, be structured or free-form text, and they contain a timestamp. Azure Monitor stores structured and unstructured log data of all types in [Azure Monitor Logs](./logs/data-platform-logs.md). You can route data to [Log Analytics workspaces](./logs/log-analytics-overview.md) for querying and analysis.| |Traces|Distributed traces identify the series of related events that follow a user request through a distributed system. A trace measures the operation and performance of your application across the entire set of components in your system. Traces can be used to determine the behavior of application code and the performance of different transactions. Azure Monitor gets distributed trace data from the Application Insights SDK. The trace data is stored in a separate workspace in Azure Monitor Logs.| |Changes|Changes are a series of events in your application and resources. They're tracked and stored when you use the [Change Analysis](./change/change-analysis.md) service, which uses [Azure Resource Graph](../governance/resource-graph/overview.md) as its store. Change Analysis helps you understand which changes, such as deploying updated code, may have caused issues in your systems.|
azure-monitor Vminsights Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md
Last updated 06/08/2022
# Use the Map feature of VM insights to understand application components In VM insights, you can view discovered application components on Windows and Linux virtual machines (VMs) that run in Azure or your environment. You can observe the VMs in two ways. View a map directly from a VM or view a map from Azure Monitor to see the components across groups of VMs. This article will help you understand these two viewing methods and how to use the Map feature.
-For information about configuring VM insights, see [Enable VM insights](./vminsights-enable-overview.md).
+For information about configuring VM insights, see [Enable VM insights](vminsights-enable-overview.md).
## Prerequisites
-To enable the map feature in VM insights, the virtual machine requires one of the following. See [Enable VM insights on unmonitored machine](vminsights-maps.md) for details on each.
+To enable the map feature in VM insights, the virtual machine requires one of the following. See [Enable VM insights on unmonitored machine](vminsights-enable-overview.md) for details on each.
- Azure Monitor agent with **processes and dependencies** enabled. - Log Analytics agent enabled for VM insights.
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na Previously updated : 02/02/2023 Last updated : 02/21/2023 # Create an SMB volume for Azure NetApp Files
Before creating an SMB volume, you need to create an Active Directory connection
## Add an SMB volume
-1. Click the **Volumes** blade from the Capacity Pools blade.
+1. Select the **Volumes** blade from the Capacity Pools blade.
![Navigate to Volumes](../media/azure-netapp-files/azure-netapp-files-navigate-to-volumes.png)
-2. Click **+ Add volume** to create a volume.
+2. Select **+ Add volume** to create a volume.
The Create a Volume window appears.
-3. In the Create a Volume window, click **Create** and provide information for the following fields under the Basics tab:
+3. In the Create a Volume window, select **Create** and provide information for the following fields under the Basics tab:
* **Volume name** Specify the name for the volume that you are creating.
Before creating an SMB volume, you need to create an Active Directory connection
The **Available quota** field shows the amount of unused space in the chosen capacity pool that you can use towards creating a new volume. The size of the new volume must not exceed the available quota.
+ * **Large Volume**
+ If the quota of your volume is 100 TiB or less, select **No**. If the quota of your volume is greater than 100 TiB, select **Yes**.
+ [!INCLUDE [Large volumes warning](includes/large-volumes-notice.md)]
+ * **Throughput (MiB/S)** If the volume is created in a manual QoS capacity pool, specify the throughput you want for the volume.
Before creating an SMB volume, you need to create an Active Directory connection
Specify the subnet that you want to use for the volume. The subnet you specify must be delegated to Azure NetApp Files.
- If you haven't delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
+ If you haven't delegated a subnet, you can select **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.
![Create a volume](../media/azure-netapp-files/azure-netapp-files-new-volume.png)
Before creating an SMB volume, you need to create an Active Directory connection
* **Availability zone** This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md).
- * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu.
+ * If you want to apply an existing snapshot policy to the volume, select **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu.
For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md). ![Show advanced selection](../media/azure-netapp-files/volume-create-advanced-selection.png)
-4. Click **Protocol** and complete the following information:
+4. Select **Protocol** and complete the following information:
* Select **SMB** as the protocol type for the volume. * Select your **Active Directory** connection from the drop-down list.
Before creating an SMB volume, you need to create an Active Directory connection
![Screenshot that describes the Protocol tab of creating an SMB volume.](../media/azure-netapp-files/azure-netapp-files-protocol-smb.png)
-5. Click **Review + Create** to review the volume details. Then click **Create** to create the SMB volume.
+5. Select **Review + Create** to review the volume details. Then select **Create** to create the SMB volume.
The volume you created appears in the Volumes page.
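The same SMB volume can also be created from the command line. The following is a hedged Azure CLI sketch; the resource names and size are illustrative, and you should confirm the exact parameters with `az netappfiles volume create --help` for your CLI version. The NetApp account must already have an Active Directory connection, as described earlier.

```azurecli
az netappfiles volume create \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name myCapacityPool \
  --name mySmbVolume \
  --location westus2 \
  --service-level Premium \
  --usage-threshold 4096 \
  --file-path mysmbvolume \
  --vnet myVNet \
  --subnet myDelegatedSubnet \
  --protocol-types CIFS
```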
You can modify SMB share permissions using Microsoft Management Console (MMC).
## Next steps * [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)
+* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
* [Mount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
-* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
+* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
* [Enable Continuous Availability on existing SMB volumes](enable-continuous-availability-existing-SMB.md) * [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md)
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na Previously updated : 11/08/2022 Last updated : 02/23/2023 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
The **Available quota** field shows the amount of unused space in the chosen capacity pool that you can use towards creating a new volume. The size of the new volume must not exceed the available quota.
+ * **Large Volume**
+ If the quota of your volume is 100 TiB or less, select **No**. If the quota of your volume is greater than 100 TiB, select **Yes**.
+ [!INCLUDE [Large volumes warning](includes/large-volumes-notice.md)]
+ * **Throughput (MiB/S)** If the volume is created in a manual QoS capacity pool, specify the throughput you want for the volume.
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md) * [Configure access control lists on NFSv4.1 with Azure NetApp Files](configure-access-control-lists.md) * [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
+* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 02/21/2023 Last updated : 02/23/2023 # Resource limits for Azure NetApp Files
The following table describes resource limits for Azure NetApp Files:
| Number of volumes per subscription | 500 | Yes | | Number of volumes per capacity pool | 500 | Yes | | Number of snapshots per volume | 255 | No |
-| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No |
+| Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No |
| Minimum size of a single capacity pool | 2 TiB* | No |
-| Maximum size of a single capacity pool | 500 TiB | No |
-| Minimum size of a single volume | 100 GiB | No |
-| Maximum size of a single volume | 100 TiB | No |
+| Maximum size of a single capacity pool | 500 TiB | Yes |
+| Minimum size of a single regular volume | 100 GiB | No |
+| Maximum size of a single regular volume | 100 TiB | No |
+| Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 102,401 GiB | No |
+| Maximum size of a single large volume | 500 TiB | No |
| Maximum size of a single file | 16 TiB | No | | Maximum size of directory metadata in a single directory | 320 MB | No | | Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No |
-| Maximum number of files ([`maxfiles`](#maxfiles)) per volume | 106,255,630 | Yes |
+| Maximum number of files [`maxfiles`](#maxfiles) per volume | 106,255,630 | Yes |
| Maximum number of export policy rules per volume | 5 | No |
+| Maximum number of quota rules per volume | 100 | Yes |
| Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No | | Maximum assigned throughput for a manual QoS volume | 4,500 MiB/s | No | | Number of cross-region replication data protection volumes (destination volumes) | 10 | Yes |
Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limi
The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 21,251,126. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
+**For volumes up to 100 TiB in size:**
+ | Volume size (quota) | Automatic readjustment of the `maxfiles` limit | |-|-|
-| <= 1 TiB | 21,251,126 |
+| <= 1 TiB | 21,251,126 |
| > 1 TiB but <= 2 TiB | 42,502,252 | | > 2 TiB but <= 3 TiB | 63,753,378 | | > 3 TiB but <= 4 TiB | 85,004,504 |
-| > 4 TiB | 106,255,630 |
+| > 4 TiB but <= 100 TiB | 106,255,630 |
>[!IMPORTANT] > If your volume has a volume size (quota) of more than 4 TiB and you want to increase the `maxfiles` limit, you must initiate [a support request](#request-limit-increase).
You can increase the `maxfiles` limit to 531,278,150 if your volume quota is at
>[!IMPORTANT] > Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, if you have crossed the 63,753,378 `maxfiles` limit, the volume quota cannot be reduced below its corresponding index of 2 TiB.
+**For [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes):**
+
+| Volume size (quota) | Automatic readjustment of the `maxfiles` limit |
+| - | - |
+| > 100 TiB | 2,550,135,120 |
+
+You can increase the `maxfiles` limit beyond 2,550,135,120 using a support request. For every 2,550,135,120 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 120 TiB. For example, if you increase `maxfiles` limit from 2,550,135,120 to 5,100,270,240 files (or any number in between), you need to increase the volume quota to at least 240 TiB.
+
+The maximum `maxfiles` value for a 500 TiB volume is 10,625,563,000 files.
+ You cannot set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens to a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](#request-limit-increase) for the volume. ## Request limit increase
You can create an Azure support request to increase the adjustable limits from t
## Next steps - [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
+- [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
- [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md) - [Regional capacity quota for Azure NetApp Files](regional-capacity-quota.md) - [Request region access for Azure NetApp Files](request-region-access.md)
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
na Previously updated : 02/21/2023 Last updated : 02/23/2023 # Storage hierarchy of Azure NetApp Files
Understanding how capacity pools work helps you select the right capacity pool t
### General rules of capacity pools - A capacity pool is measured by its provisioned capacity.
- For more information, see [QoS types](#qos_types).
+ For more information, see [QoS types](#qos_types).
- The capacity is provisioned by the fixed SKUs that you purchased (for example, a 4-TiB capacity). - A capacity pool can have only one service level. - Each capacity pool can belong to only one NetApp account. However, you can have multiple capacity pools within a NetApp account.
Understanding how capacity pools work helps you select the right capacity pool t
### <a name="qos_types"></a>Quality of Service (QoS) types for capacity pools
-The QoS type is an attribute of a capacity pool. Azure NetApp Files provides two QoS types of capacity pools--*auto (default)* and *manual*.
+The QoS type is an attribute of a capacity pool. Azure NetApp Files provides two QoS types of capacity pools: *auto (default)* and *manual*.
#### *Automatic (or auto)* QoS type
In a manual QoS capacity pool, you can assign the capacity and throughput for a
##### Example of using manual QoS
-When you use a manual QoS capacity pool with, for example, an SAP HANA system, an Oracle database, or other workloads requiring multiple volumes, the capacity pool can be used to create these application volumes. Each volume can provide the individual size and throughput to meet the application requirements. See [Throughput limit examples of volumes in a manual QoS capacity pool](azure-netapp-files-service-levels.md#throughput-limit-examples-of-volumes-in-a-manual-qos-capacity-pool) for details about the benefits.
+When you use a manual QoS capacity pool with, for example, an SAP HANA system, an Oracle database, or other workloads requiring multiple volumes, the capacity pool can be used to create these application volumes. Each volume can provide the individual size and throughput to meet the application requirements. See [Throughput limit examples of volumes in a manual QoS capacity pool](azure-netapp-files-service-levels.md#throughput-limit-examples-of-volumes-in-a-manual-qos-capacity-pool) for details about the benefits.
## <a name="volumes"></a>Volumes
When you use a manual QoS capacity pool with, for example, an SAP HANA system, a
- A volume's throughput consumption counts against its pool's available throughput. See [Manual QoS type](#manual-qos-type). - Each volume belongs to only one pool, but a pool can contain multiple volumes.
+- Volumes have a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 100 TiB and 500 TiB.
+
+## Large volumes
+
+Azure NetApp Files allows you to create volumes up to 500 TiB in size, exceeding the previous 100-TiB limit. Large volumes begin at a capacity of 102,401 GiB and scale up to 500 TiB. Regular Azure NetApp Files volumes are offered between 100 GiB and 102,400 GiB.
+
+For more information, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md).
## Next steps
When you use a manual QoS capacity pool with, for example, an SAP HANA system, a
- [Performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md) - [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md) - [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md)
+- [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
azure-netapp-files Backup Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md
na Previously updated : 12/12/2022 Last updated : 02/23/2023 # Requirements and considerations for Azure NetApp Files backup
This article describes the requirements and considerations you need to be aware
You need to be aware of several requirements and considerations before using Azure NetApp Files backup: * Azure NetApp Files backup is available in the regions associated with your Azure NetApp Files subscription.
-Azure NetApp Files backup in a region can only protect an Azure NetApp Files volume that is located in that same region. For example, backups created by the service in West US 2 for a volume located in West US 2 are sent to Azure storage that is located also in West US 2. Azure NetApp Files does not support backups or backup replication to a different region.
+Azure NetApp Files backup in a region can only protect an Azure NetApp Files volume located in that same region. For example, backups created by the service in West US 2 for a volume located in West US 2 are sent to Azure storage also located in West US 2. Azure NetApp Files doesn't support backups or backup replication to a different region.
* There can be a delay of up to 5 minutes in displaying a backup after the backup is actually completed.
-* For large volumes (greater than 10 TB), it can take multiple hours to transfer all the data from the backup media.
+* For volumes larger than 10 TB, it can take multiple hours to transfer all the data from the backup media.
-* Currently, the Azure NetApp Files backup feature supports backing up the daily, weekly, and monthly local snapshots created by the associated snapshot policy to the Azure storage. Hourly backups are not currently supported.
+* Currently, the Azure NetApp Files backup feature supports backing up the daily, weekly, and monthly local snapshots created by the associated snapshot policy to the Azure storage. Hourly backups aren't currently supported.
-* Azure NetApp Files backup uses the [Zone-Redundant storage](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) (ZRS) account that replicates the data synchronously across three Azure availability zones in the region, except for the regions listed below where only [Locally Redundant Storage](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) (LRS) storage is supported:
+* Azure NetApp Files backup uses the [Zone-Redundant storage](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) (ZRS) account that replicates the data synchronously across three Azure availability zones in the region, except for the following regions, where only [Locally Redundant Storage](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) (LRS) storage is supported:
* West US LRS can recover from server-rack and drive failures. However, if a disaster such as a fire or flooding occurs within the data center, all replicas of a storage account using LRS might be lost or unrecoverable. * Using policy-based (scheduled) Azure NetApp Files backup requires that snapshot policy is configured and enabled. See [Manage snapshots by using Azure NetApp Files](azure-netapp-files-manage-snapshots.md).
- The volume that needs to be backed up requires a configured snapshot policy for creating snapshots. The configured number of backups are stored in the Azure storage.
+ A snapshot policy must be configured for the volume that needs to be backed up; the policy creates the snapshots used for backup and sets the number of backups stored in Azure storage.
-* If an issue occurs (for example, no sufficient space left on the volume) and causes the snapshot policy to stop creating new snapshots, the backup feature will not have any new snapshots to back up.
+* If an issue occurs (for example, no sufficient space left on the volume) and causes the snapshot policy to stop creating new snapshots, the backup feature won't have any new snapshots to back up.
-* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. It is not supported on a cross-region replication *destination* volume.
+* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a cross-region replication *destination* volume.
-* [Reverting a volume using snapshot revert](snapshots-revert-volume.md) is not supported on Azure NetApp Files volumes that have backups.
+* [Reverting a volume using snapshot revert](snapshots-revert-volume.md) isn't supported on Azure NetApp Files volumes that have backups.
-* See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups.
+* See [Restore a backup to a new volume](backup-restore-new-volume.md) for other considerations related to restoring backups.
* [Disabling backups](backup-disable.md) for a volume will delete all the backups stored in the Azure storage for that volume. If you delete a volume, the backups will remain. If you no longer need the backups, you should [manually delete the backups](backup-delete.md).
-* If you need to delete a parent resource group or subscription that contains backups, you should delete any backups first. Deleting the resource group or subscription will not delete the backups. You can remove backups by [disabling backups](backup-disable.md) or [manually deleting the backups](backup-disable.md).
-
+* If you need to delete a parent resource group or subscription that contains backups, you should delete any backups first. Deleting the resource group or subscription won't delete the backups. You can remove backups by [disabling backups](backup-disable.md) or [manually deleting the backups](backup-delete.md).
## Next steps
azure-netapp-files Configure Ldap Over Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md
Title: Configure ADDS LDAP over TLS for Azure NetApp Files | Microsoft Docs
-description: Describes how to configure ADDS LDAP over TLS for Azure NetApp Files, including root CA certificate management.
+ Title: Configure AD DS LDAP over TLS for Azure NetApp Files | Microsoft Docs
+description: Describes how to configure AD DS LDAP over TLS for Azure NetApp Files, including root CA certificate management.
documentationcenter: ''
na Previously updated : 01/25/2023 Last updated : 02/23/2023
-# Configure ADDS LDAP over TLS for Azure NetApp Files
+# Configure AD DS LDAP over TLS for Azure NetApp Files
You can use LDAP over TLS to secure communication between an Azure NetApp Files volume and the Active Directory LDAP server. You can enable LDAP over TLS for NFS, SMB, and dual-protocol volumes of Azure NetApp Files. ## Considerations * DNS PTR records must exist for each AD DS domain controller assigned to the **AD Site Name** specified in the Azure NetApp Files Active Directory connection.
-* PTR records must exist for all domain controllers in the site for ADDS LDAP over TLS to function properly.
+* PTR records must exist for all domain controllers in the site for AD DS LDAP over TLS to function properly.
## Generate and export root CA certificate If you do not have a root CA certificate, you need to generate one and export it for use with LDAP over TLS authentication.
-1. Follow [Install the Certification Authority](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) to install and configure ADDS Certificate Authority.
+1. Follow [Install the Certification Authority](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) to install and configure AD DS Certificate Authority.
2. Follow [View certificates with the MMC snap-in](/dotnet/framework/wcf/feature-details/how-to-view-certificates-with-the-mmc-snap-in) to use the MMC snap-in and the Certificate Manager tool. Use the Certificate Manager snap-in to locate the root or issuing certificate for the local device. You should run the Certificate Management snap-in commands from one of the following settings:
If you do not have a root CA certificate, you need to generate one and export it
## Enable LDAP over TLS and upload root CA certificate
-1. Go to the NetApp account that is used for the volume, and click **Active Directory connections**. Then, click **Join** to create a new AD connection or **Edit** to edit an existing AD connection.
+1. Go to the NetApp account used for the volume, and select **Active Directory connections**. Then, select **Join** to create a new AD connection or **Edit** to edit an existing AD connection.
-2. In the **Join Active Directory** or **Edit Active Directory** window that appears, select the **LDAP over TLS** checkbox to enable LDAP over TLS for the volume. Then click **Server root CA Certificate** and upload the [generated root CA certificate](#generate-and-export-root-ca-certificate) to use for LDAP over TLS.
+2. In the **Join Active Directory** or **Edit Active Directory** window that appears, select the **LDAP over TLS** checkbox to enable LDAP over TLS for the volume. Then select **Server root CA Certificate** and upload the [generated root CA certificate](#generate-and-export-root-ca-certificate) to use for LDAP over TLS.
![Screenshot that shows the LDAP over TLS option](../media/azure-netapp-files/ldap-over-tls-option.png)
To resolve the error condition, upload a valid root CA certificate to your NetAp
Disabling LDAP over TLS stops encrypting LDAP queries to Active Directory (LDAP server). There are no other precautions or impact on existing ANF volumes.
-1. Go to the NetApp account that is used for the volume and click **Active Directory connections**. Then click **Edit** to edit the existing AD connection.
+1. Go to the NetApp account that is used for the volume and select **Active Directory connections**. Then select **Edit** to edit the existing AD connection.
-2. In the **Edit Active Directory** window that appears, deselect the **LDAP over TLS** checkbox and click **Save** to disable LDAP over TLS for the volume.
+2. In the **Edit Active Directory** window that appears, deselect the **LDAP over TLS** checkbox and select **Save** to disable LDAP over TLS for the volume.
## Next steps
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 12/8/2022 Last updated : 02/23/2023 # Create a dual-protocol volume for Azure NetApp Files
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
The **Available quota** field shows the amount of unused space in the chosen capacity pool that you can use towards creating a new volume. The size of the new volume must not exceed the available quota.
+ * **Large Volume**
+ If the quota of your volume is 100 TiB or less, select **No**. If the quota of your volume is greater than 100 TiB, select **Yes**.
+ [!INCLUDE [Large volumes warning](includes/large-volumes-notice.md)]
+ * **Throughput (MiB/S)** If the volume is created in a manual QoS capacity pool, specify the throughput you want for the volume.
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
## Next steps * [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)
+* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md)
* [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) * [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md) * [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md).
azure-netapp-files Cross Region Replication Create Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-create-peering.md
na Previously updated : 11/02/2021 Last updated : 02/23/2023 # Create volume replication for Azure NetApp Files
To authorize the replication, you need to obtain the resource ID of the replicat
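A hedged Azure CLI sketch for retrieving a volume's resource ID (resource names are illustrative):

```azurecli
az netappfiles volume show \
  --resource-group myResourceGroup \
  --account-name myNetAppAccount \
  --pool-name myCapacityPool \
  --name myDestinationVolume \
  --query id --output tsv
```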
* [Manage disaster recovery](cross-region-replication-manage-disaster-recovery.md) * [Delete volume replications or volumes](cross-region-replication-delete.md) * [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md)
+* [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md)
* [Manage Azure NetApp Files volume replication with the CLI](/cli/azure/netappfiles/volume/replication)
azure-netapp-files Cross Region Replication Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-delete.md
na Previously updated : 11/18/2020 Last updated : 01/17/2023 # Delete volume replications or volumes
If you want to delete the source or destination volume, you must perform the fol
* [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md) * [Display health status of replication relationship](cross-region-replication-display-health-status.md) * [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md)-
+* [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md)
azure-netapp-files Cross Region Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md
na Previously updated : 05/18/2022 Last updated : 02/23/2023
This article describes requirements and considerations about [using the volume cross-region replication](cross-region-replication-create-peering.md) functionality of Azure NetApp Files. - ## Requirements and considerations * Azure NetApp Files replication is only available in certain fixed region pairs. See [Supported region pairs](cross-region-replication-introduction.md#supported-region-pairs).
This article describes requirements and considerations about [using the volume c
* You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You cannot delete manual snapshots for the destination volume until the replication relationship is broken. * You can't revert a source or destination volume of cross-region replication to a snapshot. The snapshot revert functionality is greyed out for volumes in a replication relationship. -- ## Next steps * [Create volume replication](cross-region-replication-create-peering.md) * [Display health status of replication relationship](cross-region-replication-display-health-status.md)
azure-netapp-files Cross Zone Replication Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-requirements-considerations.md
na Previously updated : 12/16/2022 Last updated : 02/23/2023 # Requirements and considerations for using cross-zone replication
This article describes requirements and considerations about [using the volume c
## Requirements and considerations * The cross-zone replication feature uses the [availability zone volume placement feature](use-availability-zones.md) of Azure NetApp Files.
- * You can only use cross-zone replication in regions where the availability zone volume placement is supported. [!INCLUDE [Azure NetApp Files cross-zone-replication supported regions](includes/cross-zone-regions.md)]
-* To establish cross-zone replication, the source volume needs to be created in an availability zone.
+ * You can only use cross-zone replication in regions that support the availability zone volume placement. [!INCLUDE [Azure NetApp Files cross-zone-replication supported regions](includes/cross-zone-regions.md)]
+* To establish cross-zone replication, you must create the source volume in an availability zone.
* You can't use cross-zone replication and cross-region replication together on the same source volume.
-* SMB volumes are supported along with NFS volumes. Replication of SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or AD DS Domain Controllers that are reachable from the delegated subnet in the destination zone. For more information, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections).
+* You can use cross-zone replication with SMB and NFS volumes. Replication of SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or AD DS Domain Controllers that are reachable from the delegated subnet in the destination zone. For more information, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections).
* The destination account must be in a different zone from the source volume zone. You can also select an existing NetApp account in a different zone.
-* The replication destination volume is read-only until you fail over to the destination zone to enable the destination volume for read and write. For more information about the failover process, refer to [fail over to the destination volume](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume).
+* The replication destination volume is read-only until you fail over to the destination zone to enable the destination volume for read and write. For more information about the failover process, see [fail over to the destination volume](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume).
* Azure NetApp Files replication doesn't currently support multiple subscriptions; all replications must be performed under a single subscription. * See [resource limits](azure-netapp-files-resource-limits.md) for the maximum number of cross-zone destination volumes. You can open a support ticket to [request a limit increase](azure-netapp-files-resource-limits.md#request-limit-increase) in the default quota of replication destination volumes (per subscription in a region). * There can be a delay up to five minutes for the interface to reflect a newly added snapshot on the source volume.
-* Cascading and fan in/out topologies aren't supported.
-* Configuring volume replication for source volumes created from snapshot isn't supported at this time.
-* After you set up cross-zone replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete SnapMirror snapshots until replication relationship and volume is deleted.
+* Cross-zone replication does not support cascading and fan in/out topologies.
+* At this time, you can't configure cross-zone replication for source volumes that are created from a snapshot.
+* After you set up cross-zone replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete SnapMirror snapshots until you delete the replication relationship and volume.
* You cannot mount a dual-protocol volume until you [authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume) and the initial [transfer](cross-region-replication-display-health-status.md#display-replication-status) happens.
-* You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You cannot delete manual snapshots for the destination volume until the replication relationship is broken.
-* You can't revert a source or destination volume of cross-zone replication to a snapshot. The snapshot revert functionality is greyed out for volumes in a replication relationship.
+* You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after you've deleted the replication relationship. You cannot delete manual snapshots for the destination volume until you break the replication relationship.
+* You can't revert a source or destination volume of cross-zone replication to a snapshot. The snapshot revert functionality is unavailable for volumes in a replication relationship.
+* You can't currently use cross-zone replication with [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) (larger than 100 TiB).
## Next steps * [Understand cross-zone replication](cross-zone-replication-introduction.md)
azure-netapp-files Default Individual User Group Quotas Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/default-individual-user-group-quotas-introduction.md
+
+ Title: Understand default and individual user and group quotas for Azure NetApp Files volumes | Microsoft Docs
+description: Helps you understand the use cases of managing default and individual user and group quotas for Azure NetApp Files volumes.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 02/23/2023++
+# Understand default and individual user and group quotas
+
+User and group quotas enable you to restrict the logical space that a user or group can consume in a volume. User and group quotas apply to a specific Azure NetApp Files volume.
+
+## Introduction
+
+You can restrict user capacity consumption on Azure NetApp Files volumes by setting user and/or group quotas on volumes. User and group quotas differ from volume quotas in that they further restrict volume capacity consumption at the user and group level.
+
+To set a [volume quota](volume-quota-introduction.md), you can use the Azure portal or the Azure NetApp Files API to specify the maximum storage capacity for a volume. Once you set the volume quota, it defines the size of the volume, and there's no restriction on how much capacity any user can consume.
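+
+As a minimal sketch, assuming the `Az.NetAppFiles` PowerShell module and that `Update-AzNetAppFilesVolume` accepts a `-UsageThreshold` value in bytes (verify against the module reference for your installed version), you could adjust a volume quota as follows; the resource names are placeholders:
+
+```azurepowershell-interactive
+# Hypothetical resource names; -UsageThreshold is in bytes (the 2TB literal expands to 2 TiB).
+Update-AzNetAppFilesVolume -ResourceGroupName "myRG" `
+    -AccountName "myNetAppAccount" -PoolName "myPool" -Name "myVolume" `
+    -UsageThreshold 2TB
+```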
+
+To restrict users' capacity consumption, you can set a user and/or group quota. You can set default and/or individual quotas. Once you set user or group quotas, users can't store more data in the volume than the specified user or group quota limit.
+
+By combining volume and user quotas, you can ensure that storage capacity is distributed efficiently and prevent any single user, or group of users, from consuming excessive amounts of storage.
+
+To understand considerations and manage user and group quotas for Azure NetApp Files volumes, see [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md).
+
+## Behavior of default and individual user and group quotas
+
+This section describes the behavior of user and group quotas.
+
+The following concepts and behavioral aspects apply to user and group quotas:
+* The volume capacity that can be consumed can be restricted at the user and/or group level.
+ * User quotas are available for SMB, NFS, and dual-protocol volumes.
+ * Group quotas are **not** supported on SMB and dual-protocol volumes.
+* When a user or group consumption reaches the maximum configured quota, further space consumption is prohibited.
+* Individual user quota takes precedence over default user quota.
+* Individual group quota takes precedence over default group quota.
+* If you set group quota and user quota, the most restrictive quota is the effective quota.
+
+The following subsections describe and depict the behavior of the various quota types.
+
+### Default user quota
+
+A default user quota automatically applies a quota limit to *all users* accessing the volume without creating separate quotas for each target user. Each user can only consume the amount of storage as defined by the default user quota setting. No single user can exhaust the volume's capacity, as long as the default user quota is less than the volume quota. The following diagram depicts this behavior.
++
+### Individual user quota
+
+An individual user quota applies a quota to an *individual target user* accessing the volume. You can specify the target user by a UNIX user ID (UID) or a Windows security identifier (SID), depending on volume protocol (NFS or SMB). You can define multiple individual user quota settings on a volume. Each user can only consume the amount of storage defined by their individual user quota setting. No single user can exhaust the volume's capacity, as long as the individual user quota is less than the volume quota. Individual user quotas override a default user quota, where applicable. The following diagram depicts this behavior.
++
+### Combining default and individual user quotas
+
+By combining default and individual user quota settings, you can create quota exceptions for specific users, allowing those users less or more capacity than the default user quota setting. In the following example, individual user quotas are set for `user1`, `user2`, and `user3`. Any other user is subject to the default user quota setting. The individual quota settings can be smaller or larger than the default user quota setting. The following diagram depicts this behavior.
++
+### Default group quota
+
+A default group quota automatically applies a quota limit to *all users within all groups* accessing the volume without creating separate quotas for each target group. The total consumption for all users in any group can't exceed the group quota limit. Group quotas aren't applicable to SMB and dual-protocol volumes. A single user can potentially consume the entire group quota. The following diagram depicts this behavior.
++
+### Individual group quota
+
+An individual group quota applies a quota to *all users within an individual target group* accessing the volume. The total consumption for all users *in that group* can't exceed the group quota limit. Group quotas aren't applicable to SMB and dual-protocol volumes. You specify the group by a UNIX group ID (GID). Individual group quotas override default group quotas where applicable. The following diagram depicts this behavior.
++
+### Combining individual and default group quota
+
+By combining default and individual group quota settings, you can create quota exceptions for specific groups, allowing those groups less or more capacity than the default group quota setting. Group quotas aren't applicable to SMB and dual-protocol volumes. In the following example, individual group quotas are set for `group1` and `group2`. Any other group is subject to the default group quota setting. The individual group quota settings can be smaller or larger than the default group quota setting. The following diagram depicts this scenario.
++
+### Combining default and individual user and group quotas
+
+You can combine the previously described quota options to achieve very specific quota definitions. Optionally start by defining a default group quota, followed by individual group quotas that match your requirements. You can then further tighten individual user consumption by (optionally) defining a default user quota, followed by individual user quotas that match individual user requirements. Group quotas aren't applicable to SMB and dual-protocol volumes. In the following example, a default group quota has been set as well as individual group quotas for `group1` and `group2`. Furthermore, a default user quota has been set as well as individual user quotas for `user1`, `user2`, `user3`, `user5`, and `userZ`. The following diagram depicts this scenario.
++
+## Observing user quota settings and consumption
+
+Users can observe user quota settings and consumption from their client systems connected to NFS, SMB, or dual-protocol volumes. Azure NetApp Files currently doesn't support explicit reporting of group quota settings and consumption. The following sections describe how users can view their user quota setting and consumption.
+
+### Windows client
+
+Windows users can observe their user quota and consumption in Windows Explorer and by running the `dir` command; a command-line example follows the screenshots. Assume a scenario where a 2-TiB volume has been configured with a 100-MiB default or individual user quota. On the client, this scenario is represented as follows:
+
+* Administrator view:
+
+ :::image type="content" source="../media/azure-netapp-files/user-quota-administrator-view.png" alt-text="Screenshot showing administrator view of user quota and consumption.":::
+
+* User view:
+
+ :::image type="content" source="../media/azure-netapp-files/user-quota-user-view.png" alt-text="Screenshot showing user view of user quota and consumption.":::
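+
+To check the same values from a command line, you can map the SMB volume and look at the free space reported for your account. The following sketch uses a hypothetical UNC path; with a 100-MiB user quota on a 2-TiB volume, the reported free bytes reflect the remaining user quota rather than the volume size:
+
+```powershell
+# Map the Azure NetApp Files SMB volume (hypothetical UNC path) to drive Z:
+net use Z: \\anf-1234.contoso.com\myvolume
+
+# The "bytes free" value at the end of the listing reflects the user's
+# remaining quota (up to 100 MiB in this scenario), not the 2-TiB volume size.
+dir Z:\
+```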
+
+### Linux client
+
+Linux users can observe their *user* quota and consumption by using the [`quota(1)`](https://man7.org/linux/man-pages/man1/quota.1.html) command. Assume a scenario where a 2-TiB volume with a 100-MiB default or individual user quota has been configured. On the client, this scenario is represented as follows:
++
+Azure NetApp Files currently doesn't support group quota reporting. However, you know you've reached your group's quota limit when you receive a `Disk quota exceeded` error when writing to the volume even though you haven't reached your user quota yet.
+
+In the following scenario, users `user4` and `user5` are members of `group2`. The group `group2` has a 200-MiB default or individual group quota assigned. The volume is already populated with 150 MiB of data owned by user `user4`. User `user5` appears to have a 100-MiB quota available as reported by the `quota(1)` command, but `user5` can't consume more than 50 MiB due to the remaining group quota for `group2`. User `user5` receives a `Disk quota exceeded` error message after writing 50 MiB, despite not reaching the user quota.
++
+> [!IMPORTANT]
+> For quota reporting to work, the client needs access to port 4049/UDP on the Azure NetApp Files volumes' storage endpoint. When using NSGs with standard network features on the Azure NetApp Files delegated subnet, make sure that access is enabled.
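+
+As an example, the following Azure PowerShell sketch adds an inbound allow rule for UDP 4049 to an NSG that's associated with the delegated subnet. The resource group, NSG name, priority, and address prefix are placeholders; adjust them (and the rule direction, if your NSG sits on the client subnet instead) to match your environment:
+
+```azurepowershell-interactive
+# Hypothetical resource names and delegated-subnet prefix (10.0.2.0/28).
+$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "myRG" -Name "anf-subnet-nsg"
+Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-ANF-QuotaReporting" `
+    -Access Allow -Direction Inbound -Priority 200 -Protocol Udp `
+    -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
+    -DestinationAddressPrefix "10.0.2.0/28" -DestinationPortRange "4049"
+# Push the updated rule set to the NSG.
+Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg
+```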
+
+## Next steps
+
+* [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md)
+* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
+* [Security identifiers](/windows-server/identity/ad-ds/manage/understand-security-identifiers)
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
+
+ Title: Requirements and considerations for large volumes | Microsoft Docs
+description: Describes the requirements and considerations you need to be aware of before using large volumes.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
+++
+ na
+ Last updated : 02/23/2023++
+# Requirements and considerations for large volumes (preview)
+
+This article describes the requirements and considerations you need to be aware of before using [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) on Azure NetApp Files.
+
+## Register the feature
+
+The large volumes feature for Azure NetApp Files is currently in public preview. This preview is offered under the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) and is controlled via Azure Feature Exposure Control (AFEC) settings on a per subscription basis.
+
+To enroll in the preview for large volumes, use the [large volumes preview sign-up form](https://aka.ms/anflargevolumespreviewsignup).
+
+## Requirements and considerations
+
+* Existing regular volumes can't be resized over 100 TiB. You can't convert regular Azure NetApp Files volumes to large volumes.
+* You must create a large volume at a size greater than 100 TiB. A single volume can't exceed 500 TiB.
+* You can't resize a large volume to less than 100 TiB. You can only resize a large volume up to 30% of its lowest provisioned size.
+* Large volumes are currently not supported with Azure NetApp Files backup.
+* Large volumes are not currently supported with cross-region replication.
+* You can't create a large volume with application volume groups.
+* Large volumes aren't currently supported with cross-zone replication.
+* The SDK for large volumes isn't currently available.
+* Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You'll be able to grow to 500 TiB with the throughput ceiling as per the table below.
+
+| Capacity tier | Volume size (TiB) | Throughput (MiB/s) |
+| | | |
+| Standard | 100 to 500 | 1,600 |
+| Premium | 100 to 500 | 6,400 |
+| Ultra | 100 to 500 | 10,240 |
+
+## Supported regions
+
+Support for Azure NetApp Files large volumes is available in the following regions:
+
+* Australia East
+* Australia Southeast
+* Brazil South
+* Canada Central
+* Central US
+* East US
+* East US 2
+* Germany West Central
+* Japan East
+* North Central US
+* North Europe
+* South Central US
+* Switzerland North
+* UAE North
+* UK West
+* UK South
+* West Europe
+* West US
+* West US 2
+* West US 3
+
+## Configure large volumes
+
+>[!IMPORTANT]
+>Before you can use large volumes, you must first request [an increase in regional capacity quota](azure-netapp-files-resource-limits.md#request-limit-increase).
+
+Once your [regional capacity quota](regional-capacity-quota.md) has increased, you can create volumes that are up to 500 TiB in size. When creating a volume, after you designate the volume quota, you must select **Yes** for the **Large volume** field. Once created, you can manage your large volumes in the same manner as regular volumes.
+
+## Next steps
+
+* [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)
+* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
+* [Create an NFS volume](azure-netapp-files-create-volumes.md)
+* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md)
+* [Create a dual-protocol volume](create-volumes-dual-protocol.md)
azure-netapp-files Manage Default Individual User Group Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md
+
+ Title: Manage default and individual user and group quotas for Azure NetApp Files volumes | Microsoft Docs
+description: Describes the considerations and steps for managing user and group quotas for Azure NetApp Files volumes.
++++++ Last updated : 02/23/2023+
+# Manage default and individual user and group quotas for a volume
+
+This article explains the considerations and steps for managing user and group quotas on Azure NetApp Files volumes. To understand the use cases for this feature, see [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md).
+
+## Quotas in cross-region replication relationships
+
+Quota rules are synced from cross-region replication (CRR) source to destination volumes. Quota rules that you create, delete, or update on a CRR source volume automatically apply to the CRR destination volume.
+
+Quota rules only come into effect on the CRR destination volume after the replication relationship is deleted because the destination volume is read-only. To learn how to break the replication relationship, see [Delete volume replications](cross-region-replication-delete.md#delete-volume-replications). If source volumes have quota rules and you create the CRR destination volume at the same time as the source volume, all the quota rules are created on destination volume.
+
+## Considerations
+
+* A quota rule is specific to a volume and is applied to an existing volume.
+* Deleting a volume results in deleting all the associated quota rules for that volume.
+* You can create a maximum number of 100 quota rules for a volume. You can [request limit increase](azure-netapp-files-resource-limits.md#request-limit-increase) through the portal.
+* Azure NetApp Files doesn't support individual group quota and default group quota for SMB and dual protocol volumes.
+* Group quotas track the consumption of disk space for files owned by a particular group. A file can only be owned by exactly one group.
+* Auxiliary groups only help in permission checks. You can't use auxiliary groups to restrict the quota (disk space) for a file.
+* In a cross-region replication setting:
+ * Currently, Azure NetApp Files doesn't support syncing quota rules to the destination (data protection) volume.
+ * You can't create quota rules on the destination volume until you [delete the replication](cross-region-replication-delete.md).
+ * You need to manually create quota rules on the destination volume if you want them for the volume, and you can do so only after you delete the replication.
+ * If a quota rule is in the error state after you delete the replication relationship, you need to delete and re-create the quota rule on the destination volume.
+ * During sync or reverse resync operations:
+ * If you create, update, or delete a rule on a source volume, you must perform the same operation on the destination volume.
+ * If you create, update, or delete a rule on a destination volume after the deletion of the replication relationship, the rule will be reverted to keep the source and destination volumes in sync.
+* If you're using [large volumes](large-volumes-requirements-considerations.md) (volumes larger than 100 TiB):
+ * The space and file usage in a large volume might exceed the configured hard limit by as much as five percent before the quota limit is enforced and traffic is rejected.
+ * To provide optimal performance, the space consumption may exceed the configured hard limit before the quota is enforced. The additional space consumption won't exceed the lower of 1 GB or five percent of the configured hard limit.
+ * After reaching the quota limit, if a user or administrator deletes files or directories to reduce quota usage under the limit, subsequent quota-consuming file operations may resume with a delay of up to five seconds.
+
+## Register the feature
+
+The feature to manage user and group quotas is currently in preview. Before using this feature for the first time, you need to register it.
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEnableVolumeUserGroupQuota
+ ```
+
+2. Check the status of the feature registration:
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEnableVolumeUserGroupQuota
+ ```
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
+
+## Create new quota rules
+
+1. From the Azure portal, navigate to the volume for which you want to create a quota rule. Select **User and group quotas** in the navigation pane, then click **Add** to create a quota rule for a volume.
+
+ ![Screenshot that shows the New Quota window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-new-quota.png)
+
+2. In the **New quota** window that appears, provide information for the following fields, then click **Create**.
+
+ * **Quota rule name**:
+ The name must be unique within the volume.
+
+ * **Quota type**:
+ Select one of the following options. For details, see [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md).
+ * `Default user quota`
+ * `Default group quota`
+ * `Individual user quota`
+ * `Individual group quota`
+
+ * **Quota target**:
+ * NFS volumes:
+ For individual user quota and individual group quota, specify a value in the range of `0` to `4294967295`.
+ For default quota, specify the value as `""`.
+ * SMB volumes:
+ For individual user quota, specify the value as a Windows security identifier (SID) in the `^S-1-[0-59]-\d{2}-\d{8,10}-\d{8,10}-\d{8,10}-[1-9]\d{3}` format (see the lookup example after these steps).
+ * Dual-protocol volumes:
+ For individual user quota using the SMB protocol, specify the value as a Windows SID in the `^S-1-[0-59]-\d{2}-\d{8,10}-\d{8,10}-\d{8,10}-[1-9]\d{3}` format.
+ For individual user quota using the NFS protocol, specify a value in the range of `0` to `4294967295`.
+
+ * **Quota limit**:
+ Specify the limit in the range of `4` to `1125899906842620`.
+ Select `KiB`, `MiB`, `GiB`, or `TiB` from the pulldown.
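+
+If you need to look up a quota target, the following sketch shows two common ways to find a Windows SID; the account name is hypothetical, and `Get-ADUser` assumes the Active Directory PowerShell module is installed. On Linux NFS clients, the numeric UID or GID is typically shown by the `id` command.
+
+```powershell
+# SID of a domain account (quota target for SMB or dual-protocol volumes);
+# "user1" is a hypothetical account name.
+(Get-ADUser -Identity "user1").SID.Value
+
+# SID of the currently signed-in user
+whoami /user
+```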
+
+## Edit or delete quota rules
+
+1. On the Azure portal, navigate to the volume whose quota rule you want to edit or delete. Select `…` at the end of the quota rule row, then select **Edit** or **Delete** as appropriate.
+
+ ![Screenshot that shows the Edit and Delete options of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-delete-edit.png)
+
+ 1. If you're editing a quota rule, update **Quota Limit** in the Edit User Quota Rule window that appears.
+
+ ![Screenshot that shows the Edit User Quota Rule window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-edit-rule.png)
+
+ 1. If you're deleting a quota rule, confirm the deletion by selecting **Yes**.
+
+ ![Screenshot that shows the Confirm Delete window of Users and Group Quotas.](../media/azure-netapp-files/user-group-quotas-confirm-delete.png)
+
+## Next steps
+* [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md)
+* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
azure-netapp-files Volume Hard Quota Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-hard-quota-guidelines.md
# What changing to volume hard quota means for your Azure NetApp Files service
-From the beginning of the service, Azure NetApp Files has been using a capacity-pool provisioning and automatic growth mechanism. Azure NetApp Files volumes are thinly provisioned on an underlaying, customer-provisioned capacity pool of a selected tier and size. Volume sizes (quotas) are used to provide performance and capacity, and the quotas can be adjusted on-the-fly at any time. This behavior means that, currently, the volume quota is a performance lever used to control bandwidth to the volume. Currently, underlaying capacity pools automatically grow when the capacity fills up.
+From the beginning of the service, Azure NetApp Files has been using a capacity-pool provisioning and automatic growth mechanism. Azure NetApp Files volumes are thinly provisioned on an underlying, customer-provisioned capacity pool of a selected tier and size. Volume sizes (quotas) are used to provide performance and capacity, and the quotas can be adjusted on-the-fly at any time. This behavior means that, currently, the volume quota is a performance lever used to control bandwidth to the volume. Currently, underlying capacity pools automatically grow when the capacity fills up.
> [!IMPORTANT] > The Azure NetApp Files behavior of volume and capacity pool provisioning will change to a *manual* and *controllable* mechanism. **Starting from April 30, 2021 (updated), volume sizes (quota) will manage bandwidth performance, as well as provisioned capacity, and underlying capacity pools will no longer grow automatically.**
Because of the volume hard quota change, you should change your operating model.
The volume hard quota change will result in changes in provisioned and available capacity for previously provisioned volumes and pools. As a result, some capacity allocation challenges might happen. To avoid short-term out-of-space situations for customers, the Azure NetApp Files team recommends the following, one-time corrective/preventative measures: * **Provisioned volume sizes**:
- Resize every provisioned volume to have appropriate buffer based on change rate and alerting or resize turnaround time (for example, 20% based on typical workload considerations), with a maximum of 100 TiB (which is the [volume size limit](azure-netapp-files-resource-limits.md#resource-limits)). This new volume size, including buffer capacity, should be based on the following factors:
+ Resize every provisioned volume to have appropriate buffer based on change rate and alerting or resize turnaround time (for example, 20% based on typical workload considerations), with a maximum of 100 TiB (which is the regular [volume size limit](azure-netapp-files-resource-limits.md#resource-limits)). This new volume size, including buffer capacity, should be based on the following factors:
* **Provisioned** volume capacity, in case the used capacity is less than the provisioned volume quota. * **Used** volume capacity, in case the used capacity is more than the provisioned volume quota. There is no additional charge for volume-level capacity increase if the underlying capacity pool does not need to be grown. As an effect of this change, you might observe a bandwidth limit *increase* for the volume (in case the [auto QoS capacity pool type](azure-netapp-files-understand-storage-hierarchy.md#qos_types) is used).
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 02/21/2023 Last updated : 02/23/2023 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
## February 2023
+* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) (Preview)
+
+ Azure NetApp Files volumes provide flexible, large, and scalable storage shares for applications and users. Storage capacity and consumption by users is only limited by the size of the volume. In some scenarios, you may want to limit the storage consumption of users and groups within the volume. With Azure NetApp Files volume user and group quotas, you can now do so. User and/or group quotas enable you to restrict the storage space that a user or group can use within a specific Azure NetApp Files volume. You can choose to set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can set default (same for all groups) or individual group quotas.
+
+* [Large volumes](large-volumes-requirements-considerations.md) (Preview)
+
+ Regular Azure NetApp Files volumes are limited to 100 TiB in size. Azure NetApp Files [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) break this barrier by enabling volumes of 100 TiB to 500 TiB in size. The large volumes capability enables a variety of use cases and workloads that require large volumes with a single directory namespace.
+
* [Customer-managed keys](configure-customer-managed-keys.md) (Preview) Azure NetApp Files volumes now support encryption with customer-managed keys and Azure Key Vault to enable an extra layer of security for data at rest.
cognitive-services Get Started Intent Recognition Clu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-intent-recognition-clu.md
+
+ Title: "Intent recognition with CLU quickstart - Speech service"
+
+description: In this quickstart, you recognize intents from audio data with the Speech service and Language service.
++++++ Last updated : 02/22/2023+
+zone_pivot_groups: programming-languages-set-thirteen
+keywords: intent recognition
++
+# Quickstart: Recognize intents with Conversational Language Understanding
++++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about speech recognition](how-to-recognize-speech.md)
cognitive-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md
Previously updated : 01/08/2022 Last updated : 02/22/2023 ms.devlang: cpp, csharp, java, javascript, python
keywords: intent recognition
# Quickstart: Recognize intents with the Speech service and LUIS
+> [!IMPORTANT]
+> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](/azure/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis) to [conversational language understanding](/azure/cognitive-services/language-service/conversational-language-understanding/overview) to benefit from continued product support and multilingual capabilities.
+>
+> Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
+ ::: zone pivot="programming-language-csharp" [!INCLUDE [C# include](includes/quickstarts/intent-recognition/csharp.md)] ::: zone-end
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
Pronunciation assessment results for the spoken word "hello" are shown as a JSON
} ```
+## Pronunciation assessment in streaming mode
+
+Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited when you use the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish, and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, and `CompletenessScore` will vary over time throughout the recording and evaluation process.
++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#:~:text=PronunciationAssessmentWithStream).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#:~:text=PronunciationAssessmentWithStream).
+++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/android/sdkdemo/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdkdemo/MainActivity.java#L548).
++++++
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessment.js).
++++++++ ## Next steps
+- Learn our quality [benchmark](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/speech-service-update-hierarchical-transformer-for-pronunciation/ba-p/3740866)
- Try out [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md)-- Try out the [pronunciation assessment demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.
+- Check out the easy-to-deploy pronunciation assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.
cognitive-services How To Use Custom Entity Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-custom-entity-pattern-matching.md
Last updated 11/15/2021 zone_pivot_groups: programming-languages-set-thirteen
In this guide, you use the Speech SDK to develop a console application that deri
## When to use pattern matching
-Use this sample code if:
-* You're only interested in matching strictly what the user said. These patterns match more aggressively than LUIS.
-* You don't have access to a [LUIS](../LUIS/index.yml) app, but still want intents.
-* You can't or don't want to create a [LUIS](../LUIS/index.yml) app but you still want some voice-commanding capability.
+Use pattern matching if:
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview).
+* You don't have access to a CLU model, but still want intents.
For more information, see the [pattern matching overview](./pattern-matching-overview.md).
cognitive-services How To Use Simple Language Pattern Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-simple-language-pattern-matching.md
In this guide, you use the Speech SDK to develop a C++ console application that
## When to use pattern matching
-Use this sample code if:
-* You're only interested in matching strictly what the user said. These patterns match more aggressively than LUIS.
-* You don't have access to a [LUIS](../LUIS/index.yml) app, but still want intents.
-* You can't or don't want to create a [LUIS](../LUIS/index.yml) app but you still want some voice-commanding capability.
+Use pattern matching if:
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview).
+* You don't have access to a CLU model, but still want intents.
For more information, see the [pattern matching overview](./pattern-matching-overview.md).
cognitive-services Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/intent-recognition.md
Previously updated : 10/13/2020 Last updated : 02/22/2023 keywords: intent recognition
keywords: intent recognition
In this overview, you will learn about the benefits and capabilities of intent recognition. The Cognitive Services Speech SDK provides two ways to recognize intents, both described below. An intent is something the user wants to do: book a flight, check the weather, or make a call. Using intent recognition, your applications, tools, and devices can determine what the user wants to initiate or do based on options you define in the Intent Recognizer or LUIS. ## Pattern matching
-The SDK provides an embedded pattern matcher that you can use to recognize intents in a very strict way. This is useful for when you need a quick offline solution. This works especially well when the user is going to be trained in some way or can be expected to use specific phrases to trigger intents. For example: "Go to floor seven", or "Turn on the lamp" etc. It is recommended to start here and if it no longer meets your needs, switch to using LUIS or a combination of the two.
-## LUIS (Language Understanding Intent Service)
-The Microsoft LUIS service is available as a complete AI intent service that works well when your domain of possible intents is large and you are not really sure what the user will say. It supports many complex scenarios, intents, and entities.
+The Speech SDK provides an embedded pattern matcher that you can use to recognize intents in a very strict way. This is useful when you need a quick offline solution. It works especially well when the user has been trained in some way or can be expected to use specific phrases to trigger intents, for example: "Go to floor seven" or "Turn on the lamp". We recommend starting here and, if pattern matching no longer meets your needs, switching to conversational language understanding (CLU) or a combination of the two.
-### LUIS key required
+Use pattern matching if:
+* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview).
+* You don't have access to a CLU model, but still want intents.
-* LUIS integrates with the Speech service to recognize intents from speech. You don't need a Speech service subscription, just LUIS.
-* Speech intent recognition is integrated with the Speech SDK. You can use a LUIS key with the Speech service.
-* Intent recognition through the Speech SDK is [offered in a subset of regions supported by LUIS](./regions.md#intent-recognition).
+For more information, see the [pattern matching concepts](./pattern-matching-overview.md) and then:
+* Start with [simple pattern matching](how-to-use-simple-language-pattern-matching.md).
+* Improve your pattern matching by using [custom entities](how-to-use-custom-entity-pattern-matching.md).
-## Get started
-See this [how-to](how-to-use-simple-language-pattern-matching.md) to get started with pattern matching.
+## Conversational Language Understanding
-See this [quickstart](get-started-intent-recognition.md) to get started with LUIS intent recognition.
+Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it.
-## Sample code
+Both a Speech resource and Language resource are required to use CLU with the Speech SDK. The Speech resource is used to transcribe the user's speech into text, and the Language resource is used to recognize the intent of the utterance. To get started, see the [quickstart](get-started-intent-recognition-clu.md).
-Sample code for intent recognition:
+> [!IMPORTANT]
+> When you use conversational language understanding with the Speech SDK, you are charged both for the Speech-to-text recognition request and the Language service request for CLU. For more information about pricing for conversational language understanding, see [Language service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/).
-* [Quickstart: Use prebuilt Home automation app](../luis/luis-get-started-create-app.md)
-* [Recognize intents from speech using the Speech SDK for C#](./how-to-recognize-intents-from-speech-csharp.md)
-* [Intent recognition and other Speech services using Unity in C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/unity/speechrecognizer)
-* [Recognize intents using Speech SDK for Python](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/python/console)
-* [Intent recognition and other Speech services using the Speech SDK for C++ on Windows](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/cpp/windows/console)
-* [Intent recognition and other Speech services using the Speech SDK for Java on Windows or Linux](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/jre/console)
-* [Intent recognition and other Speech services using the Speech SDK for JavaScript on a web browser](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser)
+For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](/azure/cognitive-services/language-service/conversational-language-understanding/overview).
-## Reference docs
-
-* [Speech SDK](./speech-sdk.md)
+> [!IMPORTANT]
+> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](/azure/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis) to [conversational language understanding](/azure/cognitive-services/language-service/conversational-language-understanding/overview) to benefit from continued product support and multilingual capabilities.
+>
+> Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU.
## Next steps
-* [Intent recognition quickstart](get-started-intent-recognition.md)
-* [Get the Speech SDK](speech-sdk.md)
+* [Intent recognition with simple pattern matching](how-to-use-simple-language-pattern-matching.md)
+* [Intent recognition with CLU quickstart](get-started-intent-recognition-clu.md)
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
Pronunciation assessment uses the Speech-to-Text capability to provide subjectiv
Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input. - At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many words are pronounced in the speech to the reference text input. An overall score aggregated from Accuracy, Fluency and Completeness is then given to indicate the overall pronunciation quality of the given speech. - At the word-level, pronunciation assessment can automatically detect miscues and provide accuracy score simultaneously, which provides more detailed information on omission, repetition, insertions, and mispronunciation in the given speech.-- Syllable-level accuracy scores are currently only available via the [JSON file](?tabs=json#scores-within-words) or [Speech SDK](how-to-pronunciation-assessment.md).
+- Syllable-level accuracy scores are currently available via the [JSON file](?tabs=json#pronunciation-assessment-results) or [Speech SDK](how-to-pronunciation-assessment.md).
- At the phoneme level, pronunciation assessment provides accuracy scores of each phoneme, helping learners to better understand the pronunciation details of their speech. This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
Follow these steps to assess your pronunciation of the reference text:
:::image type="content" source="media/pronunciation-assessment/pa-upload.png" alt-text="Screenshot of uploading recorded audio to be assessed."::: - ## Pronunciation assessment results Once you've recorded the reference text or uploaded the recorded audio, the **Assessment result** will be output. The result includes your spoken audio and the feedback on the accuracy and fluency of spoken audio, by comparing a machine generated transcript of the input audio with the reference text. You can listen to your spoken audio, and download it if necessary. You can also check the pronunciation assessment result in JSON. The word-level, syllable-level, and phoneme-level accuracy scores are included in the JSON file.
-### Overall scores
-
-Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score**. The **Accuracy score** and the **Fluency score** will vary over time throughout the recording process. The **Completeness score** is only calculated at the end of the evaluation. The **Pronunciation score** is overall score indicating the pronunciation quality of the given speech. During recording, the **Pronunciation score** is aggregated from **Accuracy score** and **Fluency score** with weight. Once completing recording, this overall score is aggregated from **Accuracy score**, **Fluency score**, and **Completeness score** with weight.
-
-**During recording**
--
-**Completing recording**
--
-### Scores within words
- ### [Display](#tab/display) The complete transcription is shown in the **Display** window. If a word is omitted, inserted, or mispronounced compared to the reference text, the word will be highlighted according to the error type. While hovering over each word, you can see accuracy scores for the whole word or specific phonemes.
The complete transcription is shown in the `text` attribute. You can see accurac
+### Assessment scores in streaming mode
+
+Pronunciation Assessment supports uninterrupted streaming mode. The Speech Studio demo supports up to 60 minutes of recording in streaming mode for evaluation. As long as you don't press the stop recording button, the evaluation process doesn't finish, and you can pause and resume evaluation conveniently.
+
+Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see the **Pronunciation score** as an aggregated overall score that includes three subscores: **Accuracy score**, **Fluency score**, and **Completeness score**. In streaming mode, because the **Accuracy score**, **Fluency score**, and **Completeness score** vary over time throughout the recording process, Speech Studio displays an approximate overall score incrementally before the end of the evaluation, weighted only by the **Accuracy score** and **Fluency score**. The **Completeness score** is only calculated at the end of the evaluation, after you press the stop button, so the final overall score is aggregated from the **Accuracy score**, **Fluency score**, and **Completeness score** with weighting.
+Refer to the demo examples below for the whole process of evaluating pronunciation in streaming mode.
+
+**Start recording**
+
+As you start recording, the scores at the bottom begin to change from 0.
++
+**During recording**
+
+While recording a long paragraph, you can pause recording at any time. Evaluation of the recording continues as long as you don't press the stop button.
++
+**Finish recording**
+
+After you press the stop button, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score** at the bottom.
+ ## Next steps
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
A highly-natural custom neural voice depends on several factors, like the qualit
The quality of your training data is a primary factor. For example, in the same training set, consistent volume, speaking rate, speaking pitch, and speaking style are essential to create a high-quality custom neural voice. You should also avoid background noise in the recording and make sure the script and recording match. To ensure the quality of your data, you need to follow [script selection criteria](#script-selection-criteria) and [recording requirements](#recording-your-script).
-Regarding the size of the training data, in most cases you can build a reasonable custom neural voice with 500 utterances. According to our tests, adding more training data in most languages does not necessarily improve naturalness of the voice itself (tested using the MOS score), however, with more training data that covers more word instances, you have higher possibility to reduce the DSAT (dis-satisfied part of the speech, for example, the glitches) ratio for the voice.
+Regarding the size of the training data, in most cases you can build a reasonable custom neural voice with 500 utterances. According to our tests, adding more training data in most languages doesn't necessarily improve the naturalness of the voice itself (as tested by using the MOS score). However, with more training data that covers more word instances, you have a higher chance of reducing the ratio of dissatisfactory parts of speech for the voice, such as glitches. To hear what dissatisfactory parts of speech sound like, refer to [the GitHub examples](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/DSAT-examples.md).
In some cases, you may want a voice persona with unique characteristics. For example, a cartoon persona needs a voice with a special speaking style, or a voice that is very dynamic in intonation. For such cases, we recommend that you prepare at least 1000 (preferably 2000) utterances, and record them at a professional recording studio. To learn more about how to improve the quality of your voice model, see [characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context).
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
Previously updated : 04/22/2022 Last updated : 02/17/2023
This article contains a quick reference and a detailed description of the quotas and limits for the Speech service in Azure Cognitive Services. The information applies to all [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) of the service. It also contains some best practices to avoid request throttling.
+For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+ ## Quotas and limits reference
-The following sections provide you with a quick guide to the quotas and limits that apply to Speech service.
+The following sections provide you with a quick guide to the quotas and limits that apply to the Speech service.
+
+For information about adjustable quotas for Standard (S0) Speech resources, see [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). The quotas and limits for Free (F0) Speech resources aren't adjustable.
### Speech-to-text quotas and limits per resource
-In the following tables, the parameters without the **Adjustable** row aren't adjustable for all price tiers.
+This section describes speech-to-text quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable.
-#### Online transcription
+#### Online transcription and speech translation
You can use online transcription with the [Speech SDK](speech-sdk.md) or the [speech-to-text REST API for short audio](rest-speech-to-text-short.md).
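For context, the following is a minimal online transcription sketch with the Python Speech SDK; each in-flight recognition counts toward the concurrent request limits described below. The key, region, and audio file name are placeholders.

```python
# Minimal online transcription sketch using the Python Speech SDK (azure-cognitiveservices-speech).
# The key, region, and audio file name are placeholders for your own Speech resource and data.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

# Each in-flight recognize_once call counts as one concurrent online transcription request.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)
```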
-| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
+> [!IMPORTANT]
+> These limits apply to concurrent speech-to-text online transcription requests and speech translation requests combined. For example, if you have 60 concurrent speech-to-text requests and 40 concurrent speech translation requests, you'll reach the limit of 100 concurrent requests.
+
+| Quota | Free (F0) | Standard (S0) |
|--|--|--|
-| Concurrent request limit - base model endpoint | 1 | 100 (default value) |
-| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> |
-| Concurrent request limit - custom endpoint | 1 | 100 (default value) |
-| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> |
+| Concurrent request limit - base model endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). |
+| Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). |
#### Batch transcription
-| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
+| Quota | Free (F0) | Standard (S0) |
|--|--|--|
| [Speech-to-text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
| Max audio input file size | N/A | 1 GB |
You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp
#### Model customization
-| Quota | Free (F0)<sup>1</sup> | Standard (S0) |
+The limits in this table apply per Speech resource when you create a Custom Speech model.
+
+| Quota | Free (F0) | Standard (S0) |
|--|--|--|
| REST API limit | 300 requests per minute | 300 requests per minute |
| Max number of speech datasets | 2 | 500 |
You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp
| Max pronunciation dataset file size for data import | 1 KB | 1 MB |
| Max text size when you're using the `text` parameter in the [Models_Create](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create/) API request | 200 KB | 500 KB |
-<sup>1</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
-<sup>2</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit).<br/>
+### Text-to-speech quotas and limits per resource
-### Text-to-speech quotas and limits per Speech resource
+This section describes text-to-speech quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable.
-In the following tables, the parameters without the **Adjustable** row aren't adjustable for all price tiers.
+#### Common text-to-speech quotas and limits
-#### General
-
-| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
+| Quota | Free (F0) | Standard (S0) |
|--|--|--|
-| **Max number of transactions per certain time period** | | |
-| Real-time API. Prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds | 200 transactions per second (TPS) (default value) |
-| Adjustable | No<sup>4</sup> | Yes<sup>5</sup>, up to 1000 TPS |
-| **HTTP-specific quotas** | | |
+| Maximum number of transactions per time period for prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds<br/><br/>This limit isn't adjustable. | 200 transactions per second (TPS) (default value)<br/><br/>The rate is adjustable up to 1000 TPS for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit). |
| Max audio length produced per request | 10 min | 10 min |
| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
-| **Websocket specific quotas** | | |
-| Max audio length produced per turn | 10 min | 10 min |
-| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
-| Max SSML message size per turn | 64 KB | 64 KB |
+| Max SSML message size per turn for websocket | 64 KB | 64 KB |
#### Custom Neural Voice
-| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
+| Quota | Free (F0) | Standard (S0) |
|--|--|--|
-| Max number of transactions per second (TPS) | Not available for F0 | See [General](#general) |
+| Max number of transactions per second (TPS) | Not available for F0 | 200 transactions per second (TPS) (default value) |
| Max number of datasets | N/A | 500 |
| Max number of simultaneous dataset uploads | N/A | 5 |
| Max data file size for data import per dataset | N/A | 2 GB |
In the following tables, the parameters without the **Adjustable** row aren't ad
| File size | 3,000 characters per file | 20,000 characters per file |
| Export to audio library | 1 concurrent task | N/A |
-<sup>3</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
-<sup>4</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices) and [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).<br/>
-<sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit).<br/>
+### Speaker recognition quotas and limits per resource
+
+Speaker recognition is limited to 20 transactions per second (TPS).
## Detailed description, quota adjustment, and best practices
+Some of the Speech service quotas are adjustable. This section provides additional explanations, best practices, and adjustment instructions.
+
+The following quotas are adjustable for Standard (S0) resources. The Free (F0) request limits aren't adjustable.
+
+- Speech-to-text [concurrent request limit](#online-transcription-and-speech-translation) for base model endpoint and custom endpoint
+- Text-to-speech [maximum number of transactions per time period](#text-to-speech-quotas-and-limits-per-resource) for prebuilt neural voices and custom neural voices
+- Speech translation [concurrent request limit](#online-transcription-and-speech-translation)
+ Before requesting a quota increase (where applicable), ensure that it's necessary. The Speech service uses autoscaling technologies to bring the required computational resources in on-demand mode. At the same time, the Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity. Let's look at an example. Suppose that your application receives response code 429, which indicates that there are too many requests. Your application receives this response even though your workload is within the limits defined by the [Quotas and limits reference](#quotas-and-limits-reference). The most likely explanation is that the Speech service is scaling up to meet your demand but hasn't yet reached the required scale. Therefore, the service doesn't immediately have enough resources to serve the request. In most cases, this throttled state is transient.
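To make the transient-throttling guidance concrete, here's a hedged retry sketch. The URL, headers, payload shape, and delay values are illustrative assumptions, not service recommendations.

```python
# Illustrative retry-with-backoff loop for transient HTTP 429 (throttling) responses.
# The URL, headers, payload shape, and delay values are placeholders, not prescribed values.
import time
import requests

def post_with_backoff(url, headers, payload, max_retries=5):
    delay = 1.0
    response = None
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # Prefer the Retry-After header when the service returns one; otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    return response
```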
The next sections describe specific cases of adjusting quotas.
### Speech-to-text: increase online transcription concurrent request limit
-By default, the number of concurrent requests is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
+By default, the number of concurrent speech-to-text [online transcription requests and speech translation requests](#online-transcription-and-speech-translation) combined is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling.
>[!NOTE]
-> If you use custom models, be aware that one Speech service resource might be associated with many custom endpoints hosting many custom model deployments. Each custom endpoint has the default limit of concurrent requests (100) set by creation. If you need to adjust it, you need to make the adjustment of each custom endpoint *separately*. Note also that the value of the limit of concurrent requests for the base model of a resource has *no* effect to the custom endpoints associated with this resource.
-
-Increasing the limit of concurrent requests doesn't directly affect your costs. Speech service uses a payment model that requires that you pay only for what you use. The limit defines how high the service can scale before it starts throttle your requests.
+> Concurrent request limits for base and custom models need to be adjusted separately. You can have a Speech service resource that's associated with many custom endpoints hosting many custom model deployments. As needed, the limit adjustments per custom endpoint must be requested separately.
-Concurrent request limits for base and custom models need to be adjusted separately.
+Increasing the limit of concurrent requests doesn't directly affect your costs. The Speech service uses a payment model that requires that you pay only for what you use. The limit defines how high the service can scale before it starts throttling your requests.
You aren't able to see the existing value of the concurrent request limit parameter in the Azure portal, the command-line tools, or API requests. To verify the existing value, create an Azure support request.
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Below is a sample command to set file/directory ownership.
```bash
sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
```
+
## Usage records

When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST endpoint to generate a report about service usage.
cognitive-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md
keywords:
# Content filtering
-Azure OpenAI Service includes a content management system that works alongside core models to filter content. This system works by running both the input prompt and generated content through an ensemble of classification models aimed at detecting misuse. If the system identifies harmful content, you'll receive either an error on the API call if the prompt was deemed inappropriate or the finish_reason on the response will be `content_filter` to signify that some of the generation was filtered.
-
->[!NOTE]
->This content filtering system is temporarily turned off while we work on some improvements. The internal system is still annotating harmful content but the models will not block. Content filtering will be reactivated with the release of upcoming updates. If you would like to enable the content filters at any point before that, please open an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-
-You can generate content with the completions API using many different configurations that will alter the filtering behavior you should expect. The following section aims to enumerate all of these scenarios for you to appropriately design your solution.
+Azure OpenAI Service includes a content management system that works alongside core models to filter content. This system works by running both the input prompt and generated content through an ensemble of classification models aimed at detecting misuse. If the system identifies harmful content, you'll receive an error on the API call if the prompt was deemed inappropriate, or the `finish_reason` on the response will be `content_filter` to signify that some of the generation was filtered. You can generate content with the completions API using many different configurations that alter the filtering behavior you should expect. The following section enumerates these scenarios so that you can design your solution appropriately.
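As a hedged illustration of how a client can react to filtering, the sketch below checks `finish_reason` and handles the error raised for inappropriate prompts. The endpoint, API version, deployment name, and key are placeholders, and the snippet assumes the 0.x `openai` Python package conventions for Azure.

```python
# Sketch: detect filtered generations via finish_reason and handle rejected prompts.
# The endpoint, API version, deployment name, and key are placeholders for your own resource.
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2022-12-01"
openai.api_key = "<your-key>"

try:
    response = openai.Completion.create(
        engine="<your-deployment>",
        prompt="Write a short poem about the sea.",
        max_tokens=100,
    )
    choice = response["choices"][0]
    if choice["finish_reason"] == "content_filter":
        print("Part of the generation was filtered.")
    else:
        print(choice["text"])
except openai.error.InvalidRequestError as err:
    # The API call returns an error when the prompt itself is deemed inappropriate.
    print(f"Request rejected: {err}")
```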
To ensure you have properly mitigated risks in your application, you should evaluate all potential harms carefully, follow guidance in the [Transparency Note](https://go.microsoft.com/fwlink/?linkid=2200003) and add scenario-specific mitigation as needed.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
Title: Azure OpenAI Service models
-description: Learn about the different models that are available in Azure OpenAI.
+description: Learn about the different model capabilities that are available with Azure OpenAI.
Previously updated : 06/24/2022 Last updated : 02/13/2023
keywords:
# Azure OpenAI Service models
-The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Please refer to the capability table at the bottom for a full breakdown.
+Azure OpenAI provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Refer to the [model capability table](#model-capabilities) in this article for a full breakdown.
| Model family | Description |
|--|--|
The service provides access to many different models, grouped by family and capa
## Model capabilities
-Each model family has a series of models that are further distinguished by capability. These capabilities are typically identified by names, and the alphabetical order of these names generally signifies the relative capability and cost of that model within a given model family. For example, GPT-3 models use names such as Ada, Babbage, Curie, and Davinci to indicate relative capability and cost. Davinci is more capable (at a higher cost) than Curie, which in turn is more capable (at a higher cost) than Babbage, and so on.
+Each model family has a series of models that are further distinguished by capability. These capabilities are typically identified by names, and the alphabetical order of these names generally signifies the relative capability and cost of that model within a given model family. For example, GPT-3 models use names such as Ada, Babbage, Curie, and Davinci to indicate relative capability and cost. Davinci is more capable and more expensive than Curie, which in turn is more capable and more expensive than Babbage, and so on.
> [!NOTE]
> Any task that can be performed by a less capable model like Ada can be performed by a more capable model like Curie or Davinci.

## Naming convention
-Azure OpenAI's model names typically correspond to the following standard naming convention:
+Azure OpenAI model names typically correspond to the following standard naming convention:
`{family}-{capability}[-{input-type}]-{identifier}`
Azure OpenAI's model names typically correspond to the following standard naming
For example, our most powerful GPT-3 model is called `text-davinci-003`, while our most powerful Codex model is called `code-davinci-002`.
-> Older versions of the GPT-3 models are available, named `ada`, `babbage`, `curie`, and `davinci`. These older models do not follow the standard naming conventions, and they are primarily intended for fine tuning. For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md).
+> The older versions of GPT-3 models named `ada`, `babbage`, `curie`, and `davinci` that don't follow the standard naming convention are primarily intended for fine tuning. For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md).
## Finding what models are available
-You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](/rest/api/cognitiveservices/azureopenaistable/models/list).
+You can get a list of the models that are available to your Azure OpenAI resource for both inference and fine-tuning by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list).
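A hedged sketch of calling that API follows; the endpoint shape and `api-version` value are assumptions, so confirm them against the linked REST reference.

```python
# Sketch: call the Models List API with the resource's api-key header. The endpoint shape
# and api-version value are assumptions; confirm them against the linked REST reference.
import requests

resource_name = "<your-resource-name>"
url = f"https://{resource_name}.openai.azure.com/openai/models"
headers = {"api-key": "<your-key>"}
params = {"api-version": "2022-12-01"}

response = requests.get(url, headers=headers, params=params)
response.raise_for_status()

for model in response.json().get("data", []):
    # Each entry includes the model ID and, typically, its capability flags.
    print(model.get("id"), model.get("capabilities"))
```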
## Finding the right model
-We recommend starting with the most capable model in a model family because it's the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities.
+We recommend starting with the most capable model in a model family to confirm whether the model capabilities meet your requirements. Then you can stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities.
## GPT-3 models
-The GPT-3 models can understand and generate natural language. The service offers four model capabilities, each with different levels of power and speed suitable for different tasks. Davinci is the most capable model, while Ada is the fastest. The following list represents the latest versions of GPT-3 models, ordered by increasing capability.
+The GPT-3 models can understand and generate natural language. The service offers four model capabilities, each with different levels of power and speed suitable for different tasks. Davinci is the most capable model, while Ada is the fastest. In the order of greater to lesser capability, the models are:
-- `text-ada-001`
-- `text-babbage-001`
-- `text-curie-001`
- `text-davinci-003`
+- `text-curie-001`
+- `text-babbage-001`
+- `text-ada-001`
-While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, because it will produce the best results and validate the value our service can provide. Once you have a prototype working, you can then optimize your model choice with the best latency/performance balance for your application.
+While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, because it produces the best results and validates the value that Azure OpenAI can provide. Once you have a prototype working, you can then optimize your model choice with the best latency/performance balance for your application.
### <a id="gpt-3-davinci"></a>Davinci
Ada is usually the fastest model and can perform tasks like parsing text, addres
The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub.
-They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. The following list represents the latest versions of Codex models, ordered by increasing capability.
+They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and Shell. In the order of greater to lesser capability, the Codex models are:
-- `code-cushman-001`
- `code-davinci-002`
+- `code-cushman-001`
### <a id="codex-davinci"></a>Davinci
-Similar to GPT-3, Davinci is the most capable Codex model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, Davinci produces the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as other models.
+Similar to GPT-3, Davinci is the most capable Codex model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, Davinci produces the best results. Greater capabilities require more compute resources, so Davinci costs more and isn't as fast as other models.
### Cushman
Similar to text search embedding models, there are two input types supported by
|||
| Code search and relevance | `code-search-ada-code-001` <br> `code-search-ada-text-001` <br> `code-search-babbage-code-001` <br> `code-search-babbage-text-001` |
-When using our Embeddings models, keep in mind their limitations and risks.
+When using our embeddings models, keep in mind their limitations and risks.
## Model Summary table and region availability

### GPT-3 Models
-| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
| | | | | |
-| Ada | Yes | No | N/A | East US, South Central US, West Europe |
-| Text-Ada-001 | Yes | No | East US, South Central US, West Europe | N/A |
-| Babbage | Yes | No | N/A | East US, South Central US, West Europe |
-| Text-Babbage-001 | Yes | No | East US, South Central US, West Europe | N/A |
-| Curie | Yes | No | N/A | East US, South Central US, West Europe |
-| Text-curie-001 | Yes | No | East US, South Central US, West Europe | N/A |
-| Davinci* | Yes | No | N/A | East US, South Central US, West Europe |
-| Text-davinci-001 | Yes | No | South Central US, West Europe | N/A |
-| Text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A |
-| Text-davinci-003 | Yes | No | East US | N/A |
-| Text-davinci-fine-tune-002* | Yes | No | N/A | East US, West Europe |
-
-\*Models available by request only. We are currently unable to onboard new customers at this time.
+| ada | Yes | No | N/A | East US, South Central US, West Europe |
+| text-ada-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| babbage | Yes | No | N/A | East US, South Central US, West Europe |
+| text-babbage-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| curie | Yes | No | N/A | East US, South Central US, West Europe |
+| text-curie-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| davinci<sup>1</sup> | Yes | No | N/A | East US, South Central US, West Europe |
+| text-davinci-001 | Yes | No | South Central US, West Europe | N/A |
+| text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A |
+| text-davinci-003 | Yes | No | East US | N/A |
+| text-davinci-fine-tune-002<sup>1</sup> | Yes | No | N/A | East US, West Europe |
+
+<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model.
### Codex Models
-| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
| | | | | |
-| Code-Cushman-001* | Yes | No | South Central US, West Europe | East US, South Central US, West Europe |
-| Code-Davinci-002 | Yes | No | East US, West Europe | N/A |
-| Code-Davinci-Fine-tune-002* | Yes | No | N/A | East US, West Europe |
-
-\*Models available for Fine-tuning by request only. We are currently unable to enable new cusetomers at this time.
-
+| code-cushman-001<sup>2</sup> | Yes | No | South Central US, West Europe | East US, South Central US, West Europe |
+| code-davinci-002 | Yes | No | East US, West Europe | N/A |
+| code-davinci-fine-tune-002<sup>2</sup> | Yes | No | N/A | East US, West Europe |
+<sup>2</sup> The model is available for fine-tuning by request only. Currently we aren't accepting new requests to fine-tune the model.
### Embeddings Models
-| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
| | | | | |
| text-ada-embeddings-002 | No | Yes | East US, South Central US, West Europe | N/A |
| text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A |
When using our Embeddings models, keep in mind their limitations and risks.
| code-search-babbage-code-001 | No | Yes | South Central US, West Europe | N/A |
| code-search-babbage-text-001 | No | Yes | South Central US, West Europe | N/A |
-
## Next steps
-[Learn more about Azure OpenAI](../overview.md).
+[Learn more about Azure OpenAI](../overview.md)
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
Last updated 12/01/2021 -+ # Calling capabilities supported for Teams users in Calling SDK
communication-services Meeting Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md
Last updated 12/01/2021 -+ # Teams meeting support for Teams user in Calling SDK
communication-services Phone Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md
Last updated 12/01/2021 -+ # Phone capabilities for Teams user in Calling SDK
communication-services Teams Interop Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing/teams-interop-pricing.md
Last updated 08/01/2022 + # Teams interoperability pricing
communication-services Manage Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/cte-calling-sdk/manage-calls.md
description: Use Azure Communication Services SDKs to manage calls for Teams use
-+ Last updated 12/01/2021
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/meeting-interop.md
Last updated 06/30/2021 + zone_pivot_groups: acs-plat-web-ios-android-windows
communication-services Access Token Teams External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/access-token-teams-external-users.md
Last updated 08/05/2022 -+ zone_pivot_groups: acs-azcli-js-csharp-java-python
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
Last updated 06/30/2021 -+ zone_pivot_groups: acs-js-csharp-java-python
communication-services Get Started Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md
Last updated 06/30/2021 -+ zone_pivot_groups: acs-plat-web-ios-android-windows
communication-services Get Started With Voice Video Calling Custom Teams Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md
Last updated 12/1/2021 -+
communication-services Virtual Visits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md
Last updated 05/24/2022 +
communications-gateway Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability.md
Azure Communications Gateway provides all the features of a traditional session
- Defending against Denial of Service attacks and other malicious traffic - Ensuring Quality of Service
-Azure Communications Gateway also offers dashboards that you can use to monitor key metrics of your deployment.
+Azure Communications Gateway also offers metrics for monitoring your deployment.
You must provide the networking connection between Azure Communications Gateway and your core networks. For Teams Phone Mobile, you must also provide a network element that can route calls into the Microsoft Phone System for call anchoring.
For full details of the media interworking features available in Azure Communica
## Compatibility with monitoring requirements
-The Azure Communications Gateway service includes continuous monitoring for potential faults in your deployment. The metrics we monitor cover all metrics required to be monitored by Operators as part of the Operator Connect program and include:
+The Azure Communications Gateway service includes continuous monitoring for potential faults in your deployment. The metrics we monitor cover all metrics that operators must monitor as part of the Operator Connect program and include:
- Call quality - Call errors and unusual behavior (for example, call setup failures, short calls, or unusual disconnections)
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
WITH (num varchar(100)) AS [IntToFloat]
* Spark pools in Azure Synapse will represent these columns as `undefined`.
* SQL serverless pools in Azure Synapse will represent these columns as `NULL`.
-##### Representation challenges Workaround
+##### Representation challenges workarounds
-Currently the base schema can't be reset and It is possible that an old document, with an incorrect schema, was used to create that base schema. To delete or update the problematic documents won't help. The possible solutions are:
+It's possible that an old document, with an incorrect schema, was used to create your container's analytical store base schema. Based on all the rules presented above, you might receive `NULL` for certain properties when you query your analytical store by using Azure Synapse Link. Deleting or updating the problematic documents won't help because base schema reset isn't currently supported. The possible solutions are:
* To migrate the data to a new container, making sure that all documents have the correct schema.
- * To abandon the property with the wrong schema and add a new one, with another name, that has the correct datatypes. Example: You have billions of documents in the **Orders** container where the **status** property is a string. But the first document in that container has **status** defined with integer. So, one document will have **status** correctly represented and all other documents will have **NULL**. You can add the **status2** property to all documents and start to use it, instead of the original property.
+ * To abandon the property with the wrong schema and add a new one, with another name, that has the correct schema in all documents. Example: You have billions of documents in the **Orders** container where the **status** property is a string. But the first document in that container has **status** defined as an integer. So, one document will have **status** correctly represented and all other documents will have `NULL`. You can add the **status2** property to all documents and start to use it instead of the original property, as shown in the sketch after this list.
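A minimal backfill sketch for the second option is shown below, assuming the Python `azure-cosmos` SDK; the endpoint, key, database, container, and property names are placeholders.

```python
# Sketch: backfill a new status2 property with the intended string type so the analytical
# store infers the correct schema. Endpoint, key, database, and container names are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("<account-endpoint>", credential="<account-key>")
container = client.get_database_client("<database>").get_container_client("Orders")

docs = container.query_items(
    query="SELECT * FROM c WHERE NOT IS_DEFINED(c.status2)",
    enable_cross_partition_query=True,
)
for doc in docs:
    doc["status2"] = str(doc.get("status", ""))  # enforce the intended string type
    container.upsert_item(doc)
```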
#### Full fidelity schema representation
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Given the internal Azure Cosmos DB architecture, using multiple write regions do
When an Azure Cosmos DB account is configured with multi-region writes, one of the regions will act as an arbiter in case of write conflicts. When such conflicts happen, they're routed to this region for consistent resolution.
+#### Best practices for multi-region writes
+
+Here are some best practices to consider when writing to multiple regions.
+
+#### Keep local traffic local
+
+When you use multi-region writes, the application should route read and write traffic that originates in a given region strictly to the Cosmos DB region that's local to it. Avoid cross-region calls for optimal performance. A configuration sketch follows the anti-pattern list below.
+
+It's important for the application to minimize conflicts by avoiding the following anti-patterns:
+* Sending the same write operation to all regions to hedge bets on response times from the fastest region.
+
+* Randomly determining the target region for a read or write operation on a per request basis.
+
+* Using a Round Robin policy to determine the target region for a read or write operation on a per request basis.
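As referenced above, here's a minimal configuration sketch for keeping traffic local, assuming the Python `azure-cosmos` SDK's `preferred_locations` keyword; the region value, endpoint, and key are placeholders.

```python
# Sketch: each regional deployment of the application lists its own region first so reads
# and writes stay local. The region value, endpoint, and key are placeholders, and the
# preferred_locations keyword reflects our reading of the Python SDK.
import os
from azure.cosmos import CosmosClient

local_region = os.environ.get("APP_REGION", "West US")  # set per regional deployment

client = CosmosClient(
    "<account-endpoint>",
    credential="<account-key>",
    preferred_locations=[local_region],  # keep traffic in the local Cosmos DB region
)
container = client.get_database_client("<database>").get_container_client("<container>")
```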
+
+#### Avoid dependency on replication lag
+Multi-region write accounts can't be configured for Strong Consistency. Thus, the region being written to responds immediately after replicating the data locally while asynchronously replicating the data globally.
+
+While infrequent, a replication lag may occur on one or a few partitions when geo-replicating data. Replication lag can occur due to rare blips in network traffic or higher than usual rates of conflict resolution.
+
+For instance, an architecture in which the application writes to Region A but reads from Region B introduces a dependency on replication lag between the two regions. However, if the application reads and writes to the same region, performance remains constant even in the presence of replication lag.
+
+#### Session consistency usage for write operations
+In Session Consistency, the session token is used for both read and write operations.
+
+For read operations, the cached session token is sent to the server with a guarantee of receiving data corresponding to the specified (or a more recent) session token.
+
+For write operations, the session token is sent to the database with a guarantee of persisting the data only if the server has caught up to the session token provided. In single-region write accounts, the write region is always guaranteed to have caught up to the session token. However, in multi-region write accounts, the region you write to may not have caught up to writes issued to another region. If the client writes to Region A with a session token from Region B, Region A won't be able to persist the data until it has caught up to changes made in Region B.
+
+It's best to use session tokens only for read operations and not for write operations when passing session tokens between client instances.
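The sketch below illustrates that guidance under stated assumptions: the `session_token` keyword and the `client_connection.last_response_headers` attribute are assumptions about the Python SDK surface, so verify them against your SDK version; all names and values are placeholders.

```python
# Sketch: capture a session token after a write and hand it to another client instance for
# reads only. The session_token keyword and last_response_headers attribute are assumptions
# about the Python SDK surface; verify them against your SDK version.
from azure.cosmos import CosmosClient

writer = CosmosClient("<account-endpoint>", credential="<account-key>")
writer_container = writer.get_database_client("<db>").get_container_client("<container>")

# Write with the writer's own session; don't pass a foreign session token here.
writer_container.upsert_item({"id": "order-1", "pk": "tenant-1", "status": "created"})
token = writer_container.client_connection.last_response_headers.get("x-ms-session-token")

# Another client instance uses the token for a read, so the read observes that write.
reader = CosmosClient("<account-endpoint>", credential="<account-key>")
reader_container = reader.get_database_client("<db>").get_container_client("<container>")
item = reader_container.read_item(item="order-1", partition_key="tenant-1", session_token=token)
```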
+
+#### Rapid updates to the same document
+The server's updates to resolve or confirm the absence of conflicts can collide with writes triggered by the application when the same document is repeatedly updated. Repeated updates in rapid succession to the same document experience higher latencies during conflict resolution. While occasional bursts of repeated updates to the same document are inevitable, if steady-state traffic sees rapid updates to the same document over an extended period, it's worth exploring an architecture that creates new documents instead.
+ ### What to expect during a region outage

Clients of single-region accounts will experience loss of read and write availability until the service is restored.
-Multi-region accounts will experience different behaviors depending on the following table.
+Multi-region accounts experience different behaviors, as described in the following table.
| Configuration | Outage | Availability impact | Durability impact | What to do |
| -- | -- | -- | -- | -- |
-| Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except 2 regions with strong consistency which loses write availability until the service is restored or, if **service-managed failover** is enabled, the region is marked as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-managed failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover** clients will experience write availability loss until the services manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using API for NoSQLs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
-| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Azure Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for API for NoSQL accounts, and Last Write Wins for accounts using other APIs. |
+| Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except 2 regions with strong consistency, which loses write availability until restoration of the service or, if you enable **service-managed failover**, the service marks the region as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> When the outage is over, readjust provisioned RUs as appropriate. |
+| Single write region | Write region outage | Clients will redirect reads to other regions. <br/> **Without service-managed failover**, clients experience write availability loss until write availability is restored automatically when the outage ends. <br/> **With service-managed failover**, clients experience write availability loss until the service manages a failover to a new write region selected according to your preferences. | If you haven't selected the strong consistency level, the service may not replicate some data to the remaining active regions. This replication depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, you could lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> *Don't* trigger a manual failover during the outage, as it can't succeed. <br/> When the outage is over, readjust provisioned RUs as appropriate. Accounts using the API for NoSQL may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, you may lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/> When the outage is over, you may readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB automatically recovers non-replicated data in the failed region. This automatic recovery uses the configured conflict resolution method for API for NoSQL accounts. For accounts using other APIs, this automatic recovery uses *Last Write Wins*. |
### Additional information on read region outages
Multi-region accounts will experience different behaviors depending on the follo
* If none of the regions in the preferred region list is available, calls automatically fall back to the current write region.
-* No changes are required in your application code to handle read region outage. When the impacted read region is back online it will automatically sync with the current write region and will be available again to serve read requests.
+* No changes are required in your application code to handle read region outage. When the impacted read region is back online, it will automatically sync with the current write region and will be available again to serve read requests.
* Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, read consistency guarantees continue to be honored by Azure Cosmos DB.
Multi-region accounts will experience different behaviors depending on the follo
### Additional information on write region outages
-* During a write region outage, the Azure Cosmos DB account will automatically promote a secondary region to be the new primary write region when **automatic (service-managed) failover** is configured on the Azure Cosmos DB account. The failover will occur to another region in the order of region priority you've specified.
+* During a write region outage, the Azure Cosmos DB account will automatically promote a secondary region to be the new primary write region when **automatic (service-managed) failover** is configured on the Azure Cosmos DB account. The failover occurs to another region in the order of region priority you've specified.
-* Note that manual failover shouldn't be triggered and will not succeed in presence of an outage of the source or destination region. This is because of a consistency check required by the failover procedure which requires connectivity between the regions.
+* Manual failover shouldn't be triggered and won't succeed in the presence of an outage of the source or destination region. This is because of a consistency check required by the failover procedure, which requires connectivity between the regions.
* When the previously impacted region is back online, any write data that wasn't replicated when the region failed, is made available through the [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflicts feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos DB container as appropriate.
The following table summarizes the high availability capability of various accou
|Read availability SLA | 99.99% | 99.995% | 99.999% | 99.999% | 99.999% |
|Zone failures – data loss | Data loss | No data loss | No data loss | No data loss | No data loss |
|Zone failures – availability | Availability loss | No availability loss | No availability loss | No availability loss | No availability loss |
-|Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information.
+|Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md). | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md). | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md).
|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss for read region failure, temporary for write region failure | No read availability loss, temporary write availability loss in the affected region |
|Price (***1***) | N/A | Provisioned RU/s x 1.25 rate | Provisioned RU/s x n regions | Provisioned RU/s x 1.25 rate x n regions (***2***) | Multi-region write rate x n regions |
Multi-region accounts will experience different behaviors depending on the follo
| Write regions | Service-Managed failover | What to expect | What to do |
| -- | -- | -- | -- |
-| Single write region | Not enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Azure Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss until Azure Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using API for NoSQLs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
-| Multiple write regions | Not applicable | Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Azure Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for API for NoSQL accounts, and Last Write Wins for accounts using other APIs. |
+| Single write region | Not enabled | If there's an outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, a read region outage can affect write availability if fewer than two read regions remain.<br/> If there's an outage in the write region, clients experience write availability loss. If you haven't selected the strong consistency level, the service may not replicate some data to the remaining active regions. This replication depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, you may lose unreplicated data. <br/> Azure Cosmos DB restores write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> *Don't* trigger a manual failover during the outage, as it can't succeed. <br/> When the outage is over, readjust provisioned RUs as appropriate. |
+| Single write region | Enabled | If there's an outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, a read region outage can affect write availability if fewer than two read regions remain.<br/> If there's an outage in the write region, clients experience write availability loss until Azure Cosmos DB automatically elects a new region as the new write region according to your preferences. If you haven't selected the strong consistency level, the service may not replicate some data to the remaining active regions. This replication depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, you may lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> *Don't* trigger a manual failover during the outage, as it can't succeed. <br/> When the outage is over, you may move the write region back to the original region, and readjust provisioned RUs as appropriate. Accounts using the API for NoSQL may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Multiple write regions | Not applicable | Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15 mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, you may lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/> When the outage is over, you may readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB automatically recovers non-replicated data in the failed region. This automatic recovery uses the configured conflict resolution method for API for NoSQL accounts. For accounts using other APIs, this automatic recovery uses *Last Write Wins*. |
## Next steps
cosmos-db Performance Tips Query Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md
To execute a query, a query plan needs to be built. This in general represents a
### Use Query Plan caching
-The query plan, for a query scoped to a single partition, is cached on the client. This eliminates the need to make a call to the gateway to retrieve the query plan after the first call. The key for the cached query plan is the SQL query string. You need to **make sure the query is [parametrized](query/parameterized-queries.md)**. If not, the query plan cache lookup will often be a cache miss as the query string is unlikely to be identical across calls. Query plan caching is **enabled by default for Java SDK version 4.20.0 and above** and **for Spring Datan Azure Cosmos DB SDK version 3.13.0 and above**.
+The query plan, for a query scoped to a single partition, is cached on the client. This eliminates the need to make a call to the gateway to retrieve the query plan after the first call. The key for the cached query plan is the SQL query string. You need to **make sure the query is [parametrized](query/parameterized-queries.md)**. If not, the query plan cache lookup will often be a cache miss as the query string is unlikely to be identical across calls. Query plan caching is **enabled by default for Java SDK version 4.20.0 and above** and **for Spring Data Azure Cosmos DB SDK version 3.13.0 and above**.
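Although the caching note above concerns the Java and Spring Data SDKs, the parameterization principle is the same in any SDK. Here's a hedged Python illustration with placeholder names; the endpoint, key, names, and partition key are assumptions for the sketch only.

```python
# Sketch: keep the query text constant and pass values as parameters so the cached query
# plan can be reused across calls. Endpoint, key, names, and the partition key are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("<account-endpoint>", credential="<account-key>")
container = client.get_database_client("<db>").get_container_client("<container>")

def get_orders(customer_id):
    return list(container.query_items(
        query="SELECT * FROM c WHERE c.customerId = @customerId",  # identical text every call
        parameters=[{"name": "@customerId", "value": customer_id}],
        partition_key=customer_id,  # scope to a single partition
    ))
```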
### Use parametrized single partition queries
cosmos-db Quickstart Java Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java-spring-data.md
Title: Quickstart - Use Spring Datan Azure Cosmos DB v3 to create a document database using Azure Cosmos DB
-description: This quickstart presents a Spring Datan Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
+ Title: Quickstart - Use Spring Data Azure Cosmos DB v3 to create a document database using Azure Cosmos DB
+description: This quickstart presents a Spring Data Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB for NoSQL
ms.devlang: java Previously updated : 08/26/2021 Last updated : 02/22/2023
-# Quickstart: Build a Spring Datan Azure Cosmos DB v3 app to manage Azure Cosmos DB for NoSQL data
+# Quickstart: Build a Spring Data Azure Cosmos DB v3 app to manage Azure Cosmos DB for NoSQL data
+ [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!div class="op_single_selector"]
> * [Python](quickstart-python.md) > * [Spark v3](quickstart-spark.md) > * [Go](quickstart-go.md)
->
-In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and by using a Spring Datan Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB for NoSQL account using the Azure portal or without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb), then create a Spring Boot app using the Spring Datan Azure Cosmos DB v3 connector, and then add resources to your Azure Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub.
+
+First, you create an Azure Cosmos DB for NoSQL account using the Azure portal. Alternately, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). You can then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and then add resources to your Azure Cosmos DB account by using the Spring Boot application.
+
+Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
-> [!IMPORTANT]
-> These release notes are for version 3 of Spring Datan Azure Cosmos DB. You can find [release notes for version 2 here](sdk-java-spring-data-v2.md).
+> [!IMPORTANT]
+> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find release notes for version 2 at [Spring Data Azure Cosmos DB v2 for API for NoSQL (legacy): Release notes and resources](sdk-java-spring-data-v2.md).
>
-> Spring Datan Azure Cosmos DB supports only the API for NoSQL.
+> Spring Data Azure Cosmos DB supports only the API for NoSQL.
+>
+> See the following articles for information about Spring Data on other Azure Cosmos DB APIs:
>
-> See these articles for information about Spring Data on other Azure Cosmos DB APIs:
> * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db) > * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)
->
## Prerequisites -- An Azure account with an active subscription.
- - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.-- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.-- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
+* An Azure account with an active subscription.
+ * No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
+* [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Set the `JAVA_HOME` environment variable to the JDK install folder.
+* A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.
+* [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.
## Introductory notes
-*The structure of an Azure Cosmos DB account.* Irrespective of API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below:
+*The structure of an Azure Cosmos DB account.* Irrespective of API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the following diagram:
:::image type="content" source="../media/account-databases-containers-items/cosmos-entities.png" alt-text="Azure Cosmos DB account entities" border="false":::
-You may read more about databases, containers and items [here.](../account-databases-containers-items.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
+For more information about databases, containers, and items, see [Azure Cosmos DB resource model](../account-databases-containers-items.md). A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*.
+
+The provisioned throughput is measured in Request Units (*RUs*), which have a monetary price and are a substantial determining factor in the operating cost of the account. You can select provisioned throughput at per-container granularity or per-database granularity. However, you should prefer container-level throughput specification. For more information, see [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md).
-The provisioned throughput is measured in Request Units (*RUs*) which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity, however container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md)
+As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*. You must choose one field in your documents to be the partition key, which maps each document to a partition.
-As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key which maps each document to a partition. The way partitions are managed is that each partition is assigned a roughly equal slice out of the range of partition key values; therefore you are advised to choose a partition key which is relatively random or evenly-distributed. Otherwise, some partitions will see substantially more requests (*hot partition*) while other partitions see substantially fewer requests (*cold partition*), and this is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md).
+The way partitions are managed is that each partition is assigned a roughly equal slice out of the range of partition key values. For this reason, you should choose a partition key that's relatively random or evenly distributed. Otherwise, you get *hot partitions* and *cold partitions*, which see substantially more or fewer requests. For information on avoiding this condition, see [Partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md).
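As a concrete illustration of container-level throughput and a partition key choice, the following is a minimal sketch using the Azure Cosmos DB Java SDK v4. The `database` instance, container name, and partition key path are illustrative assumptions, not values from the article.

```java
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ThroughputProperties;

public class ContainerSetupExample {
    public static void createContainer(CosmosDatabase database) {
        // Partition on a field that's reasonably evenly distributed to avoid hot partitions.
        CosmosContainerProperties containerProperties =
                new CosmosContainerProperties("users", "/lastName");

        // Container-level (dedicated) provisioned throughput, in RU/s.
        ThroughputProperties throughput = ThroughputProperties.createManualThroughput(400);

        database.createContainerIfNotExists(containerProperties, throughput);
    }
}
```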
## Create a database account
-Before you can create a document database, you need to create a API for NoSQL account with Azure Cosmos DB.
+Before you can create a document database, you need to create an API for NoSQL account with Azure Cosmos DB.
[!INCLUDE [cosmos-db-create-dbaccount](../includes/cosmos-db-create-dbaccount.md)]
Before you can create a document database, you need to create a API for NoSQL ac
## Clone the sample application
-Now let's switch to working with code. Let's clone a API for NoSQL app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+Now let's switch to working with code. Let's clone an API for NoSQL app from GitHub, set the connection string, and run it.
Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer. ```bash
-git clone https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started.git
+git clone https://github.com/Azure-Samples/azure-spring-boot-samples.git
``` ## Review the code
-This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app
-](#run-the-app).
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app](#run-the-app).
+
+### [Passwordless (Recommended)](#tab/passwordless)
+
+In this section, the configurations and the code don't have any authentication operations. However, connecting to Azure services requires authentication. To complete the authentication, you need to use Azure Identity. Spring Cloud Azure uses `DefaultAzureCredential`, which Azure Identity provides to help you get credentials without any code changes.
+
+`DefaultAzureCredential` supports multiple authentication methods and determines which method to use at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. For more information, see the [Default Azure credential](/azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential) section of [Authenticate Azure-hosted Java applications](/azure/developer/java/sdk/identity-azure-hosted-auth).
++
+### Authenticate using DefaultAzureCredential
++
+You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` automatically discovers and uses the account you signed in with in the previous step.
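Outside of the Spring Cloud Azure starter's auto-configuration, the same credential can also be used directly with the Java SDK. The following is a sketch under that assumption, not the starter's own wiring; you supply the endpoint value.

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.identity.DefaultAzureCredentialBuilder;

public class PasswordlessClientExample {
    public static CosmosClient create(String endpoint) {
        // DefaultAzureCredential picks an authentication method at runtime
        // (Azure CLI login, managed identity, environment variables, and so on).
        return new CosmosClientBuilder()
                .endpoint(endpoint)
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();
    }
}
```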
### Application configuration file
-Here we showcase how Spring Boot and Spring Data enhance user experience - the process of establishing an Azure Cosmos DB client and connecting to Azure Cosmos DB resources is now config rather than code. At application startup Spring Boot handles all of this boilerplate using the settings in **application.properties**:
+Configure the Azure Cosmos DB credentials in the *application.yml* configuration file in the *cosmos/spring-cloud-azure-starter-data-cosmos/spring-cloud-azure-data-cosmos-sample* directory. Replace the values of `${AZURE_COSMOS_ENDPOINT}` and `${COSMOS_DATABASE}`.
+
+```yaml
+spring:
+ cloud:
+ azure:
+ cosmos:
+ endpoint: ${AZURE_COSMOS_ENDPOINT}
+ database: ${COSMOS_DATABASE}
+```
+
+After Spring Boot and Spring Data create the Azure Cosmos DB account, database, and container, they connect to the database and container for `delete`, `add`, and `find` operations.
+
+### [Password](#tab/password)
+
+### Application configuration file
-```xml
-cosmos.uri=${ACCOUNT_HOST}
-cosmos.key=${ACCOUNT_KEY}
-cosmos.secondaryKey=${SECONDARY_ACCOUNT_KEY}
+The following section shows how Spring Boot and Spring Data use configuration instead of code to establish an Azure Cosmos DB client and connect to Azure Cosmos DB resources. At application startup Spring Boot handles all of this boilerplate using the following settings in *application.yml*:
-dynamic.collection.name=spel-property-collection
-# Populate query metrics
-cosmos.queryMetricsEnabled=true
+```yaml
+spring:
+ cloud:
+ azure:
+ cosmos:
+ key: ${AZURE_COSMOS_KEY}
+ endpoint: ${AZURE_COSMOS_ENDPOINT}
+ database: ${COSMOS_DATABASE}
```
-Once you create an Azure Cosmos DB account, database, and container, just fill-in-the-blanks in the config file and Spring Boot/Spring Data will automatically do the following: (1) create an underlying Java SDK `CosmosClient` instance with the URI and key, and (2) connect to the database and container. You're all set - **no more resource management code!**
+Once you create an Azure Cosmos DB account, database, and container, just fill in the blanks in the config file and Spring Boot/Spring Data does the following: (1) creates an underlying Java SDK `CosmosClient` instance with the URI and key, and (2) connects to the database and container. You're all set - no more resource management code!
++ ### Java source
-The Spring Data value-add also comes from its simple, clean, standardized and platform-independent interface for operating on datastores. Building on the Spring Data GitHub sample linked above, below are CRUD and query samples for manipulating Azure Cosmos DB documents with Spring Datan Azure Cosmos DB.
+Spring Data provides a simple, clean, standardized, and platform-independent interface for operating on datastores, as shown in the following examples. These CRUD and query examples enable you to manipulate Azure Cosmos DB documents by using Spring Data Azure Cosmos DB. These examples build on the Spring Data GitHub sample linked to earlier in this article.
* Item creation and updates by using the `save` method.
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Create)]
-
-* Point-reads using the derived query method defined in the repository. The `findByIdAndLastName` performs point-reads for `UserRepository`. The fields mentioned in the method name cause Spring Data to execute a point-read defined by the `id` and `lastName` fields:
+ ```java
+ // Save the User class to Azure Cosmos DB database.
+ final Mono<User> saveUserMono = repository.save(testUser);
+ ```
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Read)]
+* Point-reads using the derived query method defined in the repository. The `findById` method performs point-reads for `repository`. The field mentioned in the method name causes Spring Data to execute a point-read defined by the `id` field:
+
+ ```java
+ // Nothing happens until we subscribe to these Monos.
+ // findById will not return the user as user is not present.
+ final Mono<User> findByIdMono = repository.findById(testUser.getId());
+ final User findByIdUser = findByIdMono.block();
+ Assert.isNull(findByIdUser, "User must be null");
+ ```
* Item deletes using `deleteAll`:
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Delete)]
+ ```java
+ repository.deleteAll().block();
+ LOGGER.info("Deleted all data in container.");
+ ```
-* Derived query based on repository method name. Spring Data implements the `UserRepository` `findByFirstName` method as a Java SDK SQL query on the `firstName` field (this query could not be implemented as a point-read):
+* Derived query based on repository method name. Spring Data implements the repository's `findByFirstName` method as a Java SDK SQL query on the `firstName` field. You can't implement this query as a point-read.
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Query)]
+ ```java
+ final Flux<User> firstNameUserFlux = repository.findByFirstName("testFirstName");
+ ```
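The snippets above assume a `User` entity and a reactive repository. A minimal sketch of what those might look like with Spring Data Azure Cosmos DB follows; the container name, field names, and partition key are illustrative rather than taken from the sample.

```java
import com.azure.spring.data.cosmos.core.mapping.Container;
import com.azure.spring.data.cosmos.core.mapping.PartitionKey;
import com.azure.spring.data.cosmos.repository.ReactiveCosmosRepository;
import org.springframework.data.annotation.Id;
import reactor.core.publisher.Flux;

// Entity mapped to an Azure Cosmos DB container (names are illustrative).
@Container(containerName = "users")
class User {
    @Id
    private String id;
    @PartitionKey
    private String lastName;
    private String firstName;
    // Constructors, getters, and setters omitted for brevity.
}

// Derived query methods such as findByFirstName are generated from the method name.
interface UserRepository extends ReactiveCosmosRepository<User, String> {
    Flux<User> findByFirstName(String firstName);
}
```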
## Run the app
-Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database.
+Now go back to the Azure portal to get your connection string information. Then, use the following steps to launch the app with your endpoint information so your app can communicate with your hosted database.
+
+1. In the Git terminal window, `cd` to the sample code folder.
-1. In the git terminal window, `cd` to the sample code folder.
+ ```bash
+ cd azure-spring-boot-samples/cosmos/spring-cloud-azure-starter-data-cosmos/spring-cloud-azure-data-cosmos-sample
+ ```
- ```bash
- cd azure-spring-data-cosmos-java-sql-api-getting-started/azure-spring-data-cosmos-java-getting-started/
- ```
+1. In the Git terminal window, use the following command to install the required Spring Data Azure Cosmos DB packages.
-2. In the git terminal window, use the following command to install the required Spring Datan Azure Cosmos DB packages.
+ ```bash
+ mvn clean package
+ ```
- ```bash
- mvn clean package
- ```
+1. In the Git terminal window, use the following command to start the Spring Data Azure Cosmos DB application:
-3. In the git terminal window, use the following command to start the Spring Datan Azure Cosmos DB application:
+ ```bash
+ mvn spring-boot:run
+ ```
- ```bash
- mvn spring-boot:run
- ```
-
-4. The app loads **application.properties** and connects the resources in your Azure Cosmos DB account.
-5. The app will perform point CRUD operations described above.
-6. The app will perform a derived query.
-7. The app doesn't delete your resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account if you want to avoid incurring charges.
+1. The app loads *application.yml* and connects to the resources in your Azure Cosmos DB account.
+1. The app performs point CRUD operations described previously.
+1. The app performs a derived query.
+1. The app doesn't delete your resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account if you want to avoid incurring charges.
## Review SLAs in the Azure portal
Now go back to the Azure portal to get your connection string information and la
## Next steps
-In this quickstart, you've learned how to create an Azure Cosmos DB for NoSQL account, create a document database and container using the Data Explorer, and run a Spring Data app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account.
+In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account and create a document database and container using the Data Explorer. You then ran a Spring Data app to do the same thing programmatically. You can now import more data into your Azure Cosmos DB account.
Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
cosmos-db Samples Java Spring Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-java-spring-data.md
-# Azure Cosmos DB for NoSQL: Spring Datan Azure Cosmos DB v3 examples
+# Azure Cosmos DB for NoSQL: Spring Data Azure Cosmos DB v3 examples
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!div class="op_single_selector"]
> > [!IMPORTANT]
-> These release notes are for version 3 of Spring Datan Azure Cosmos DB. You can find [release notes for version 2 here](sdk-java-spring-data-v2.md).
+> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sdk-java-spring-data-v2.md).
>
-> Spring Datan Azure Cosmos DB supports only the API for NoSQL.
+> Spring Data Azure Cosmos DB supports only the API for NoSQL.
> > See these articles for information about Spring Data on other Azure Cosmos DB APIs: > * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
The latest sample applications that perform CRUD operations and other common operations on Azure Cosmos DB resources are included in the [azure-spring-data-cosmos-java-sql-api-samples](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples) GitHub repository. This article provides:
-* Links to the tasks in each of the example Spring Datan Azure Cosmos DB project files.
+* Links to the tasks in each of the example Spring Data Azure Cosmos DB project files.
* Links to the related API reference content. **Prerequisites**
The latest sample applications that perform CRUD operations and other common ope
You need the following to run this sample application: * Java Development Kit 8
-* Spring Datan Azure Cosmos DB v3
+* Spring Data Azure Cosmos DB v3
-You can optionally use Maven to get the latest Spring Datan Azure Cosmos DB v3 binaries for use in your project. Maven automatically adds any necessary dependencies. Otherwise, you can directly download the dependencies listed in the **pom.xml** file and add them to your build path.
+You can optionally use Maven to get the latest Spring Data Azure Cosmos DB v3 binaries for use in your project. Maven automatically adds any necessary dependencies. Otherwise, you can directly download the dependencies listed in the **pom.xml** file and add them to your build path.
```bash <dependency>
cosmos-db Sdk Java Spring Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v2.md
Title: 'Spring Datan Azure Cosmos DB v2 for API for NoSQL release notes and resources'
-description: Learn about the Spring Datan Azure Cosmos DB v2 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
+ Title: 'Spring Data Azure Cosmos DB v2 for API for NoSQL release notes and resources'
+description: Learn about the Spring Data Azure Cosmos DB v2 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
-# Spring Datan Azure Cosmos DB v2 for API for NoSQL (legacy): Release notes and resources
+# Spring Data Azure Cosmos DB v2 for API for NoSQL (legacy): Release notes and resources
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] [!INCLUDE[SDK selector](../includes/cosmos-db-sdk-list.md)]
- Spring Datan Azure Cosmos DB version 2 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Datan Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
+ Spring Data Azure Cosmos DB version 2 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
> [!WARNING]
-> This version of Spring Datan Azure Cosmos DB SDK depends on a retired version of Azure Cosmos DB Java SDK. This Spring Datan Azure Cosmos DB SDK will be announced as retiring in the near future! This is *not* the latest Azure Spring Datan Azure Cosmos DB SDK for Azure Cosmos DB and is outdated. Because of performance issues and instability in Azure Spring Datan Azure Cosmos DB SDK V2, we highly recommend to use [Azure Spring Datan Azure Cosmos DB v3](sdk-java-spring-data-v3.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide to understand the difference in the underlying Java SDK V4.
+> This version of Spring Data Azure Cosmos DB SDK depends on a retired version of Azure Cosmos DB Java SDK. This Spring Data Azure Cosmos DB SDK will be announced as retiring in the near future! This is *not* the latest Azure Spring Data Azure Cosmos DB SDK for Azure Cosmos DB and is outdated. Because of performance issues and instability in Azure Spring Data Azure Cosmos DB SDK V2, we highly recommend using [Azure Spring Data Azure Cosmos DB v3](sdk-java-spring-data-v3.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide to understand the differences in the underlying Java SDK V4.
> The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application.
-You can use Spring Datan Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
+You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
> [!IMPORTANT]
-> These release notes are for version 2 of Spring Datan Azure Cosmos DB. You can find [release notes for version 3 here](sdk-java-spring-data-v3.md).
+> These release notes are for version 2 of Spring Data Azure Cosmos DB. You can find [release notes for version 3 here](sdk-java-spring-data-v3.md).
>
-> Spring Datan Azure Cosmos DB supports only the API for NoSQL.
+> Spring Data Azure Cosmos DB supports only the API for NoSQL.
> > See the following articles for information about Spring Data on other Azure Cosmos DB APIs: > * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
You can use Spring Datan Azure Cosmos DB in your applications hosted in [Azure S
> > Want to get going fast? > 1. Install the [minimum supported Java runtime, JDK 8](/java/azure/jdk/), so you can use the SDK.
-> 2. Create a Spring Datan Azure Cosmos DB app by using the [starter](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). It's easy!
-> 3. Work through the [Spring Datan Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb), which walks through basic Azure Cosmos DB requests.
+> 2. Create a Spring Data Azure Cosmos DB app by using the [starter](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). It's easy!
+> 3. Work through the [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb), which walks through basic Azure Cosmos DB requests.
> > You can spin up Spring Boot Starter apps fast by using [Spring Initializr](https://start.spring.io/)! >
You can use Spring Datan Azure Cosmos DB in your applications hosted in [Azure S
| Resource | Link | ||| | **SDK download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/spring-data-cosmosdb) |
-|**API documentation** | [Spring Datan Azure Cosmos DB reference documentation]() |
-|**Contribute to the SDK** | [Spring Datan Azure Cosmos DB repo on GitHub](https://github.com/microsoft/spring-data-cosmosdb) |
+|**API documentation** | [Spring Data Azure Cosmos DB reference documentation]() |
+|**Contribute to the SDK** | [Spring Data Azure Cosmos DB repo on GitHub](https://github.com/microsoft/spring-data-cosmosdb) |
|**Spring Boot Starter**| [Azure Cosmos DB Spring Boot Starter client library for Java](https://github.com/MicrosoftDocs/azure-dev-docs/blob/master/articles/jav) | |**Spring TODO app sample with Azure Cosmos DB**| [End-to-end Java Experience in App Service Linux (Part 2)](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2) |
-|**Developer's guide** | [Spring Datan Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb) |
+|**Developer's guide** | [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb) |
|**Using Starter** | [How to use Spring Boot Starter with the Azure Cosmos DB for NoSQL](/azure/developer/jav) | |**Sample with Azure App Service** | [How to use Spring and Azure Cosmos DB with App Service on Linux](/azure/developer/java/spring-framework/configure-spring-app-with-cosmos-db-on-app-service-linux) <br> [TODO app sample](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2.git) |
cosmos-db Sdk Java Spring Data V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md
Title: 'Spring Datan Azure Cosmos DB v3 for API for NoSQL release notes and resources'
-description: Learn about the Spring Datan Azure Cosmos DB v3 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
+ Title: 'Spring Data Azure Cosmos DB v3 for API for NoSQL release notes and resources'
+description: Learn about the Spring Data Azure Cosmos DB v3 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK.
-# Spring Datan Azure Cosmos DB v3 for API for NoSQL: Release notes and resources
+# Spring Data Azure Cosmos DB v3 for API for NoSQL: Release notes and resources
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] [!INCLUDE[SDK selector](../includes/cosmos-db-sdk-list.md)]
-The Spring Datan Azure Cosmos DB version 3 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Datan Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
+The Spring Data Azure Cosmos DB version 3 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact.
The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model and framework for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application.
-You can use Spring Datan Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
+You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/).
## Version Support Policy
This project supports multiple Spring Boot Versions. Visit [spring boot support
This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-data-version-support) for more information.
-### Which Version of Azure Spring Datan Azure Cosmos DB Should I Use
+### Which Version of Azure Spring Data Azure Cosmos DB Should I Use
-Azure Spring Datan Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. Refer to [azure spring datan Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Datan Azure Cosmos DB to use with Spring Boot / Spring Cloud version.
+Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. Refer to [Azure Spring Data Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Azure Cosmos DB to use with your Spring Boot / Spring Cloud version.
> [!IMPORTANT]
-> These release notes are for version 3 of Spring Datan Azure Cosmos DB.
+> These release notes are for version 3 of Spring Data Azure Cosmos DB.
>
-> Azure Spring Datan Azure Cosmos DB SDK has dependency on the Spring Data framework, and supports only the API for NoSQL.
+> Azure Spring Data Azure Cosmos DB SDK has dependency on the Spring Data framework, and supports only the API for NoSQL.
> > See these articles for information about Spring Data on other Azure Cosmos DB APIs: > * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)
Azure Spring Datan Azure Cosmos DB library supports multiple versions of Spring
## Get started fast
- Get up and running with Spring Datan Azure Cosmos DB by following our [Spring Boot Starter guide](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Datan Azure Cosmos DB connector.
+ Get up and running with Spring Data Azure Cosmos DB by following our [Spring Boot Starter guide](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Data Azure Cosmos DB connector.
- Alternatively, you can add the Spring Datan Azure Cosmos DB dependency to your `pom.xml` file as shown below:
+ Alternatively, you can add the Spring Data Azure Cosmos DB dependency to your `pom.xml` file as shown below:
```xml <dependency>
Azure Spring Datan Azure Cosmos DB library supports multiple versions of Spring
| Content | Link | |||
-| **Release notes** | [Release notes for Spring Datan Azure Cosmos DB SDK v3](https://github.com/Azure/azure-sdk-for-jav) |
-| **SDK Documentation** | [Azure Spring Datan Azure Cosmos DB SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) |
+| **Release notes** | [Release notes for Spring Data Azure Cosmos DB SDK v3](https://github.com/Azure/azure-sdk-for-jav) |
+| **SDK Documentation** | [Azure Spring Data Azure Cosmos DB SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) |
| **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) | | **API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) | | **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-spring-data-cosmos) |
-| **Get started** | [Quickstart: Build a Spring Datan Azure Cosmos DB app to manage Azure Cosmos DB for NoSQL data](./quickstart-java-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) |
-| **Basic code samples** | [Azure Cosmos DB: Spring Datan Azure Cosmos DB examples for the API for NoSQL](samples-java-spring-data.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)|
+| **Get started** | [Quickstart: Build a Spring Data Azure Cosmos DB app to manage Azure Cosmos DB for NoSQL data](./quickstart-java-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) |
+| **Basic code samples** | [Azure Cosmos DB: Spring Data Azure Cosmos DB examples for the API for NoSQL](samples-java-spring-data.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)|
| **Performance tips**| [Performance tips for Java SDK v4 (applicable to Spring Data)](performance-tips-java-sdk-v4.md)| | **Troubleshooting** | [Troubleshoot Java SDK v4 (applicable to Spring Data)](troubleshoot-java-sdk-v4.md) | | **Azure Cosmos DB workshops and labs** |[Azure Cosmos DB workshops home page](https://aka.ms/cosmosworkshop)
It's strongly recommended to use version 3.28.1 and above.
## Additional notes
-* Spring Datan Azure Cosmos DB supports Java JDK 8 and Java JDK 11.
+* Spring Data Azure Cosmos DB supports Java JDK 8 and Java JDK 11.
## FAQ
cosmos-db Tutorial Springboot Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-springboot-azure-kubernetes-service.md
Here are some of the key points related to the Kubernetes resources for this app
In this tutorial, you've learned how to deploy a Spring Boot application to Azure Kubernetes Service and use it to perform operations on data in an Azure Cosmos DB for NoSQL account. > [!div class="nextstepaction"]
-> [Spring Datan Azure Cosmos DB v3 for API for NoSQL](sdk-java-spring-data-v3.md)
+> [Spring Data Azure Cosmos DB v3 for API for NoSQL](sdk-java-spring-data-v3.md)
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
Previously updated : 08/12/2022 Last updated : 02/22/2023 # Azure Data Factory managed virtual network
The column **Using private endpoint** is always shown as blank even if you creat
:::image type="content" source="./media/managed-vnet/akv-pe.png" alt-text="Screenshot that shows a private endpoint for Key Vault.":::
+### Fully Qualified Domain Name (FQDN) of Azure HDInsight
+
+If you created a custom private link service, the FQDN should end with **azurehdinsight.net**, without a leading *privatelink* in the domain name, when you create a private endpoint. If you use *privatelink* in the domain name, make sure it's valid and you're able to resolve it.
+ ### Access constraints in managed virtual network with private endpoints You're unable to access each PaaS resource when both sides are exposed to Private Link and a private endpoint. This issue is a known limitation of Private Link and private endpoints.
defender-for-cloud Devops Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md
Title: Defender for DevOps FAQ description: If you're having issues with Defender for DevOps perhaps, you can solve it with these frequently asked questions. Previously updated : 01/26/2023 Last updated : 02/23/2023 # Defender for DevOps frequently asked questions (FAQ)
If you're having issues with Defender for DevOps these frequently asked question
- [Is Exemptions capability available and tracked for app sec vulnerability management](#is-exemptions-capability-available-and-tracked-for-app-sec-vulnerability-management) - [Is continuous, automatic scanning available?](#is-continuous-automatic-scanning-available) - [Is it possible to block the developers committing code with exposed secrets](#is-it-possible-to-block-the-developers-committing-code-with-exposed-secrets)-- [I am not able to configure Pull Request Annotations](#i-am-not-able-to-configure-pull-request-annotations)-- [What are the programing languages that are supported by Defender for DevOps?](#what-are-the-programing-languages-that-are-supported-by-defender-for-devops) -- [I'm getting the There's no CLI tool error in Azure DevOps](#im-getting-the-theres-no-cli-tool-error-in-azure-devops)-
+- [I'm not able to configure Pull Request Annotations](#im-not-able-to-configure-pull-request-annotations)
+- [What programming languages are supported by Defender for DevOps?](#what-programming-languages-are-supported-by-defender-for-devops)
+- [I'm getting an error that informs me that there's no CLI tool](#im-getting-an-error-that-informs-me-that-theres-no-cli-tool)
### I'm getting an error while trying to connect
-When selecting the *Authorize* button, the presently signed-in account is used, which could be the same email but different tenant. Make sure you have the right account/tenant combination selected in the popup consent screen and Visual Studio.
+When you select the *Authorize* button, the account that you're logged in with is used. That account can have the same email but may have a different tenant. Make sure you have the right account/tenant combination selected in the popup consent screen and Visual Studio.
-The presently signed-in account can be checked [here](https://app.vssps.visualstudio.com/profile/view).
+You can [check which account is signed in](https://app.vssps.visualstudio.com/profile/view).
### Why can't I find my repository
-Only TfsGit is supported on Azure DevOps service.
+The Azure DevOps service only supports `TfsGit`.
-Ensure that you've [onboarded your repositories](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Microsoft Defender for Cloud. If you still can't see your repository, ensure that you're signed in with the correct Azure DevOps organization user account. Your Azure subscription and Azure DevOps Organization need to be in the same tenant. If the user for the connector is wrong, you need to delete the connector that was created, sign in with the correct user account and re-create the connector.
+Ensure that you've [onboarded your repositories](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Microsoft Defender for Cloud. If you still can't see your repository, ensure that you're signed in with the correct Azure DevOps organization user account. Your Azure subscription and Azure DevOps Organization need to be in the same tenant. If the user for the connector is wrong, you need to delete the previously created connector, sign in with the correct user account and re-create the connector.
### Secret scan didn't run on my code
In addition to onboarding resources, you must have the [Microsoft Security DevOp
If no secrets are identified through scans, the total exposed secret for the resource shows `Healthy` in Defender for Cloud.
-If secret scan isn't enabled (meaning MSDO isn't configured for your pipeline) or a scan isn't performed for at least 14 days, the resource will show as `N/A` in Defender for Cloud.
+If secret scan isn't enabled (meaning MSDO isn't configured for your pipeline) or a scan isn't performed for at least 14 days, the resource shows as `N/A` in Defender for Cloud.
### I don't see generated SARIF file in the path I chose to drop it
Azure DevOps repositories only have the total exposed secrets available and will
For a previously unhealthy scan result to be healthy again, updated healthy scan results need to be from the same build definition as the one that generated the findings in the first place. A common scenario where this issue occurs is when testing with different pipelines. For results to refresh appropriately, scan results need to be for the same pipeline(s) and branch(es).
-If no scanning is performed for 14 days, the scan results would be revert to “N/A”.
+If no scan is performed for 14 days, the scan results revert to `N/A`.
### I don't see Recommendations for findings
Learn more about [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/?
### Is Exemptions capability available and tracked for app sec vulnerability management?
-Exemptions are not available for Defender for DevOps within Microsoft Defender for Cloud.
+Exemptions aren't available for Defender for DevOps within Microsoft Defender for Cloud.
### Is continuous, automatic scanning available?
Currently scanning occurs at build time.
### Is it possible to block the developers committing code with exposed secrets?
-The ability to block developers from committing code with exposed secrets is not currently available.
+The ability to block developers from committing code with exposed secrets isn't currently available.
-### I am not able to configure Pull Request Annotations
+### I'm not able to configure Pull Request Annotations
Make sure you have write (owner/contributor) access to the subscription.
-### What are the programing languages that are supported by Defender for DevOps?
+### What programming languages are supported by Defender for DevOps?
The following languages are supported by Defender for DevOps: - Python-- Java Script-- Type Script
+- JavaScript
+- TypeScript
+
+### I'm getting an error that informs me that there's no CLI tool
+
+When you run the pipeline in Azure DevOps, you receive the following error:
+`no such file or directory, scandir 'D:\a\_msdo\versions\microsoft.security.devops.cli'`.
+
-### I'm getting the There's no CLI tool error in Azure DevOps
+This error can be seen in the extensions job as well.
-If when running the pipeline in Azure DevOps, you receive the following error:
-"no such file or directory, scandir 'D:\a\_msdo\versions\microsoft.security.devops.cli'".
-This error occurs if you are missing the dependency of `dotnet6` in the pipeline's YAML file. DotNet6 is required to allow the Microsoft Security DevOps extension to run. Include this as a task in your YAML file to eliminate the error.
+This error occurs if you're missing the dependency of `dotnet6` in the pipeline's YAML file. DotNet6 is required to allow the Microsoft Security DevOps extension to run. Include this as a task in your YAML file to eliminate the error.
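A hedged sketch of what that task addition might look like in the pipeline YAML follows. The task identifiers reflect the built-in `UseDotNet` task and the Microsoft Security DevOps extension; confirm both task names and versions against the extensions installed in your organization.

```yaml
steps:
  # Install the .NET 6 SDK that the Microsoft Security DevOps CLI depends on.
  - task: UseDotNet@2
    displayName: 'Install .NET 6'
    inputs:
      version: '6.0.x'

  # Run the Microsoft Security DevOps scan after the SDK is available.
  - task: MicrosoftSecurityDevOps@1
    displayName: 'Run Microsoft Security DevOps'
```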
You can learn more about [Microsoft Security DevOps](https://marketplace.visualstudio.com/items?itemName=ms-securitydevops.microsoft-security-devops-azdevops).
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
For more information, see [Securing IoT devices in the enterprise](concept-enter
## Managing OT alerts in a hybrid environment
-Users working in hybrid environments may be managing OT alerts in Defender for IoT on the Azure portal, the OT sensor, and an on-premises management console.
+Users working in hybrid environments may be managing OT alerts in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console.
Alert statuses are fully synchronized between the Azure portal and the OT sensor, and between the sensor and the on-premises management console. This means that regardless of where you manage the alert in Defender for IoT, the alert is updated in other locations as well.
defender-for-iot Architecture Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md
Title: OT sensor cloud connection methods - Microsoft Defender for IoT description: Learn about the architecture models available for connecting your sensors to Microsoft Defender for IoT. Previously updated : 09/11/2022 Last updated : 02/23/2023 # OT sensor cloud connection methods
For more information, see [Connect via proxy chaining](connect-sensors.md#connec
## Direct connections
-The following image shows how you can connect your sensors to the Defender for IoT portal in Azure directly over the internet from remote sites, without transversing the enterprise network.
+The following image shows how you can connect your sensors to the Defender for IoT portal in Azure directly over the internet from remote sites, without traversing the enterprise network.
With direct connections
defender-for-iot Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md
While the number of IoT devices continues to grow, they often lack the security
## IoT security across Microsoft 365 Defender and Azure
-Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and Azure portals using the following methods:
+Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and [Azure portals](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) using the following methods:
|Method |Description and requirements | Configure in ... | ||||
defender-for-iot Configure Sensor Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md
Define a new setting whenever you want to define a specific configuration for on
**To define a new setting**:
-1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**.
1. On the **Sensor settings (Preview)** page, select **+ Add**, and then use the wizard to define the following values for your setting. Select **Next** when you're done with each tab in the wizard to move to the next step.
Your new setting is now listed on the **Sensor settings (Preview)** page under i
**To view the current settings already defined for your subscription**:
-1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**
The **Sensor settings (Preview)** page shows any settings already defined for your subscriptions, listed by setting type. Expand or collapse each type to view detailed configurations. For example:
defender-for-iot Faqs Eiot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-eiot.md
Enterprise IoT is designed to help customers secure un-managed devices throughou
For more information, see [Onboard with Microsoft Defender for IoT](eiot-defender-for-endpoint.md). -- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in Defender for IoT in the Azure portal. Register an Enterprise IoT network sensor, currently in **Public preview** to gain visibility to additional devices that aren't covered by Defender for Endpoint.
+- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. Register an Enterprise IoT network sensor, currently in **Public preview** to gain visibility to additional devices that aren't covered by Defender for Endpoint.
For more information, see [Enhance device discovery with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
To make any changes to an existing plan, you'll need to cancel your existing pla
To remove only Enterprise IoT from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see [Cancel your Defender for IoT plan](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan).
-To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in Defender for IoT in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
+To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
## What happens when the 30-day trial ends?
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
This procedure describes how to add a trial Defender for IoT plan for OT network
**To add your plan**:
-1. In the Azure portal, go to **Defender for IoT** and select **Plans and pricing** > **Add plan**.
+1. In the Azure portal, go to [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) and select **Plans and pricing** > **Add plan**.
1. In the **Plan settings** pane, define the following settings:
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
For more information, see [Azure user roles and permissions for Defender for IoT
## View alerts on the Azure portal
-1. In Defender for IoT on the Azure portal, select the **Alerts** page on the left. By default, the following details are shown in the grid:
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left. By default, the following details are shown in the grid:
| Column | Description |--|--|
Supported grouping options include *Engine*, *Name*, *Sensor*, *Severity*, and *
## Manage alert severity and status
-We recommend that you update alert severity as soon as you've triaged an alert so that you can prioritize the riskiest alerts as soon as possible. Make sure to update your alert status once you've taken remediation steps so that the progress is recorded.
+We recommend that you update alert severity in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal as soon as you've triaged an alert so that you can prioritize the riskiest alerts as soon as possible. Make sure to update your alert status once you've taken remediation steps so that the progress is recorded.
You can update both severity and status for a single alert or for a selection of alerts in bulk.
Downloading the PCAP file can take several minutes, depending on the quality of
You may want to export a selection of alerts to a CSV file for offline sharing and reporting.
-1. In Defender for IoT on the Azure portal, select the **Alerts** page on the left.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left.
1. Use the search box and filter options to show only the alerts you want to export.
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
# Manage your device inventory from the Azure portal
-Use the **Device inventory** page in the Azure portal to manage all network devices detected by cloud-connected sensors, including OT, IoT, and IT. Identify new devices detected, devices that might need troubleshooting, and more.
+Use the **Device inventory** page in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal to manage all network devices detected by cloud-connected sensors, including OT, IoT, and IT. Identify new devices detected, devices that might need troubleshooting, and more.
For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device).
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
Select the link in each widget to drill down for more information in your sensor
### Validate connectivity status
-Verify that your sensor is successfully connected to the Azure portal directly from the sensor's **Overview** page.
+Verify that your sensor is successfully connected to the Azure portal directly from the sensor's **Overview** page.
If there are any connection issues, a disconnection message is shown in the **General Settings** area on the **Overview** page, and a **Service connection error** warning appears at the top of the page in the :::image type="icon" source="media/how-to-manage-individual-sensors/bell-icon.png" border="false"::: **System Messages** area. For example:
If there are any connection issues, a disconnection message is shown in the **Ge
:::image type="content" source="media/how-to-manage-individual-sensors/system-messages.png" alt-text="Screenshot of the system messages pane." lightbox="media/how-to-manage-individual-sensors/system-messages.png"::: - ## Download software for OT sensors You may need to download software for your OT sensor if you're [installing Defender for IoT software](ot-deploy/install-software-ot-sensor.md) on your own appliances, or [updating software versions](update-ot-software.md).
-In Defender for IoT in the Azure portal, use one of the following options:
+In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, use one of the following options:
- For a new installation, select **Getting started** > **Sensor**. Select a version in the **Purchase an appliance and install software** area, and then select **Download**.
You'll need an SMTP mail server configured to enable email alerts about disconne
**Prerequisites**:
-Make sure you can reach the SMTP server from the [sensor's management port](./best-practices/understand-network-architecture.md).
+Make sure you can reach the SMTP server from the [sensor's management port](./best-practices/understand-network-architecture.md).
**To configure an SMTP server on your sensor**:
defender-for-iot How To Manage The On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md
This article covers on-premises management console options like backup and resto
You may need to download software for your on-premises management console if you're [installing Defender for IoT software](ot-deploy/install-software-on-premises-management-console.md) on your own appliances, or [updating software versions](update-ot-software.md).
-In Defender for IoT in the Azure portal, use one of the following options:
+In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, use one of the following options:
-- For a new installation or standalone update, select **Getting started** > **On-premises management console**.
+- For a new installation or standalone update, select **Getting started** > **On-premises management console**.
- - For a new installation, select a version in the **Purchase an appliance and install software** area, and then select **Download**.
+ - For a new installation, select a version in the **Purchase an appliance and install software** area, and then select **Download**.
- For an update, select your update scenario in the **On-premises management console** area and then select **Download**. - If you're updating your on-premises management console together with connected OT sensors, use the options in the **Sites and sensors** page > **Sensor update (Preview)** menu.
In Defender for IoT in the Azure portal, use one of the following options:
[!INCLUDE [root-of-trust](includes/root-of-trust.md)] For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md#update-an-on-premises-management-console).+ ## Upload an activation file When you first sign in, an activation file for the on-premises management console is downloaded. This file contains the aggregate committed devices that are defined during the onboarding process. The list includes sensors associated with multiple subscriptions.
defender-for-iot How To Work With Threat Intelligence Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md
To perform the procedures in this article, make sure that you have:
- Relevant permissions on the Azure portal and any OT network sensors or on-premises management console you want to update.
- - **To download threat intelligence packages from the Azure portal**, you need access to the Azure portal as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
+ - **To download threat intelligence packages from the Azure portal**, you need access to the Azure portal as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
- - **To push threat intelligence updates to cloud-connected OT sensors from the Azure portal**, you need access to Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
+ - **To push threat intelligence updates to cloud-connected OT sensors from the Azure portal**, you need access to Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
- - **To manually upload threat intelligence packages to OT sensors or on-premises management consoles**, you need access to the OT sensor or on-premises management console as an **Admin** user.
+ - **To manually upload threat intelligence packages to OT sensors or on-premises management consoles**, you need access to the OT sensor or on-premises management console as an **Admin** user.
For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). - ## View the most recent threat intelligence package To view the most recent package delivered, in the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**.
Update threat intelligence packages on your OT sensors using any of the followin
### Automatically push updates to cloud-connected sensors
-Threat intelligence packages can be automatically updated to *cloud connected* sensors as they're released by Defender for IoT.
+Threat intelligence packages can be automatically updated to *cloud connected* sensors as they're released by Defender for IoT.
Ensure automatic package update by onboarding your cloud connected sensor with the **Automatic Threat Intelligence Updates** option enabled. For more information, see [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor). **To change the update mode after you've onboarded your OT sensor**:
-1. In Defender for IoT on the Azure portal, select **Sites and sensors**, and then locate the sensor you want to change.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**, and then locate the sensor you want to change.
1. Select the options (**...**) menu for the selected OT sensor > **Edit**. 1. Toggle on or toggle off the **Automatic Threat Intelligence Updates** option as needed.
Your *cloud connected* sensors can be automatically updated with threat intellig
**To manually push updates to a single OT sensor**:
-1. In Defender for IoT on the Azure portal, select **Sites and sensors**, and locate the OT sensor you want to update.
-1. Select the options (**...**) menu for the selected sensor and then select **Push Threat Intelligence update**.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**, and locate the OT sensor you want to update.
+1. Select the options (**...**) menu for the selected sensor and then select **Push Threat Intelligence update**.
The **Threat Intelligence update status** field displays the update progress. **To manually push updates to multiple OT sensors**:
-1. In Defender for IoT on the Azure portal, select **Sites and sensors**. Locate and select the OT sensors you want to update.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**. Locate and select the OT sensors you want to update.
1. Select **Threat intelligence updates (Preview)** > **Remote update**. The **Threat Intelligence update status** field displays the update progress for each selected sensor.
If you're also working with an on-premises management console, we recommend that
**To download threat intelligence packages**:
-1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**.
1. In the **Sensor TI update** pane, select **Download** to download the latest threat intelligence file. For example:
On each OT sensor, the threat intelligence update status and version information
For cloud-connected OT sensors, threat intelligence data is also shown in the **Sites and sensors** page. To view threat intelligence statuses from the Azure portal:
-1. In Defender for IoT on the Azure portal, select **Site and sensors**.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**.
1. Locate the OT sensors where you want to check the threat intelligence statuses.
For cloud-connected OT sensors, threat intelligence data is also shown in the **
> [!TIP] > If a cloud-connected OT sensor shows that a threat intelligence update has failed, we recommend that you check your sensor connection details. On the **Sites and sensors** page, check the **Sensor status** and **Last connected UTC** columns. - ## Next steps For more information, see:
defender-for-iot Manage Subscriptions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md
This procedure describes how to add an Enterprise IoT plan to your Azure subscri
:::image type="content" source="media/enterprise-iot/defender-for-endpoint-onboard.png" alt-text="Screenshot of the Enterprise IoT tab in Defender for Endpoint." lightbox="media/enterprise-iot/defender-for-endpoint-onboard.png":::
-After you've onboarded your plan, you'll see it listed in Defender for IoT in the Azure portal. Go to the Defender for IoT **Plans and pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example:
+After you've onboarded your plan, you'll see it listed in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. Go to the Defender for IoT **Plans and pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example:
:::image type="content" source="media/enterprise-iot/eiot-plan-in-azure.png" alt-text="Screenshot of an Enterprise IoT plan showing in the Defender for IoT Plans and pricing page.":::
defender-for-iot Respond Ot Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/respond-ot-alert.md
Triage alerts on a regular basis to prevent alert fatigue in your network and en
**To triage alerts**:
-1. In Defender for IoT in the Azure portal, go to the **Alerts** page. By default, alerts are sorted by the **Last detection** column, from most recent to oldest alert, so that you can first see the latest alerts in your network.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, go to the **Alerts** page. By default, alerts are sorted by the **Last detection** column, from most recent to oldest alert, so that you can first see the latest alerts in your network.
1. Use other filters, such as **Sensor** or **Severity** to find specific alerts.
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
This procedure describes how to send a software version update to one or more OT
### Send the software update to your OT sensor
-1. In Defender for IoT in the Azure portal, select **Sites and sensors** and then locate the OT sensors with legacy, but [supported versions](#prerequisites) installed.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, select **Sites and sensors** and then locate the OT sensors with legacy, but [supported versions](#prerequisites) installed.
If you know your site and sensor name, you can browse or search for it directly. Alternatively, filter the sensors listed to show only cloud-connected OT sensors that have *Remote updates supported* and have a legacy software version installed. For example:
This procedure describes how to manually download the new sensor software versio
### Download the update package from the Azure portal
-1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
1. In the **Local update** pane, select the software version that's currently installed on your sensors.
The software version on your on-premises management console must be equal to tha
> ### Download the update packages from the Azure portal
-1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
1. In the **Local update** pane, select the software version that's currently installed on your sensors.
This procedure describes how to update OT sensor software via the CLI, directly
### Download the update package from the Azure portal
-1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**.
1. In the **Local update** pane, select the software version that's currently installed on your sensors.
Updating an on-premises management console takes about 30 minutes.
This procedure describes how to download an update package for a standalone update. If you're updating your on-premises management console together with connected sensors, we recommend using the **[Update sensors (Preview)](#update-ot-sensors)** menu on the **Sites and sensors** page instead.
-1. In Defender for IoT on the Azure portal, select **Getting started** > **On-premises management console**.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Getting started** > **On-premises management console**.
1. In the **On-premises management console** area, select the download scenario that best describes your update, and then select **Download**.
For more information, see [Versioning and support for on-premises software versi
**To update a legacy OT sensor version**
-1. In Defender for IoT on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update.
+1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update.
1. Select the **Prepare to update to 22.X** option from the toolbar or from the options (**...**) from the sensor row.
event-grid Event Schema Data Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-data-box.md
+
+ Title: Azure Data Box as Event Grid source
+description: Describes the properties that are provided for Data Box events with Azure Event Grid.
+ Last updated : 02/09/2023++
+# Azure Data Box as an Event Grid source
+
+This article provides the properties and schema for Azure Data Box events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md).
+
+## Data Box events
+
+ |Event name |Description|
+ |-|--|
+ |Microsoft.DataBox.CopyStarted |Triggered when the data copy from the device has started and the first byte has been copied. |
+ |Microsoft.DataBox.CopyCompleted |Triggered when the data copy from the device has completed. |
+ |Microsoft.DataBox.OrderCompleted |Triggered when the order has completed copying and the copy logs are available. |
+
+### Example events
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+### Microsoft.DataBox.CopyStarted event
+
+```json
+[{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}",
+ "subject": "/jobs/{your-resource}",
+ "eventType": "Microsoft.DataBox.CopyStarted",
+ "id": "049ec3f6-5b7d-4052-858e-6f4ce6a46570",
+ "data": {
+ "serialNumber": "SampleSerialNumber",
+ "stageName": "CopyStarted",
+ "stageTime": "2022-10-12T19:38:08.0218897Z"
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2022-10-16T02:51:26.4248221Z"
+}]
+```
+
+### Microsoft.DataBox.CopyCompleted event
+
+```json
+[{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}",
+ "subject": "/jobs/{your-resource}",
+ "eventType": "Microsoft.DataBox.CopyCompleted",
+ "id": "759c892a-a628-4e48-a116-2e1d54c555ce",
+ "data": {
+ "serialNumber": "SampleSerialNumber",
+ "stageName": "CopyCompleted",
+ "stageTime": "2022-10-12T19:38:08.0218897Z"
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2022-10-16T02:58:18.503829Z"
+}]
+```
+
+### Microsoft.DataBox.OrderCompleted event
+
+```json
+{
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}",
+ "subject": "/jobs/{your-resource}",
+ "eventType": "Microsoft.DataBox.OrderCompleted",
+ "id": "5eb07c79-39a8-439c-bb4b-bde1f6267c37",
+ "data": {
+ "serialNumber": "SampleSerialNumber",
+ "stageName": "OrderCompleted",
+ "stageTime": "2022-10-12T19:38:08.0218897Z"
+ },
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2022-10-16T02:51:26.4248221Z"
+}
+```
+
+# [Cloud event schema](#tab/cloud-event-schema)
+
+### Microsoft.DataBox.CopyStarted event
+
+```json
+[{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}",
+ "subject": "/jobs/{your-resource}",
+ "type": "Microsoft.DataBox.CopyStarted",
+ "time": "2022-10-16T02:51:26.4248221Z",
+ "id": "049ec3f6-5b7d-4052-858e-6f4ce6a46570",
+ "data": {
+ "serialNumber": "SampleSerialNumber",
+ "stageName": "CopyStarted",
+ "stageTime": "2022-10-12T19:38:08.0218897Z"
+ },
+ "specVersion": "1.0"
+}]
+```
+
+### Microsoft.DataBox.CopyCompleted event
+
+```json
+{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}",
+ "subject": "/jobs/{your-resource}",
+ "type": "Microsoft.DataBox.CopyCompleted",
+ "time": "2022-10-16T02:51:26.4248221Z",
+ "id": "759c892a-a628-4e48-a116-2e1d54c555ce",
+ "data": {
+ "serialNumber": "SampleSerialNumber",
+ "stageName": "CopyCompleted",
+ "stageTime": "2022-10-12T19:38:08.0218897Z"
+ },
+ "specVersion": "1.0"
+}
+```
+
+### Microsoft.DataBox.OrderCompleted event
+
+```json
+[{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}",
+ "subject": "/jobs/{your-resource}",
+ "type": "Microsoft.DataBox.OrderCompleted",
+ "time": "2022-10-16T02:51:26.4248221Z",
+ "id": "5eb07c79-39a8-439c-bb4b-bde1f6267c37",
+ "data": {
+ "serialNumber": "SampleSerialNumber",
+ "stageName": "OrderCompleted",
+ "stageTime": "2022-10-12T19:38:08.0218897Z"
+ },
+ "specVersion": "1.0"
+}]
+```
+++
+## Next steps
+
+* For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md)
+* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
frontdoor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/best-practices.md
Previously updated : 12/05/2022 Last updated : 02/23/2023
Front Door's features work best when traffic only flows through Front Door. You
When you work with Front Door by using APIs, ARM templates, Bicep, or Azure SDKs, it's important to use the latest available API or SDK version. API and SDK updates occur when new functionality is available, and also contain important security patches and bug fixes.
+### Configure logs
+
+Front Door tracks extensive telemetry about every request. When you enable caching, your origin servers might not receive every request, so it's important that you use the Front Door logs to understand how your solution is running and responding to your clients. For more information about the metrics and logs that Azure Front Door records, see [Monitor metrics and logs in Azure Front Door](front-door-diagnostics.md) and [WAF logs](../web-application-firewall/afds/waf-front-door-monitor.md#waf-logs).
+
+To configure logging for your own application, see [Configure Azure Front Door logs](./standard-premium/how-to-logs.md).
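+
+If you route these logs to a Log Analytics workspace, a quick query can confirm what your clients are experiencing. The following Kusto sketch is illustrative only: the `AzureDiagnostics` table, the category value, and the suffixed column name are assumptions that vary by tier and collection mode, so adjust them to match your workspace.
+
+```kusto
+// Illustrative sketch: request volume and server error count from the Front Door access log.
+// Assumes logs flow to Log Analytics in the AzureDiagnostics table; the category value and
+// the suffixed column name (httpStatusCode_d) are assumptions to verify in your workspace.
+AzureDiagnostics
+| where Category == "FrontDoorAccessLog"
+| summarize requests = count(), serverErrors = countif(httpStatusCode_d >= 500) by bin(TimeGenerated, 15m)
+| order by TimeGenerated desc
+```
+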
+ ## TLS best practices ### Use end-to-end TLS
You can configure Front Door to automatically redirect HTTP requests to use the
### Use managed TLS certificates
-When Front Door manages your TLS certificates, it reduces your operational costs, and helps you to avoid costly outages caused by forgetting to renew a certificate. Front Door automatically issues and rotates managed TLS certificates.
+When Front Door manages your TLS certificates, it reduces your operational costs, and helps you to avoid costly outages caused by forgetting to renew a certificate. Front Door automatically issues and rotates the managed TLS certificates.
For more information, see [Configure HTTPS on an Azure Front Door custom domain using the Azure portal](standard-premium/how-to-configure-https-custom-domain.md).
For more information, see [Select the certificate for Azure Front Door to deploy
### Use the same domain name on Front Door and your origin
-Front Door can rewrite the `Host` header of incoming requests. This feature can be helpful when you manage a set of customer-facing custom domain names that route to a single origin. The feature can also help when you want to avoid configuring custom domain names in Front Door and at your origin. However, when you rewrite the `Host` header, request cookies and URL redirections might break. In particular, when you use platforms like Azure App Service, features like [session affinity](../app-service/configure-common.md#configure-general-settings) and [authentication and authorization](../app-service/overview-authentication-authorization.md) might not work correctly.
+Front Door can rewrite the `Host` header of incoming requests. This feature can be helpful when you manage a set of customer-facing custom domain names that route to a single origin. This feature can also help when you want to avoid configuring custom domain names in Front Door and at your origin. However, when you rewrite the `Host` header, request cookies and URL redirections might break. In particular, when you use platforms like Azure App Service, features like [session affinity](../app-service/configure-common.md#configure-general-settings) and [authentication and authorization](../app-service/overview-authentication-authorization.md) might not work correctly.
Before you rewrite the `Host` header of your requests, carefully consider whether your application is going to work correctly.
For more information, see [Preserve the original HTTP host name between a revers
### Enable the WAF
-For internet-facing applications, we recommend you enable the Front Door web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a range of attacks.
+For internet-facing applications, we recommend you enable the Front Door web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a wide range of attacks.
For more information, see [Web Application Firewall (WAF) on Azure Front Door](web-application-firewall.md).
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
If there's more than one key-value pair in a query string of a request then thei
When you configure caching, you specify how the cache should handle query strings. The following behaviors are supported:
-* **Ignore query strings**: In this mode, Azure Front Door passes the query strings from the client to the origin on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
+* **Ignore Query String**: In this mode, Azure Front Door passes the query strings from the client to the origin on the first request and caches the asset. Future requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires.
-* **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the origin for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing caches with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
+* **Use Query String**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the origin for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for subsequent requests with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting.
+
+ The order of the query string parameters doesn't matter. For example, if the Azure Front Door environment includes a cached response for the URL `www.example.ashx?q=test1&r=test2`, then a request for `www.example.ashx?r=test2&q=test1` is also served from the cache.
::: zone pivot="front-door-standard-premium"
-* **Specify cache key query string** behavior to include or exclude specified parameters when the cache key is generated.
+* **Ignore Specified Query Strings** and **Include Specified Query Strings**: In this mode, you can configure Azure Front Door to include or exclude specified parameters when the cache key is generated.
- For example, suppose that the default cache key is `/foo/image/asset.html`, and a request is made to the URL `https://contoso.com//foo/image/asset.html?language=EN&userid=100&sessionid=200`. If there's a rules engine rule to exclude the `userid` query string parameter, then the query string cache key would be `/foo/image/asset.html?language=EN&sessionid=200`.
+ For example, suppose that the default cache key is `/foo/image/asset.html`, and a request is made to the URL `https://contoso.com/foo/image/asset.html?language=EN&userid=100&sessionid=200`. If there's a rules engine rule to exclude the `userid` query string parameter, then the query string cache key would be `/foo/image/asset.html?language=EN&sessionid=200`.
Configure the query string behavior on the Front Door route.
In addition, Front Door attaches the `X-Cache` header to all responses. The `X-C
- `PRIVATE_NOSTORE`: Request can't be cached because the *Cache-Control* response header is set to either *private* or *no-store*. - `CONFIG_NOCACHE`: Request is configured to not cache in the Front Door profile.
+## Logs and reports
+ ::: zone pivot="front-door-standard-premium"
-## Logs and reports
+The [access log](front-door-diagnostics.md#access-log) includes the cache status for each request. Also, [reports](standard-premium/how-to-reports.md#caching-report) include information about how Azure Front Door's cache is used in your application.
++
-The [Front Door Access Log](standard-premium/how-to-logs.md#access-log) includes the cache status for each request. Also, [reports](standard-premium/how-to-reports.md#caching) include information about how Front Door's cache is used in your application.
+The [access log](front-door-diagnostics.md#access-log) includes the cache status for each request.
::: zone-end
Cache behavior and duration can be configured in Rules Engine. Rules Engine cach
* **When caching is disabled**, Azure Front Door doesn't cache the response contents, irrespective of the origin response directives.
-* **When caching is enabled**, the cache behavior differs based on the cache behavior value applied by the Rules Engine:
+* **When caching is enabled**, the cache behavior is different depending on the cache behavior value applied by the Rules Engine:
* **Honor origin**: Azure Front Door will always honor origin response header directive. If the origin directive is missing, Azure Front Door will cache contents anywhere from one to three days. * **Override always**: Azure Front Door will always override with the cache duration, meaning that it will cache the contents for the cache duration ignoring the values from origin response directives. This behavior will only be applied if the response is cacheable.
frontdoor Front Door Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-diagnostics.md
Title: Monitor metrics and logs in Azure Front Door (classic)
-description: This article describes the different metrics and access logs that Azure Front Door (classic) supports
+ Title: Monitor metrics and logs - Azure Front Door
+description: This article describes the different metrics and logs that Azure Front Door records.
Previously updated : 03/22/2022 Last updated : 02/23/2023
+zone_pivot_groups: front-door-tiers
-# Monitor metrics and logs in Azure Front Door (classic)
+# Monitor metrics and logs in Azure Front Door
+
+Azure Front Door provides several features to help you monitor your application, track requests, and debug your Front Door configuration.
+
+Logs and metrics are stored and managed by [Azure Monitor](../azure-monitor/overview.md).
++
+[Reports](standard-premium/how-to-reports.md) provide insight into how your traffic is flowing through Azure Front Door, the web application firewall (WAF), and to your application.
+
+## Metrics
+
+Azure Front Door measures and sends its metrics in 60-second intervals. The metrics can take up to 3 minutes to be processed by Azure Monitor, and they might not appear until processing is completed. Metrics can also be displayed in charts or grids, and are accessible through the Azure portal, Azure PowerShell, the Azure CLI, and the Azure Monitor APIs. For more information, see [Azure Monitor metrics](../azure-monitor/essentials/data-platform-metrics.md).
+
+The metrics listed in the following table are recorded and stored free of charge for a limited period of time. For an extra cost, you can store them for a longer period of time.
+
+| Metrics | Description | Dimensions |
+| - | - | - |
+| Byte Hit Ratio | The percentage of traffic that was served from the Azure Front Door cache, computed against the total egress traffic. The byte hit ratio is low if most of the traffic is forwarded to the origin rather than served from the cache. <br/><br/> **Byte Hit Ratio** = (egress from edge - egress from origin)/egress from edge. <br/><br/> Scenarios excluded from byte hit ratio calculations:<ul><li>You explicitly disable caching, either through the Rules Engine or query string caching behavior.</li><li>You explicitly configure a `Cache-Control` directive with the `no-store` or `private` cache directives.</li></ul> | Endpoint |
+| Origin Health Percentage | The percentage of successful health probes sent from Azure Front Door to origins.| Origin, Origin Group |
+| Origin Latency | The time calculated from when the request was sent by the Azure Front Door edge to the origin until Azure Front Door received the last response byte from the origin. | Endpoint, Origin |
+| Origin Request Count | The number of requests sent from Azure Front Door to origins. | Endpoint, Origin, HTTP Status, HTTP Status Group |
+| Percentage of 4XX | The percentage of all the client requests for which the response status code is 4XX. | Endpoint, Client Country, Client Region |
+| Percentage of 5XX | The percentage of all the client requests for which the response status code is 5XX. | Endpoint, Client Country, Client Region |
+| Request Count | The number of client requests served through Azure Front Door, including requests served entirely from the cache. | Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group |
+| Request Size | The number of bytes sent in requests from clients to Azure Front Door. | Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group |
+| Response Size | The number of bytes sent as responses from Front Door to clients. | Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group |
+| Total Latency | The total time taken from when the client request was received by Azure Front Door until the last response byte was sent from Azure Front Door to the client. |Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group |
+| Web Application Firewall Request Count | The number of requests processed by the Azure Front Door web application firewall. | Action, Policy Name, Rule Name |
+
+> [!NOTE]
+> If a request to the origin times out, the value of the *Http Status* dimension is **0**.
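+
+If you route these metrics to a Log Analytics workspace through a diagnostic setting, you can also chart them with a Kusto query. The following sketch is illustrative only: `AzureMetrics` is the standard destination table for platform metrics, but the metric names shown (which mirror the display names in the table above) are assumptions to verify against your workspace.
+
+```kusto
+// Illustrative sketch: hourly request count and average total latency, assuming the
+// profile's platform metrics are routed to the AzureMetrics table. The MetricName
+// values are assumptions based on the display names in the table above.
+AzureMetrics
+| where MetricName in ("RequestCount", "TotalLatency")
+| summarize total = sum(Total), avgValue = avg(Average) by MetricName, bin(TimeGenerated, 1h)
+| order by TimeGenerated desc
+```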
+
+## Logs
+
+Logs track all requests that pass through Azure Front Door. It can take a few minutes for logs to be processed and stored.
+
+There are multiple Front Door logs, which you can use for different purposes:
+
+- [Access logs](#access-log) can be used to identify slow requests, determine error rates, and understand how Front Door's caching behavior is working for your solution.
+- Web application firewall (WAF) logs can be used to detect potential attacks, and false positive detections that might indicate legitimate requests that the WAF blocked. For more information on the WAF logs, see [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md).
+- [Health probe logs](#health-probe-log) can be used to identify origins that are unhealthy or that don't respond to requests from some of Front Door's geographically distributed PoPs.
+- [Activity logs](#activity-logs) provide visibility into the operations performed on your Azure resources, such as configuration changes to your Azure Front Door profile.
+
+The access log and the web application firewall log include a *tracking reference*, which is also propagated in requests to origins and to client responses by using the `X-Azure-Ref` header. You can use the tracking reference to gain an end-to-end view of your application request processing.
+
+Access logs, health probe logs, and WAF logs aren't enabled by default. To enable and store your diagnostic logs, see [Configure Azure Front Door logs](./standard-premium/how-to-logs.md). Activity log entries are collected by default, and you can view them in the Azure portal.
+
+## <a name="access-log"></a>Access log
+
+Information about every request is logged into the access log. Each access log entry contains the information listed in the following table.
+
+| Property | Description |
+|-|-|
+| TrackingReference | The unique reference string that identifies a request served by Azure Front Door. The tracking reference is sent to the client and to the origin by using the `X-Azure-Ref` headers. Use the tracking reference when searching for a specific request in the access or WAF logs. |
+| Time | The date and time when the Azure Front Door edge delivered the requested content to the client (in UTC). |
+| HttpMethod | HTTP method used by the request: DELETE, GET, HEAD, OPTIONS, PATCH, POST, or PUT. |
+| HttpVersion | The HTTP version that the client specified in the request. |
+| RequestUri | The URI of the received request. This field contains the full scheme, port, domain, path, and query string. |
+| HostName | The host name in the request from client. If you enable custom domains and have wildcard domain (`*.contoso.com`), the HostName log field's value is `subdomain-from-client-request.contoso.com`. If you use the Azure Front Door domain (`contoso-123.z01.azurefd.net`), the HostName log field's value is `contoso-123.z01.azurefd.net`. |
+| RequestBytes | The size of the HTTP request message in bytes, including the request headers and the request body. |
+| ResponseBytes | The size of the HTTP response message in bytes. |
+| UserAgent | The user agent that the client used. Typically, the user agent identifies the browser type. |
+| ClientIp | The IP address of the client that made the original request. If there was an `X-Forwarded-For` header in the request, then the client IP address is taken from the header. |
+| SocketIp | The IP address of the direct connection to the Azure Front Door edge. If the client used an HTTP proxy or a load balancer to send the request, the value of SocketIp is the IP address of the proxy or load balancer. |
+| timeTaken | The length of time from when the Azure Front Door edge received the client's request to the time that Azure Front Door sent the last byte of the response to the client, in seconds. This field doesn't take into account network latency and TCP buffering. |
+| RequestProtocol | The protocol that the client specified in the request. Possible values include: **HTTP**, **HTTPS**. |
+| SecurityProtocol | The TLS/SSL protocol version used by the request, or null if the request didn't use encryption. Possible values include: **SSLv3**, **TLSv1**, **TLSv1.1**, **TLSv1.2**. |
+| SecurityCipher | When the value for the request protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and Azure Front Door. |
+| Endpoint | The domain name of the Azure Front Door endpoint, such as `contoso-123.z01.azurefd.net`. |
+| HttpStatusCode | The HTTP status code returned from Azure Front Door. If the request to the origin timed out, the value for the HttpStatusCode field is **0**. If the client closed the connection, the value for the HttpStatusCode field is **499**. |
+| Pop | The Azure Front Door edge point of presence (PoP) that responded to the user request. |
+| Cache Status | How the request was handled by the Azure Front Door cache. Possible values are: <ul><li>**HIT** and **REMOTE_HIT**: The HTTP request was served from the Azure Front Door cache.</li><li>**MISS**: The HTTP request was served from origin. </li><li> **PARTIAL_HIT**: Some of the bytes were served from the Front Door edge PoP cache, and other bytes were served from the origin. This status indicates an [object chunking](./front-door-caching.md#delivery-of-large-files) scenario. </li><li> **CACHE_NOCONFIG**: The request was forwarded without caching settings, including bypass scenarios. </li><li> **PRIVATE_NOSTORE**: There was no cache configured in the caching settings by the customer. </li><li> **N/A**: The request was denied by a signed URL or the Rules Engine.</li></ul> |
+| MatchedRulesSetName | The names of the Rules Engine rules that were processed. |
+| RouteName | The name of the route that the request matched. |
+| ClientPort | The IP port of the client that made the request. |
+| Referrer | The URL of the site that originated the request. |
+| TimetoFirstByte | The length of time, in seconds, from when the Azure Front Door edge received the request to the time the first byte was sent to client, as measured by Azure Front Door. This property doesn't measure the client data. |
+| ErrorInfo | If an error occurred during the processing of the request, this field provides detailed information about the error. Possible values are: <ul><li> **NoError**: Indicates no error was found. </li><li> **CertificateError**: Generic SSL certificate error. </li><li> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match the requested URL. </li><li> **ClientDisconnected**: The request failed because of a client network connection issue. </li><li> **ClientGeoBlocked**: The client was blocked due to the geographical location of the IP address. </li><li> **UnspecifiedClientError**: Generic client error. </li><li> **InvalidRequest**: Invalid request. This response indicates a malformed header, body, or URL. </li><li> **DNSFailure**: A failure occurred during DNS resolution. </li><li> **DNSTimeout**: The DNS query to resolve the origin IP address timed out. </li><li> **DNSNameNotResolved**: The server name or address couldn't be resolved. </li><li> **OriginConnectionAborted**: The connection with the origin was disconnected abnormally. </li><li> **OriginConnectionError**: Generic origin connection error. </li><li> **OriginConnectionRefused**: The connection with the origin wasn't established. </li><li> **OriginError**: Generic origin error. </li><li> **OriginInvalidRequest**: An invalid request was sent to the origin. </li><li> **ResponseHeaderTooBig**: The origin returned a response header that was too large. </li><li> **OriginInvalidResponse**: The origin returned an invalid or unrecognized response. </li><li> **OriginTimeout**: The timeout period for the origin request expired. </li><li> **RestrictedIP**: The request was blocked because of a restricted IP address. </li><li> **SSLHandshakeError**: Azure Front Door was unable to establish a connection with the origin because of an SSL handshake failure. </li><li> **SSLInvalidRootCA**: The root certification authority's certificate was invalid. </li><li> **SSLInvalidCipher**: The HTTPS connection was established using an invalid cipher. </li><li> **UnspecifiedError**: An error occurred that didn't fit in any of the errors in the table. </li></ul> |
+| OriginURL | The full URL of the origin where the request was sent. The URL is composed of the scheme, host header, port, path, and query string. <br> **URL rewrite**: If the request URL was rewritten by the Rules Engine, the path refers to the rewritten path. <br> **Cache on edge PoP**: If the request was served from the Azure Front Door cache, the origin is **N/A**. <br> **Large request**: If the requested content is large and there are multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Object Chunking](./front-door-caching.md#delivery-of-large-files). |
+| OriginIP | The IP address of the origin that served the request. <br> **Cache on edge PoP**: If the request was served from the Azure Front Door cache, the origin is **N/A**. <br> **Large request**: If the requested content is large and there are multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Object Chunking](./front-door-caching.md#delivery-of-large-files). |
+| OriginName| The full hostname (DNS name) of the origin. <br> **Cache on edge PoP**: If the request was served from the Azure Front Door cache, the origin is **N/A**. <br> **Large request**: If the requested content is large and there are multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Object Chunking](./front-door-caching.md#delivery-of-large-files). |
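+
+As a quick illustration of how these fields fit together, the following Kusto sketch estimates a cache hit rate from the access log. It assumes the log is collected into the `AzureDiagnostics` table; the category value and the suffixed column name are assumptions that depend on your collection mode, so adjust them to your workspace schema.
+
+```kusto
+// Illustrative sketch: approximate cache hit rate, based on the Cache Status values
+// described above (HIT, REMOTE_HIT, PARTIAL_HIT). The table, category, and suffixed
+// column name (cacheStatus_s) are assumptions to verify in your workspace.
+AzureDiagnostics
+| where Category == "FrontDoorAccessLog"
+| summarize cacheHits = countif(cacheStatus_s in ("HIT", "REMOTE_HIT", "PARTIAL_HIT")), requests = count()
+| extend cacheHitRatePercent = round(100.0 * cacheHits / requests, 2)
+```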
+
+## Health probe log
+
+Azure Front Door logs every failed health probe request. These logs can help you to diagnose problems with an origin. The logs provide you with information that you can use to investigate the failure reason and then bring the origin back to a healthy status.
+
+Scenarios where this log can be useful include:
+
+- You noticed Azure Front Door traffic was sent to a subset of the origins. For example, you might have noticed that only three out of four origins receive traffic. You want to know if the origins are receiving and responding to health probes so you know whether the origins are healthy.
+- You noticed the origin health percentage metric is lower than you expected. You want to know which origins are recorded as unhealthy and the reason for the health probe failures.
+
+Each health probe log entry has the following schema:
+
+| Property | Description |
+| - | - |
+| HealthProbeId | A unique ID to identify the health probe request. |
+| Time | The date and time when the health probe was sent (in UTC). |
+| HttpMethod | The HTTP method used by the health probe request. Values include **GET** and **HEAD**, based on the health probe's configuration. |
+| Result | The status of the health probe. The value is either **success** or a description of the error the probe received. |
+| HttpStatusCode | The HTTP status code returned by the origin. |
+| ProbeURL | The full target URL to where the probe request was sent. The URL is composed of the scheme, host header, path, and query string. |
+| OriginName | The name of the origin that the health probe was sent to. This field helps you to locate origins of interest if the origin is configured to use an FQDN. |
+| POP | The edge PoP that sent the probe request. |
+| Origin IP | The IP address of the origin that the health probe was sent to. |
+| TotalLatency | The time from when the Azure Front Door edge sent the health probe request to the origin to when the origin sent the last response to Azure Front Door. |
+| ConnectionLatency| The time spent setting up the TCP connection to send the HTTP probe request to the origin. |
+| DNSResolution Latency | The time spent on DNS resolution. This field only has a value if the origin is configured as an FQDN instead of an IP address. If the origin is configured to use an IP address, the value is **N/A**. |
+
+The following example JSON snippet shows a health probe log entry for a failed health probe request.
+
+```json
+{
+ "records": [
+ {
+ "time": "2021-02-02T07:15:37.3640748Z",
+ "resourceId": "/SUBSCRIPTIONS/27CAFCA8-B9A4-4264-B399-45D0C9CCA1AB/RESOURCEGROUPS/AFDXPRIVATEPREVIEW/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDXPRIVATEPREVIEW-JESSIE",
+ "category": "FrontDoorHealthProbeLog",
+ "operationName": "Microsoft.Cdn/Profiles/FrontDoorHealthProbeLog/Write",
+ "properties": {
+ "healthProbeId": "9642AEA07BA64675A0A7AD214ACF746E",
+ "POP": "MAA",
+ "httpVerb": "HEAD",
+ "result": "OriginError",
+ "httpStatusCode": "400",
+ "probeURL": "http://afdxprivatepreview.blob.core.windows.net:80/",
+ "originName": "afdxprivatepreview.blob.core.windows.net",
+ "originIP": "52.239.224.228:80",
+ "totalLatencyMilliseconds": "141",
+ "connectionLatencyMilliseconds": "68",
+ "DNSLatencyMicroseconds": "1814"
+ }
+ }
+ ]
+}
+```
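+
+To turn these records into a quick view of which origins are failing probes and why, you can query them in Log Analytics. The following sketch is illustrative only: it assumes the log is collected into the `AzureDiagnostics` table with the category shown in the sample above, and the suffixed column names are assumptions to adjust for your workspace.
+
+```kusto
+// Illustrative sketch: count failed health probes per origin and failure reason.
+// The category matches the sample record above; the suffixed column names
+// (result_s, originName_s) are assumptions to verify in your workspace.
+AzureDiagnostics
+| where Category == "FrontDoorHealthProbeLog"
+| where result_s != "success"
+| summarize failedProbes = count() by originName_s, result_s, bin(TimeGenerated, 30m)
+| order by TimeGenerated desc
+```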
+
+## Web application firewall log
+
+For more information on the Front Door web application firewall (WAF) logs, see [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md).
+
+## Activity logs
+
+Activity logs provide information about the management operations on your Azure Front Door resources. The logs include details about each write operation that was performed on an Azure Front Door resource, including when the operation occurred, who performed it, and what the operation was.
+
+> [!NOTE]
+> Activity logs don't include read operations. They also might not include all operations that you perform by using either the Azure portal or classic management APIs.
+
+For more information, see [View your activity logs](./standard-premium/how-to-logs.md#view-your-activity-logs).
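+
+If you also collect the activity log in a Log Analytics workspace, a short query shows who changed the profile and when. This is a sketch only: `AzureActivity` is the standard activity log table in Azure Monitor, and the resource provider filter shown here is an assumption for Azure Front Door Standard/Premium profiles.
+
+```kusto
+// Illustrative sketch: recent write operations on the profile from the activity log.
+// The Microsoft.Cdn resource provider filter is an assumption for Standard/Premium profiles.
+AzureActivity
+| where ResourceProviderValue =~ "Microsoft.Cdn"
+| where OperationNameValue endswith "write"
+| project TimeGenerated, OperationNameValue, Caller, ActivityStatusValue, _ResourceId
+| order by TimeGenerated desc
+```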
+
+## Next steps
+
+To enable and store your diagnostic logs, see [Configure Azure Front Door logs](./standard-premium/how-to-logs.md).
++ When using Azure Front Door (classic), you can monitor resources in the following ways:
Metrics are a feature for certain Azure resources that allow you to view perform
| BackendHealthPercentage | Backend Health Percentage | Percent | Backend</br>BackendPool | The percentage of successful health probes from Front Door to backends. | | WebApplicationFirewallRequestCount | Web Application Firewall Request Count | Count | PolicyName</br>RuleName</br>Action | The number of client requests processed by the application layer security of Front Door. |
-> [!NOTE]
-> Activity log doesn't include any GET operations or operations that you perform by using either the Azure portal or the original Management API.
->
- ## <a name="activity-log"></a>Activity logs Activity logs provide information about the operations done on an Azure Front Door (classic) profile. They also determine the what, who, and when for any write operations (put, post, or delete) done against an Azure Front Door (classic) profile. >[!NOTE]
->If a request to the the origin timeout, the value for HttpStatusCode is set to **0**.
+>If a request to the origin times out, the value for HttpStatusCode is set to **0**.
Access activity logs in your Front Door or all the logs of your Azure resources in Azure Monitor. To view activity logs: 1. Select your Front Door instance.
-2. Select **Activity log**.
+
+1. Select **Activity log**.
:::image type="content" source="./media/front-door-diagnostics/activity-log.png" alt-text="Activity log":::
-3. Choose a filtering scope, and then select **Apply**.
+1. Choose a filtering scope, and then select **Apply**.
+
+> [!NOTE]
+> Activity log doesn't include any GET operations or operations that you perform by using either the Azure portal or the original Management API.
+>
## <a name="diagnostic-logging"></a>Diagnostic logs+ Diagnostic logs provide rich information about operations and errors that are important for auditing and troubleshooting. Diagnostic logs differ from activity logs. Activity logs provide insights into the operations done on Azure resources. Diagnostic logs provide insight into operations that your resource has done. For more information, see [Azure Monitor diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md).
To configure diagnostic logs for your Azure Front Door (classic):
1. Select your Azure Front Door (classic) profile.
-2. Choose **Diagnostic settings**.
+1. Choose **Diagnostic settings**.
-3. Select **Turn on diagnostics**. Archive diagnostic logs along with metrics to a storage account, stream them to an event hub, or send them to Azure Monitor logs.
+1. Select **Turn on diagnostics**. Archive diagnostic logs along with metrics to a storage account, stream them to an event hub, or send them to Azure Monitor logs.
Front Door currently provides diagnostic logs. Diagnostic logs provide individual API requests with each entry having the following schema:
Front Door currently provides diagnostic logs. Diagnostic logs provide individua
| ClientIp | The IP address of the client that made the request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the same. | | ClientPort | The IP port of the client that made the request. | | HttpMethod | HTTP method used by the request. |
-| HttpStatusCode | The HTTP status code returned from the proxy. If a request to the the origin timeout, the value for HttpStatusCode is set to **0**.|
+| HttpStatusCode | The HTTP status code returned from the proxy. If a request to the origin times out, the value for HttpStatusCode is set to **0**.|
| HttpStatusDetails | Resulting status on the request. Meaning of this string value can be found at a Status reference table. | | HttpVersion | Type of the request or connection. | | POP | Short name of the edge where the request landed. |
Front Door currently provides diagnostic logs. Diagnostic logs provide individua
| TimeTaken | The length of time from first byte of request into Front Door to last byte of response out, in seconds. | | TrackingReference | The unique reference string that identifies a request served by Front Door, also sent as X-Azure-Ref header to the client. Required for searching details in the access logs for a specific request. | | UserAgent | The browser type that the client used. |
-| ErrorInfo | This field contains the specific type of error for further troubleshooting. </br> Possible values include: </br> **NoError**: Indicates no error was found. </br> **CertificateError**: Generic SSL certificate error.</br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. </br> **ClientDisconnected**: Request failure because of client network connection. </br> **UnspecifiedClientError**: Generic client error. </br> **InvalidRequest**: Invalid request. It might occur because of malformed header, body, and URL. </br> **DNSFailure**: DNS Failure. </br> **DNSNameNotResolved**: The server name or address couldn't be resolved. </br> **OriginConnectionAborted**: The connection with the origin was stopped abruptly. </br> **OriginConnectionError**: Generic origin connection error. </br> **OriginConnectionRefused**: The connection with the origin wasn't able to established. </br> **OriginError**: Generic origin error. </br> **OriginInvalidResponse**: Origin returned an invalid or unrecognized response. </br> **OriginTimeout**: The timeout period for origin request expired. </br> **ResponseHeaderTooBig**: The origin returned too large of a response header. </br> **RestrictedIP**: The request was blocked because of restricted IP. </br> **SSLHandshakeError**: Unable to establish connection with origin because of SSL hand shake failure. </br> **UnspecifiedError**: An error occurred that didnΓÇÖt fit in any of the errors in the table. </br> **SSLMismatchedSNI**:The request was invalid because the HTTP message header did not match the value presented in the TLS SNI extension during SSL/TLS connection setup.|
+| ErrorInfo | This field contains the specific type of error for further troubleshooting. </br> Possible values include: </br> **NoError**: Indicates no error was found. </br> **CertificateError**: Generic SSL certificate error.</br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. </br> **ClientDisconnected**: Request failure because of a client network connection issue. </br> **UnspecifiedClientError**: Generic client error. </br> **InvalidRequest**: Invalid request. It might occur because of a malformed header, body, or URL. </br> **DNSFailure**: DNS failure. </br> **DNSNameNotResolved**: The server name or address couldn't be resolved. </br> **OriginConnectionAborted**: The connection with the origin was stopped abruptly. </br> **OriginConnectionError**: Generic origin connection error. </br> **OriginConnectionRefused**: The connection with the origin couldn't be established. </br> **OriginError**: Generic origin error. </br> **OriginInvalidResponse**: The origin returned an invalid or unrecognized response. </br> **OriginTimeout**: The timeout period for the origin request expired. </br> **ResponseHeaderTooBig**: The origin returned a response header that was too large. </br> **RestrictedIP**: The request was blocked because of a restricted IP address. </br> **SSLHandshakeError**: Unable to establish a connection with the origin because of an SSL handshake failure. </br> **UnspecifiedError**: An error occurred that didn't fit in any of the errors in the table. </br> **SSLMismatchedSNI**: The request was invalid because the HTTP message header didn't match the value presented in the TLS SNI extension during SSL/TLS connection setup.|
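
As an illustration of how this schema can be queried, the following sketch counts classic access log entries by error type. It's a sketch only: it assumes the diagnostic logs are sent to a Log Analytics workspace (the `AzureDiagnostics` table); the `isReceivedFromClient_b` filter is explained in the origin shield section that follows, while `errorInfo_s` is an assumed suffixed column name to verify in your workspace.

```kusto
// Illustrative sketch: count Azure Front Door (classic) access log entries by error type,
// keeping only the client-facing (edge) entries so origin shield entries aren't double-counted.
// errorInfo_s is an assumed suffixed column name; adjust it to your workspace schema.
AzureDiagnostics
| where Category == "FrontdoorAccessLog"
| where isReceivedFromClient_b == true
| summarize requests = count() by errorInfo_s, bin(TimeGenerated, 1h)
| order by TimeGenerated desc
```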
### Sent to origin shield deprecation
The raw log property **isSentToOriginShield** has been deprecated and replaced by a new field **isReceivedFromClient**. Use the new field if you're already using the deprecated field. Raw logs include logs generated from both CDN edge (child POP) and origin shield. Origin shield refers to parent nodes that are strategically located across the globe. These nodes communicate with origin servers and reduce the traffic load on origin.
-For every request that goes to origin shield, there are 2-log entries:
+For every request that goes to an origin shield, there are two log entries:
* One for edge nodes
* One for origin shield.
If the value is false, then it means the request is served from the origin shield
| where Category == "FrontdoorAccessLog" and isReceivedFromClient_b == true` > [!NOTE]
-> For various routing configurations and traffic behaviors, some of the fields like backendHostname, cacheStatus, isReceivedFromClient, and POP field may respond with different values. The below table explains the different values these fields will have for various scenarios:
+> For various routing configurations and traffic behaviors, some of the fields, such as backendHostname, cacheStatus, isReceivedFromClient, and POP, might have different values. The following table explains the values these fields have in various scenarios:
| Scenarios | Count of log entries | POP | BackendHostname | isReceivedFromClient | CacheStatus |
| - | - | - | - | - | - |
After the chunk arrives at the Azure Front Door edge, it's cached and immediatel
- Learn how to [create an Azure Front Door (classic) profile](quickstart-create-front-door.md)
- Learn [how Azure Front Door (classic) works](front-door-routing-architecture.md)
frontdoor Front Door Rules Engine Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine-actions.md
Use these settings to control how files get cached for requests that contain que
| Cache behavior | Description |
| -- | -- |
-| Ignore query strings | Once the asset is cached, all ensuing requests ignore the query strings until the cached asset expires. |
-| Cache every unique URL | Each request with a unique URL, including the query string, is treated as a unique asset with its own cache. |
-| Ignore specified query strings | Request URL query strings listed in "Query parameters" setting are ignored for caching. |
-| Include specified query strings | Request URL query strings listed in "Query parameters" setting are used for caching. |
+| Ignore Query String | Once the asset is cached, all ensuing requests ignore the query strings until the cached asset expires. |
+| Use Query String | Each request with a unique URL, including the query string, is treated as a unique asset with its own cache. |
+| Ignore Specified Query Strings | Request URL query strings listed in the "Query parameters" setting are ignored for caching. |
+| Include Specified Query Strings | Request URL query strings listed in the "Query parameters" setting are used for caching. |
| Additional fields | Description |
frontdoor Scenario Storage Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/scenario-storage-blobs.md
As a content delivery network (CDN), Front Door caches the content at its global
#### Authentication
-Front Door is designed to be internet-facing, and this scenario is optimized for publicly available blobs. If you need to authenticate access to blobs, consider using [shared access signatures](../storage/common/storage-sas-overview.md), and ensure that you enable the [*Cache every unique URL* query string behavior](front-door-caching.md#query-string-behavior) to avoid Front Door from serving requests to unauthenticated clients. However, this approach might not make effective use of the Front Door cache, because each request with a different shared access signature must be sent to the origin separately.
+Front Door is designed to be internet-facing, and this scenario is optimized for publicly available blobs. If you need to authenticate access to blobs, consider using [shared access signatures](../storage/common/storage-sas-overview.md), and ensure that you enable the [*Use Query String* query string behavior](front-door-caching.md#query-string-behavior) to prevent Front Door from serving requests to unauthenticated clients. However, this approach might not make effective use of the Front Door cache, because each request with a different shared access signature must be sent to the origin separately.
#### Origin security
frontdoor How To Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-logs.md
Title: 'Logs - Azure Front Door'
-description: This article explains how Azure Front Door tracks and monitor your environment with logs.
+description: This article explains how to configure Azure Front Door logs.
Previously updated : 01/16/2023 Last updated : 02/23/2023
-# Azure Front Door logs
+# Configure Azure Front Door logs
-Azure Front Door provides different logging to help you track, monitor, and debug your Front Door.
+Azure Front Door captures several types of logs. Logs can help you monitor your application, track requests, and debug your Front Door configuration. For more information about Azure Front Door's logs, see [Monitor metrics and logs in Azure Front Door](../front-door-diagnostics.md).
-* Access logs have detailed information about every request that AFD receives and help you analyze and monitor access patterns, and debug issues.
-* Activity logs provide visibility into the operations done on Azure resources.
-* Health probe logs provide the logs for every failed probe to your origin.
-* Web Application Firewall (WAF) logs provide detailed information of requests that gets logged through either detection or prevention mode of an Azure Front Door endpoint. A custom domain that gets configured with WAF can also be viewed through these logs. For more information on WAF logs, see [Azure Web Application Firewall monitoring and logging](../../web-application-firewall/afds/waf-front-door-monitor.md#waf-logs).
-
-Access logs, health probe logs and WAF logs aren't enabled by default. Use the steps below to enable logging. Activity log entries are collected by default, and you can view them in the Azure portal. Logs can have delays up to a few minutes.
-
-You have three options for storing your logs:
-
-* **Storage account:** Storage accounts are best used for scenarios when logs are stored for a longer duration and reviewed when needed.
-* **Event hubs:** Event hubs are a great option for integrating with other security information and event management (SIEM) tools or external data stores. For example: Splunk/DataDog/Sumo.
-* **Azure Log Analytics:** Azure Log Analytics in Azure Monitor is best used for general real-time monitoring and analysis of Azure Front Door performance.
+Access logs, health probe logs, and WAF logs aren't enabled by default. In this article, you'll learn how to enable diagnostic logs for your Azure Front Door profile.
## Configure logs
You have three options for storing your logs:
1. Select the **Destination details**. Destination options are:
 * **Send to Log Analytics**
- * Select the *Subscription* and *Log Analytics workspace*.
+ * Azure Log Analytics in Azure Monitor is best used for general real-time monitoring and analysis of Azure Front Door performance.
+ * Select the *Subscription* and *Log Analytics workspace*.
* **Archive to a storage account**
- * Select the *Subscription* and the *Storage Account*. and set **Retention (days)**.
+ * Storage accounts are best used for scenarios when logs are stored for a longer duration and are reviewed when needed.
+ * Select the *Subscription* and the *Storage Account*, and set **Retention (days)**.
* **Stream to an event hub**
- * Select the *Subscription, Event hub namespace, Event hub name (optional)*, and *Event hub policy name*.
+ * Event hubs are a great option for integrating with security information and event management (SIEM) tools or external data stores, such as Splunk, DataDog, or Sumo.
+ * Select the *Subscription, Event hub namespace, Event hub name (optional)*, and *Event hub policy name*.
+
+ > [!TIP]
+ > Most Azure customers use Log Analytics.
:::image type="content" source="../media/how-to-logging/front-door-logging-2.png" alt-text="Screenshot of diagnostic settings page."::: 1. Click on **Save**.
-## Access log
-
-Azure Front Door currently provides individual API requests with each entry having the following schema and logged in JSON format as shown below.
-
-| Property | Description |
-|-|-|
-| TrackingReference | The unique reference string that identifies a request served by AFD, also sent as X-Azure-Ref header to the client. Required for searching details in the access logs for a specific request. |
-| Time | The date and time when the AFD edge delivered requested contents to client (in UTC). |
-| HttpMethod | HTTP method used by the request: DELETE, GET, HEAD, OPTIONS, PATCH, POST, or PUT. |
-| HttpVersion | The HTTP version that the viewer specified in the request. |
-| RequestUri | URI of the received request. This field is a full scheme, port, domain, path, and query string |
-| HostName | The host name in the request from client. If you enable custom domains and have wildcard domain (*.contoso.com), hostname is a.contoso.com. if you use Azure Front Door domain (contoso.azurefd.net), hostname is contoso.azurefd.net. |
-| RequestBytes | The size of the HTTP request message in bytes, including the request headers and the request body. The number of bytes of data that the viewer included in the request, including headers. |
-| ResponseBytes | Bytes sent by the backend server as the response. |
-| UserAgent | The browser type that the client used. |
-| ClientIp | The IP address of the client that made the original request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the same. |
-| SocketIp | The IP address of the direct connection to AFD edge. If the client used an HTTP proxy or a load balancer to send the request, the value of SocketIp is the IP address of the proxy or load balancer. |
-| timeTaken | The length of time from the time AFD edge server receives a client's request to the time that AFD sends the last byte of response to client, in seconds. This field doesn't take into account network latency and TCP buffering. |
-| RequestProtocol | The protocol that the client specified in the request: HTTP, HTTPS. |
-| SecurityProtocol | The TLS/SSL protocol version used by the request or null if no encryption. Possible values include: SSLv3, TLSv1, TLSv1.1, TLSv1.2 |
-| SecurityCipher | When the value for Request Protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and AFD for encryption. |
-| Endpoint | The domain name of AFD endpoint, for example, contoso.z01.azurefd.net |
-| HttpStatusCode | The HTTP status code returned from Azure Front Door. If a request to the origin times out, the value for HttpStatusCode is set to **0**.|
-| Pop | The edge pop, which responded to the user request. |
-| Cache Status | Provides the status code of how the request gets handled by the CDN service when it comes to caching. Possible values are:<ul><li>`HIT` and `REMOTE_HIT`: The HTTP request was served from the Front Door cache.</li><li>`MISS`: The HTTP request was served from the origin.</li><li> `PARTIAL_HIT`: Some of the bytes from a request were served from the Front Door cache, and some of the bytes were served from origin. This status occurs in [object chunking](../front-door-caching.md#delivery-of-large-files) scenarios.</li><li>`CACHE_NOCONFIG`: Request was forwarded without caching settings, including bypass scenario.</li><li>`PRIVATE_NOSTORE`: No cache configured in caching settings by customers.</li><li>`N/A`: The request was denied by a signed URL or the rules engine.</li></ul> |
-| MatchedRulesSetName | The names of the rules that were processed. |
-| RouteNameΓÇ»| The name of the route that the request matched. |
-| ClientPort | The IP port of the client that made the request. |
-| Referrer | The URL of the site that originated the request. |
-| TimeToFirstByte | The length of time in seconds from AFD receives the request to the time the first byte gets sent to client, as measured on Azure Front Door. This property doesn't measure the client data. |
-| ErrorInfo | This field provides detailed info of the error token for each response. Possible values are:<ul><li>`NoError`: Indicates no error was found.</li><li>`CertificateError`: Generic SSL certificate error.</li><li>`CertificateNameCheckFailed`: The host name in the SSL certificate is invalid or doesn't match.</li><li>`ClientDisconnected`: Request failure because of client network connection.</li><li>`ClientGeoBlocked`: The client was blocked due geographical location of the IP.</li><li>`UnspecifiedClientError`: Generic client error.</li><li>`InvalidRequest`: Invalid request. It might occur because of malformed header, body, and URL.</li><li>`DNSFailure`: DNS Failure.</li><li>`DNSTimeout`: The DNS query to resolve the backend timed out.</li><li>`DNSNameNotResolved`: The server name or address couldn't be resolved.</li><li>`OriginConnectionAborted`: The connection with the origin was disconnected abnormally.</li><li>`OriginConnectionError`: Generic origin connection error.</li><li>`OriginConnectionRefused`: The connection with the origin wasn't established.</li><li>`OriginError`: Generic origin error.</li><li>`OriginInvalidRequest`: An invalid request was sent to the origin.</li><li>`ResponseHeaderTooBig`: The origin returned a too large of a response header.</li><li>`OriginInvalidResponse`:` Origin returned an invalid or unrecognized response.</li><li>`OriginTimeout`: The timeout period for origin request expired.</li><li>`ResponseHeaderTooBig`: The origin returned a too large of a response header.</li><li>`RestrictedIP`: The request was blocked because of restricted IP.</li><li>`SSLHandshakeError`: Unable to establish connection with origin because of SSL hand shake failure.</li><li>`SSLInvalidRootCA`: The RootCA was invalid.</li><li>`SSLInvalidCipher`: Cipher was invalid for which the HTTPS connection was established.</li><li>`OriginConnectionAborted`: The connection with the origin was disconnected abnormally.</li><li>`OriginConnectionRefused`: The connection with the origin wasn't established.</li><li>`UnspecifiedError`: An error occurred that didnΓÇÖt fit in any of the errors in the table.</li></ul> |
-| OriginURL | The full URL of the origin where requests are being sent. Composed of the scheme, host header, port, path, and query string. <br> **URL rewrite**: If there's a URL rewrite rule in Rule Set, path refers to rewritten path. <br> **Cache on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files). |
-| OriginIP | The origin IP that served the request. <br> **Cache hit on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files) |
-| OriginName| The full DNS name (hostname in origin URL) to the origin. <br> **Cache hit on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files) |
-
-## Health Probe Log
-
-Health probe logs provide logging for every failed probe to help you diagnose your origin. The logs will provide you information that you can use to bring the origin back to service. Some scenarios this log can be useful for are:
-
-* You noticed Azure Front Door traffic was sent to some of the origins. For example, only three out of four origins receiving traffic. You want to know if the origins are receiving probes and if not the reason for the failure. 
-
-* You noticed the origin health % is lower than expected and want to know which origin failed and the reason of the failure.
-
-### Health probe log properties
-
-Each health probe log has the following schema.
-
-| Property | Description |
-| | |
-| HealthProbeId | A unique ID to identify the request. |
-| Time | Probe complete time |
-| HttpMethod | HTTP method used by the health probe request. Values include GET and HEAD, based on health probe configurations. |
-| Result | Status of health probe to origin, value includes success, and other error text. |
-| HttpStatusCode | The HTTP status code returned from the origin. |
-| ProbeURL (target) | The full URL of the origin where requests are being sent. Composed of the scheme, host header, path, and query string. |
-| OriginName | The origin where requests are being sent. This field helps locate origins of interest if origin is configured to FDQN. |
-| POP | The edge pop, which sent out the probe request. |
-| Origin IP | Target origin IP. This field is useful in locating origins of interest if you configure origin using FDQN. |
-| TotalLatency | The time from AFDX edge sends the request to origin to the time origin sends the last response to AFDX edge. |
-| ConnectionLatency| Duration Time spent on setting up the TCP connection to send the HTTP Probe request to origin. |
-| DNSResolution Latency | Duration Time spent on DNS resolution if the origin is configured to be an FDQN instead of IP. N/A if the origin is configured to IP. |
-
-The following example shows a health probe log entry, in JSON format.
-
-```json
-{
- "records": [
- {
- "time": "2021-02-02T07:15:37.3640748Z",
- "resourceId": "/SUBSCRIPTIONS/27CAFCA8-B9A4-4264-B399-45D0C9CCA1AB/RESOURCEGROUPS/AFDXPRIVATEPREVIEW/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDXPRIVATEPREVIEW-JESSIE",
- "category": "FrontDoorHealthProbeLog",
- "operationName": "Microsoft.Cdn/Profiles/FrontDoorHealthProbeLog/Write",
- "properties": {
- "healthProbeId": "9642AEA07BA64675A0A7AD214ACF746E",
- "POP": "MAA",
- "httpVerb": "HEAD",
- "result": "OriginError",
- "httpStatusCode": "400",
- "probeURL": "http://afdxprivatepreview.blob.core.windows.net:80/",
- "originName": "afdxprivatepreview.blob.core.windows.net",
- "originIP": "52.239.224.228:80",
- "totalLatencyMilliseconds": "141",
- "connectionLatencyMilliseconds": "68",
- "DNSLatencyMicroseconds": "1814"
- }
- }
- ]
-}
-```
-
-## Activity logs
-
-Activity logs provide information about the operations done on Azure Front Door Standard/Premium. The logs include details about what, who and when a write operation was done on Azure Front Door.
-
-> [!NOTE]
-> Activity logs don't include GET operations. They also don't include operations that you perform by using either the Azure portal or the original Management API.
-
-Access activity logs in your Front Door or all the logs of your Azure resources in Azure Monitor.
+## View your activity logs
To view activity logs:
frontdoor How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-monitor-metrics.md
Previously updated : 03/20/2022 Last updated : 02/23/2023
-# Real-time Monitoring in Azure Front Door
+# Real-time monitoring in Azure Front Door
-Azure Front Door is integrated with Azure Monitor and has 11 metrics to help monitor Azure Front Door in real-time to track, troubleshoot, and debug issues.
+Azure Front Door is integrated with Azure Monitor. You can use metrics in real time to measure traffic to your application, and to track, troubleshoot, and debug issues.
-Azure Front Door measures and sends its metrics in 60-second intervals. The metrics can take up to 3 mins to appear in the portal. Metrics can be displayed in charts or grid of your choice and are accessible via portal, PowerShell, CLI, and API. For more information, seeΓÇ»[Azure Monitor metrics](../../azure-monitor/essentials/data-platform-metrics.md).
+You can also configure alerts for each metric, such as a threshold for the 4XXErrorRate or 5XXErrorRate metric. When the error rate exceeds the threshold, an alert is triggered as configured. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/alerts/alerts-metric.md).
-The default metrics are free of charge. You can enable additional metrics for an extra cost.
+## Access metrics in the Azure portal
-You can configure alerts for each metric such as a threshold for 4XXErrorRate or 5XXErrorRate. When the error rate exceeds the threshold, it will trigger an alert as configured. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/alerts/alerts-metric.md).
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile.
-## Metrics supported in Azure Front Door
+1. Under **Monitoring**, select **Metrics**.
-| Metrics | Description | Dimensions |
-| - | - | - |
-| Bytes Hit ratio | The percentage of egress from AFD cache, computed against the total egress.ΓÇ»</br> **Byte Hit Ratio** = (egress from edge - egress from origin)/egress from edge. </br> **Scenarios excluded in bytes hit ratio calculation**:</br> 1. You explicitly configure no cache either through Rules Engine or Query String caching behavior. </br> 2. You explicitly configure cache-control directive with no-store or private cache. </br>3. Byte hit ratio can be low if most of the traffic is forwarded to origin rather than served from caching based on your configurations or scenarios. | Endpoint |
-| RequestCount | The number of client requests served by CDN. | Endpoint, client country, client region, HTTP status, HTTP status group |
-| ResponseSize | The number of bytes sent as responses from Front Door to clients. |Endpoint, client country, client region, HTTP status, HTTP status group |
-| TotalLatency | The total time from the client request received by CDN **until the last response byte send from CDN to client**. |Endpoint, client country, client region, HTTP status, HTTP status group |
-| RequestSize | The number of bytes sent as requests from clients to AFD. | Endpoint, client country, client region, HTTP status, HTTP status group |
-| 4XX % ErrorRate | The percentage of all the client requests for which the response status code is 4XX. | Endpoint, Client Country, Client Region |
-| 5XX % ErrorRate | The percentage of all the client requests for which the response status code is 5XX. | Endpoint, Client Country, Client Region |
-| OriginRequestCount | The number of requests sent from AFD to origin | Endpoint, Origin, HTTP status, HTTP status group |
-| OriginLatency | The time calculated from when the request was sent by AFD edge to the backend until AFD received the last response byte from the backend. | Endpoint, Origin |
-| OriginHealth% | The percentage of successful health probes from AFD to origin.| Origin, Origin Group |
-| WAF request count | Matched WAF request. | Action, rule name, Policy Name |
-
-> [!NOTE]
-> If a request to the the origin timeout, the value for HttpStatusCode dimension will be **0**.
->
--
-## Access Metrics in Azure portal
-
-1. From the Azure portal menu, select **All Resources** >> **\<your-AFD-profile>**.
-
-2. Under **Monitoring**, select **Metrics**:
-
-3. In **Metrics**, select the metric to add:
+1. In **Metrics**, select the metric to add:
:::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-1.png" alt-text="Screenshot of metrics page." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-1-expanded.png":::
-4. Select **Add filter** to add a filter:
+1. Select **Add filter** to add a filter:
:::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-2.png" alt-text="Screenshot of adding filters to metrics." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-2-expanded.png":::
-5. Select **Apply splitting** to split data by different dimensions:
+1. Select **Apply splitting** to split data by different dimensions:
:::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-4.png" alt-text="Screenshot of adding dimensions to metrics." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-4-expanded.png":::
-6. Select **New chart** to add a new chart:
+1. Select **New chart** to add a new chart:
+
+## Configure alerts in the Azure portal
-## Configure Alerts in Azure portal
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile.
-1. Set up alerts on Azure Front Door Standard/Premium (Preview) by selecting **Monitoring** >> **Alerts**.
+1. Under **Monitoring**, select **Alerts**.
1. Select **New alert rule** for any of the metrics listed in the Metrics section.
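Metric alerts are configured in the portal as shown above. If you also route access logs to a Log Analytics workspace, a query like the following can serve as a rough cross-check of the 4XXErrorRate and 5XXErrorRate metrics. This is a sketch under assumptions only: the `AzureDiagnostics` table, category value, and `httpStatusCode_d` column name may differ in your workspace.

```kusto
// Sketch: approximate 4XX and 5XX error rates from access logs routed to Log Analytics.
// Column names and type suffixes are assumptions; verify them in your workspace.
AzureDiagnostics
| where Category == "FrontDoorAccessLog"
| where TimeGenerated > ago(1h)
| summarize
    total = count(),
    requests4xx = countif(httpStatusCode_d between (400 .. 499)),
    requests5xx = countif(httpStatusCode_d between (500 .. 599))
| extend errorRate4xx = round(100.0 * requests4xx / total, 2),
         errorRate5xx = round(100.0 * requests5xx / total, 2)
```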
frontdoor How To Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-reports.md
Previously updated : 03/20/2022 Last updated : 02/23/2023 # Azure Front Door reports
-Azure Front Door analytics reports provide a built-in and all-around view of how your Azure Front Door behaves along with associated Web Application Firewall metrics. You can also take advantage of Access Logs to do further troubleshooting and debugging. Azure Front Door Analytics reports include traffic reports and security reports.
+Azure Front Door analytics reports provide a built-in, all-around view of how your Azure Front Door profile behaves, along with associated web application firewall (WAF) metrics. You can also take advantage of [Azure Front Door's logs](../front-door-diagnostics.md?pivot=front-door-standard-premium) to do further troubleshooting and debugging.
-| Reports | Details |
+The built-in reports include information about your traffic and your application's security. Azure Front Door provides traffic reports and security reports.
+
+| Traffic report | Details |
|||
-| Overview of key metrics | Shows overall data that got sent from Azure Front Door edges to clients<br/>- Peak bandwidth<br/>- Requests <br/>- Cache hit ratio<br/> - Total latency<br/>- 5XX error rate |
-| Traffic by Domain | - Provides an overview of all the domains under the profile<br/>- Breakdown of data transferred out from AFD edge to client<br/>- Total requests<br/>- 3XX/4XX/5XX response code by domains |
-| Traffic by Location | - Shows a map view of request and usage by top countries/regions<br/>- Trend view of top countries/regions |
-| Usage | - Displays data transfer out from Azure Front Door edge to clients<br/>- Data transfer out from origin to AFD edge<br/>- Bandwidth from AFD edge to clients<br/>- Bandwidth from origin to AFD edge<br/>- Requests<br/>- Total latency<br/>- Request count trend by HTTP status code |
-| Caching | - Shows cache hit ratio by request count<br/>- Trend view of hit and miss requests |
-| Top URL | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the most requested 50 assets. |
-| Top Referrer | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the top 50 referrers that generate traffic. |
-| Top User Agent | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the top 50 user agents that were used to request content. |
-
-| Security reports | Details |
+| [Key metrics in all reports](#key-metrics-included-in-all-reports) | Shows overall data sent from Azure Front Door edge points of presence (PoPs) to clients, including:<ul><li>Peak bandwidth</li><li>Requests</li><li>Cache hit ratio</li><li>Total latency</li><li>5XX error rate</li></ul> |
+| [Traffic by domain](#traffic-by-domain-report) | Provides an overview of all the domains within your Azure Front Door profile:<ul><li>Breakdown of data transferred out from the Azure Front Door edge to the client</li><li>Total requests</li><li>3XX/4XX/5XX response code by domains</li></ul> |
+| [Traffic by location](#traffic-by-location-report) | <ul><li>Shows a map view of request and usage by top countries/regions<br/></li><li>Trend view of top countries/regions</li></ul> |
+| [Usage](#usage-report) | <ul><li>Data transfer out from Azure Front Door edge to clients<br/></li><li>Data transfer out from origin to Azure Front Door edge<br/></li><li>Bandwidth from Azure Front Door edge to clients<br/></li><li>Bandwidth from origin to Azure Front Door edge<br/></li><li>Requests<br/></li><li>Total latency<br/></li><li>Request count trend by HTTP status code</li></ul> |
+| [Caching](#caching-report) | <ul><li>Shows cache hit ratio by request count<br/></li><li>Trend view of hit and miss requests</li></ul> |
+| [Top URL](#top-url-report) | <ul><li>Shows request count <br/></li><li>Data transferred <br/></li><li>Cache hit ratio <br/></li><li>Response status code distribution for the most requested 50 assets</li></ul> |
+| [Top referrer](#top-referrer-report) | <ul><li>Shows request count <br/></li><li>Data transferred <br/></li><li>Cache hit ratio <br/></li><li>Response status code distribution for the top 50 referrers that generate traffic</li></ul> |
+| [Top user agent](#top-user-agent-report) | <ul><li>Shows request count <br/></li><li>Data transferred <br/></li><li>Cache hit ratio <br/></li><li>Response status code distribution for the top 50 user agents that were used to request content</li></ul> |
+
+| Security report | Details |
|||
-| Overview of key metrics | - Shows matched WAF rules<br/>- Matched OWASP rules<br/>- Matched BOT rules<br/>- Matched custom rules |
-| Metrics by dimensions | - Breakdown of matched WAF rules trend by action<br/>- Doughnut chart of events by Rule Set Type and event by rule group<br/>- Break down list of top events by rule ID, countries/regions, IP address, URL, and user agent |
+| Overview of key metrics | <ul><li>Shows matched WAF rules<br/></li><li>Matched OWASP rules<br/></li><li>Matched bot protection rules<br/></li><li>Matched custom rules</li></ul> |
+| Metrics by dimensions | <ul><li>Breakdown of matched WAF rules trend by action<br/></li><li>Doughnut chart of events by Rule Set Type and event by rule group<br/></li><li>Break down list of top events by rule ID, countries/regions, IP address, URL, and user agent</li></ul> |
> [!NOTE]
-> Security reports is only available with Azure Front Door Premium tier.
+> Security reports are only available when you use the Azure Front Door premium tier.
+
+Reports are free of charge. Most reports are based on access log data, but you don't need to enable access logs or make any configuration changes to use the reports.
+
+## How to access reports
-Most of the reports are based on access logs and are offered free of charge to customers on Azure Front Door. Customer doesnΓÇÖt have to enable access logs or do any configuration to view these reports. Reports are accessible through portal and API. CSV download is also supported.
+Reports are accessible through the Azure portal and through the Azure Resource Manager API. You can also [download reports as comma-separated values (CSV) files](#export-reports-in-csv-format).
Reports support any selected date range from the previous 90 days, with data points every 5 minutes, every hour, or every day, based on the date range selected. Normally, you can view data with a delay of less than an hour, and occasionally with a delay of up to a few hours.
-## Access Reports using the Azure portal
+### Access reports by using the Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com) and select your Azure Front Door Standard/Premium profile.
-1. In the navigation pane, select **Reports or Security** under *Analytics*.
+1. In the navigation pane, select **Reports** or **Security** under *Analytics*.
:::image type="content" source="../media/how-to-reports/front-door-reports-landing-page.png" alt-text="Screenshot of Reports landing page":::
-1. There are seven tabs for different dimensions, select the dimension of interest.
+1. Select the report you want to view.
* Traffic by domain
* Usage
Reports support any selected date range from the previous 90 days. With data poi
* Top referrer
* Top user agent
-1. After choosing the dimension, you can select different filters.
+1. After choosing the report, you can select different filters.
- 1. **Show data for** - Select the date range for which you want to view traffic by domain. Available ranges are:
+ - **Show data for:** Select the date range for which you want to view traffic by domain. Available ranges are:
* Last 24 hours
* Last 7 days
Reports support any selected date range from the previous 90 days. With data poi
* Last month
* Custom date
- By default, data is shown for last seven days. For tabs with line charts, the data granularity goes with the date ranges you selected as the default behavior.
+ By default, data is shown for the last seven days. For reports with line charts, the default data granularity depends on the date range you selected:
- * 5 minutes - one data point every 5 minutes for date ranges less than or equal 24 hours.
- * By hour ΓÇô one data every hour for date ranges between 24 hours to 30 days
- * By day ΓÇô one data per day for date ranges bigger than 30 days.
+ * 5 minutes - one data point every 5 minutes for date ranges less than or equal to 24 hours. This granularity level can be used for date ranges that are 14 days or shorter.
+ * By hour - one data point every hour for date ranges between 24 hours and 30 days.
+ * By day - one data point per day for date ranges longer than 30 days.
- You can always use Aggregation to change the default aggregation granularity. Note: 5 minutes doesnΓÇÖt work for data range longer than 14 days.
+ Select **Aggregation** to change the default aggregation granularity.
- 1. **Location** - Select single or multiple client locations by countries/regions. Countries/regions are grouped into six regions: North America, Asia, Europe, Africa, Oceania, and South America. Refer to [countries/regions mapping](https://en.wikipedia.org/wiki/Subregion). By default, all countries are selected.
+ - **Location:** Select one or more countries/regions to filter by the client locations. Countries/regions are grouped into six regions: North America, Asia, Europe, Africa, Oceania, and South America. Refer to [countries/regions mapping](https://en.wikipedia.org/wiki/Subregion). By default, all countries are selected.
:::image type="content" source="../media/how-to-reports/front-door-reports-dimension-locations.png" alt-text="Screenshot of Reports for location dimension.":::
- 1. **Protocol** - Select either HTTP or HTTPS to view traffic data.
+ - **Protocol:** Select either HTTP or HTTPS to view traffic data for the selected protocol.
:::image type="content" source="../media/how-to-reports/front-door-reports-dimension-protocol.png" alt-text="Screenshot of Reports for protocol dimension.":::
- 1. **Domains** - Select single or multi Endpoints or Custom Domains. By default, all endpoints and custom domains are selected.
+ - **Domains:** Select one or more endpoints or custom domains. By default, all endpoints and custom domains are selected.
- * If you delete an endpoint or a custom domain in one profile and then recreate the same endpoint or domain in another profile. The endpoint will be considered a second endpoint.
- * If you're viewing reports by custom domain - when you delete one custom domain and bind it to a different endpoint. They'll be treated as one custom domain. If view by endpoint - they'll be treated as separate items.
+ * If you delete an endpoint or a custom domain in one profile and then recreate the same endpoint or domain in another profile, the report counts the new endpoint as a second endpoint.
+ * If you delete a custom domain and bind it to a different endpoint, the behavior depends on how you view the report. If you view the report by custom domain then they'll be treated as one custom domain. If you view the report by endpoint, they'll be treated as separate items.
:::image type="content" source="../media/how-to-reports/front-door-reports-dimension-domain.png" alt-text="Screenshot of Reports for domain dimension.":::
Reports support any selected date range from the previous 90 days. With data poi
:::image type="content" source="../media/how-to-reports/front-door-reports-download-csv.png" alt-text="Screenshot of download csv file for Reports.":::
-### Key metrics for all reports
+### Export reports in CSV format
-| Metric | Description |
+You can download any of the Azure Front Door reports as a CSV file. Every CSV report includes the following general information:
+
+| Value | Description |
|||
-| Data Transferred | Shows data transferred from AFD edge POPs to client for the selected time frame, client locations, domains, and protocols. |
-| Peak Bandwidth | Peak bandwidth usage in bits per seconds from Azure Front Door edge POPs to client for the selected time frame, client locations, domains, and protocols. |
-| Total Requests | The number of requests that AFD edge POPs responded to client for the selected time frame, client locations, domains, and protocols. |
-| Cache Hit Ratio | The percentage of all the cacheable requests for which AFD served the contents from its edge caches for the selected time frame, client locations, domains, and protocols. |
-| 5XX Error Rate | The percentage of requests for which the HTTP status code to client was a 5XX for the selected time frame, client locations, domains, and protocols. |
-| Total Latency | Average latency of all the requests for the selected time frame, client locations, domains, and protocols. The latency for each request is measured as the total time of when the client request gets received by Azure Front Door until the last response byte sent from Azure Front Door to client. |
+| Report | The name of the report. |
+| Domains | The list of the endpoints or custom domains for the report. |
+| StartDateUTC | The start of the date range for which you generated the report, in Coordinated Universal Time (UTC). |
+| EndDateUTC | The end of the date range for which you generated the report, in Coordinated Universal Time (UTC). |
+| GeneratedTimeUTC | The date and time when you generated the report, in Coordinated Universal Time (UTC). |
+| Location | The list of the countries/regions where the client requests originated. The value is **All** by default. Not applicable to the *Security* report. |
+| Protocol | The protocol of the request, which is either HTTP or HTTPS. Not applicable to *Top URL*, *Traffic by user agent*, and *Security* reports. |
+| Aggregation | The granularity of data aggregation in each row: every 5 minutes, every hour, or every day. Not applicable to the *Traffic by domain*, *Top URL*, *Traffic by user agent*, and *Security* reports. |
-## Traffic by Domain
+Each report also includes its own variables. Select a report to view the variables that the report includes.
-Traffic by Domain provides a grid view of all the domains under this Azure Front Door profile. In this report you can view:
-* Requests
-* Data transferred out from Azure Front Door to client
-* Requests with status code (3XX, 4Xx and 5XX) of each domain
+# [Traffic by domain](#tab/traffic-by-domain)
-Domains include Endpoint and Custom Domains, as explained in the Accessing Report session.
+The *Traffic by domain* report includes these fields:
-You can go to other tabs to investigate further or view access log for more information if you find the metrics below your expectation.
+* Domain
+* Total Request
+* Cache Hit Ratio
+* 3XX Requests
+* 4XX Requests
+* 5XX Requests
+* ByteTransferredFromEdgeToClient
+# [Traffic by location](#tab/traffic-by-location)
+The *Traffic by location* report includes these fields:
-## Usage
+* Location
+* TotalRequests
+* Request%
+* BytesTransferredFromEdgeToClient
-This report shows the trends of traffic and response status code by different dimensions, including:
+# [Usage](#tab/usage)
-* Data Transferred from edge to client and from origin to edge in line chart.
+There are three reports in the usage report's CSV file: one for HTTP protocol, one for HTTPS protocol, and one for HTTP status codes.
-* Data Transferred from edge to client by protocol in line chart.
+The *Usage* report's HTTP and HTTPS data sets include these fields:
-* Number of requests from edge to clients in line chart.
+* Time
+* Protocol
+* DataTransferred(bytes)
+* TotalRequest
+* bpsFromEdgeToClient
+* 2XXRequest
+* 3XXRequest
+* 4XXRequest
+* 5XXRequest
-* Number of requests from edge to clients by protocol, HTTP and HTTPS, in line chart.
+The *Usage* report's HTTP status codes data set includes these fields:
-* Bandwidth from edge to client in line chart.
+* Time
+* DataTransferred(bytes)
+* TotalRequest
+* bpsFromEdgeToClient
+* 2XXRequest
+* 3XXRequest
+* 4XXRequest
+* 5XXRequest
-* Total latency, which measures the total time from the client request received by Front Door until the last response byte sent from Front Door to client.
+# [Caching](#tab/caching)
-* Number of requests from edge to clients by HTTP status code, in line chart. Every request generates an HTTP status code. HTTP status code appears in HTTPStatusCode in Raw Log. The status code describes how CDN edge handled the request. For example, a 2xx status code indicates that the request got successfully served to a client. While a 4xx status code indicates that an error occurred. For more information about HTTP status codes, see List of HTTP status codes.
+The *Caching* report includes these fields:
-* Number of requests from the edge to clients by HTTP status code. Percentage of requests by HTTP status code among all requests in grid.
+* Time
+* CacheHitRatio
+* HitRequests
+* MissRequests
+# [Top URL](#tab/top-url)
-## Traffic by Location
+The *Top URL* report includes these fields:
-This report displays the top 50 locations by the countries/regions of the visitors that access your asset the most. The report also provides a breakdown of metrics by countries/regions and gives you an overall view of countries/regions
- where the most traffic gets generated. Lastly you can see which countries/regions is having higher cache hit ratio or 4XX/5XX error codes.
+* URL
+* TotalRequests
+* Request%
+* DataTransferred(bytes)
+* DataTransferred%
+# [Top user agent](#tab/topuser-agent)
-The following are included in the reports:
+The *Top user agent* report includes these fields:
-* A world map view of the top 50 countries/regions by data transferred out or requests of your choice.
-* Two line charts trend view of the top five countries/regions by data transferred out and requests of your choice.
-* A grid of the top countries/regions with corresponding data transferred out from AFD to clients, data transferred out % of all countries/regions, requests, request % among all countries/regions, cache hit ratio, 4XX response code and 5XX response code.
+* UserAgent
+* TotalRequests
+* Request%
+* DataTransferred(bytes)
+* DataTransferred%
-## Caching
+# [Security](#tab/security)
-Caching reports provides a chart view of cache hits/misses and cache hit ratio based on requests. These key metrics explain how CDN is caching contents since the fastest performance results from cache hits. You can optimize data delivery speeds by minimizing cache misses. This report includes:
+The *Security* report includes seven tables:
-* Cache hit and miss count trend, in line chart.
+* Time
+* Rule ID
+* Countries/regions
+* IP address
+* URL
+* Hostname
+* User agent
-* Cache hit ratio in line chart.
+All of the tables in the *Security* report include the following fields:
-Cache Hits/Misses describe the request number cache hits and cache misses for client requests.
+* BlockedRequests
+* AllowedRequests
+* LoggedRequests
+* RedirectedRequests
+* OWASPRuleRequests
+* CustomRuleRequests
+* BotRequests
-* Hits: the client requests that are served directly from Azure CDN edge servers. Refers to those requests whose values for CacheStatus in raw logs are HIT, PARTIAL_HIT, or REMOTE HIT.
+
-* Miss: the client requests that are served by Azure CDN edge servers fetching contents from origin. Refers to those requests whose values for the field CacheStatus in raw logs are MISS.
+## Key metrics included in all reports
-**Cache hit ratio** describes the percentage of cached requests that are served from edge directly. The formula of cache hit ratio is: `(PARTIAL_HIT +REMOTE_HIT+HIT/ (HIT + MISS + PARTIAL_HIT + REMOTE_HIT)*100%`.
+The following metrics are used within the reports. A sample log query sketch follows the table.
-This report takes caching scenarios into consideration and requests that met the following requirements are taken into calculation.
+| Metric | Description |
+|||
+| Data Transferred | Shows data transferred from Azure Front Door edge PoPs to clients for the selected time frame, client locations, domains, and protocols. |
+| Peak Bandwidth | Peak bandwidth usage in bits per second from Azure Front Door edge PoPs to clients for the selected time frame, client locations, domains, and protocols. |
+| Total Requests | The number of client requests that Azure Front Door edge PoPs responded to for the selected time frame, client locations, domains, and protocols. |
+| Cache Hit Ratio | The percentage of all the cacheable requests for which Azure Front Door served the contents from its edge caches for the selected time frame, client locations, domains, and protocols. |
+| 5XX Error Rate | The percentage of requests for which the HTTP status code to the client was a 5XX for the selected time frame, client locations, domains, and protocols. |
+| Total Latency | Average latency of all the requests for the selected time frame, client locations, domains, and protocols. The latency for each request is measured as the total time from when Azure Front Door receives the client request until it sends the last response byte to the client. |
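As a hedged illustration, the following Kusto sketch computes rough equivalents of some of these key metrics from access logs routed to a Log Analytics workspace. It isn't the calculation the built-in reports use, and the column names (`responseBytes_d`, `timeTaken_s`, `httpStatusCode_d`) are assumptions to verify against your workspace schema.

```kusto
// Sketch: rough equivalents of some key report metrics, computed from access logs.
AzureDiagnostics
| where Category == "FrontDoorAccessLog"
| where TimeGenerated > ago(1d)
| summarize
    totalRequests   = count(),
    dataTransferred = sum(todouble(responseBytes_d)),
    avgLatencySec   = avg(todouble(timeTaken_s)),
    requests5xx     = countif(httpStatusCode_d >= 500)
| extend errorRate5xx = round(100.0 * requests5xx / totalRequests, 2)
```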
-* The requested content was cached on a Front Door PoP.
+## Traffic by domain report
-* Partial cached contents for object chunking.
+The **traffic by domain** report provides a grid view of all the domains under this Azure Front Door profile.
-It excludes all of the following cases:
-* Requests that are denied because of Rules Set.
+In this report you can view:
-* Requests that contain matching Rules Set that has been set to disabled cache.
+* Request counts
+* Data transferred out from Azure Front Door to client
+* Requests with 3XX, 4XX, and 5XX status codes for each domain
-* Requests that are blocked by WAF.
+Domains include endpoint domains and custom domains.
-* Origin response headers indicate that they shouldn't be cached. For example, Cache-Control: private, Cache-Control: no-cache, or Pragma: no-cache headers will prevent an asset from being cached.
+You can go to other reports to investigate further, or view the access logs for more information, if the metrics don't meet your expectations. A sample log query sketch follows.
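The following Kusto sketch produces a similar per-domain breakdown from access logs routed to Log Analytics. The built-in report doesn't require this query; it's offered only as a hedged starting point, and the column names are assumptions.

```kusto
// Sketch: per-domain request counts and status code breakdown from access logs.
AzureDiagnostics
| where Category == "FrontDoorAccessLog"
| where TimeGenerated > ago(7d)
| summarize
    totalRequests = count(),
    requests3xx   = countif(httpStatusCode_d between (300 .. 399)),
    requests4xx   = countif(httpStatusCode_d between (400 .. 499)),
    requests5xx   = countif(httpStatusCode_d >= 500),
    bytesToClient = sum(todouble(responseBytes_d))
    by hostName_s
| order by totalRequests desc
```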
+## Usage report
-## Top URLs
+The **usage report** shows the trends of traffic and response status codes by various dimensions. A sample log query sketch follows the list of dimensions.
-Top URLs allow you to view the amount of traffic incurred over a particular endpoint or custom domain. You'll see data for the most requested 50 assets during any period in the past 90 days. Popular URLs will be displayed with the following values. User can sort URLs by request count, request %, data transferred and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected. URL refers to the value of RequestUri in access log.
+The dimensions included in the usage report are:
-* URL, refers to the full path of the requested asset in the format of `http(s)://contoso.com/https://docsupdatetracker.net/index.html/images/example.jpg`.
-* Request counts.
-* Request % of the total requests served by Azure Front Door.
-* Data transferred.
-* Data transferred %.
-* Cache Hit Ratio %
-* Requests with response code as 4XX
-* Requests with response code as 5XX
+* Data transferred from edge to client and from origin to edge, in a line chart.
+* Data transferred from edge to client by protocol, in a line chart.
+* Number of requests from edge to clients, in a line chart.
+* Number of requests from edge to clients by protocol (HTTP and HTTPS), in a line chart.
+* Bandwidth from edge to client, in a line chart.
+* Total latency, which measures the total time from the client request received by Azure Front Door until the last response byte sent from Azure Front Door to the client, in a line chart.
+* Number of requests from edge to clients by HTTP status code, in a line chart. Every request generates an HTTP status code. The HTTP status code appears as HTTPStatusCode in the raw access log. The status code describes how the Azure Front Door edge PoP handled the request. For example, a 2XX status code indicates that the request was successfully served to a client, while a 4XX status code indicates that an error occurred.
+* Number of requests from the edge to clients by HTTP status code, in a line chart. The percentage of requests by HTTP status code is shown in a grid.
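If you want to reproduce a similar hourly trend from access logs in Log Analytics, a sketch like the following can help. The status-class grouping and the column names are assumptions; the built-in usage report doesn't depend on this query.

```kusto
// Sketch: hourly requests and bytes to client, grouped by HTTP status class (2XX, 3XX, ...).
AzureDiagnostics
| where Category == "FrontDoorAccessLog"
| where TimeGenerated > ago(7d)
| extend statusClass = strcat(substring(tostring(httpStatusCode_d), 0, 1), "XX")
| summarize requests = count(), bytesToClient = sum(todouble(responseBytes_d))
    by bin(TimeGenerated, 1h), statusClass
| order by TimeGenerated asc
```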
-> [!NOTE]
-> Top URLs may change over time and to get an accurate list of the top 50 URLs, Azure Front Door counts all your URL requests by hour and keep the running total over the course of a day. The URLs at the bottom of the 500 URLs may rise onto or drop off the list over the day, so the total number of these URLs are approximations.
->
-> The top 50 URLs may rise and fall in the list, but they rarely disappear from the list, so the numbers for top URLs are usually reliable. When a URL drops off the list and rise up again over a day, the number of request during the period when they are missing from the list is estimated based on the request number of the URL that appear in that period.
->
-> The same logic applies to Top User Agent.
+## Traffic by location report
-## Top Referrers
+The **traffic by location** report displays:
-Top Referrers allow customers to view the top 50 referrer that originated the most requests to the contents on a particular endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. Referrer may come from a search engine or other websites. If a user types a URL (for example, http(s)://contoso.com/https://docsupdatetracker.net/index.html) directly into the address line of a browser, the referrer for the requested is "Empty". Top referrers report includes the following values. You can sort by request count, request %, data transferred and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected.
+* The top 50 countries/regions of visitors that access your assets the most.
+* A breakdown of metrics by country/region that gives you an overall view of the countries/regions where the most traffic is generated.
+* The countries/regions that have higher cache hit ratios, and higher 4XX/5XX error code rates.
-* Referrer, the value of Referrer in raw logs
-* Request counts
-* Request % of total requests served by Azure CDN in the selected time period.
-* Data transferred
-* Data transferred %
-* Cache Hit Ratio %
-* Requests with response code as 4XX
-* Requests with response code as 5XX
+The following items are included in the reports:
-## Top User Agent
+* A world map view of the top 50 countries/regions by data transferred out or requests of your choice.
+* Two line charts showing a trend view of the top five countries/regions by data transferred out and requests of your choice.
+* A grid of the top countries/regions with corresponding data transferred out from Azure Front Door to clients, the percentage of data transferred out, the number of requests, the percentage of requests by the country/region, cache hit ratio, 4XX response code counts, and 5XX response code counts.
-This report allows you to have graphical and statistics view of the top 50 user agents that were used to request content. For example,
-* Mozilla/5.0 (Windows NT 10.0; WOW64)
-* AppleWebKit/537.36 (KHTML, like Gecko)
-* Chrome/86.0.4240.75
-* Safari/537.36.
+## Caching report
-A grid displays the request counts, request %, data transferred and data transferred, cache Hit Ratio %, requests with response code as 4XX and requests with response code as 5XX. User Agent refers to the value of UserAgent in access logs.
+The **caching report** provides a chart view of cache hits and misses, and the cache hit ratio, based on requests. Understanding how Azure Front Door caches your content helps you to improve your application's performance because cache hits give you the fastest performance. You can optimize data delivery speeds by minimizing cache misses.
-## Security Report
-This report allows you to have graphical and statistics view of WAF patterns by different dimensions.
+The caching report includes:
-| Dimensions | Description |
-|||
-| Overview metrics- Matched WAF rules | Requests that match custom WAF rules, managed WAF rules and bot manager. |
-| Overview metrics- Blocked Requests | The percentage of requests that are blocked by WAF rules among all the requests that matched WAF rules. |
-| Overview metrics- Matched Managed Rules | Four line-charts trend for requests that are Block, Log, Allow and Redirect. |
-| Overview metrics- Matched Custom Rule | Requests that match custom WAF rules. |
-| Overview metrics- Matched Bot Rule | Requests that match Bot Manager. |
-| WAF request trend by action | Four line-charts trend for requests that are Block, Log, Allow and Redirect. |
-| Events by Rule Type | Doughnut chart of the WAF requests distribution by Rule Type, e.g. Bot, custom rules and managed rules. |
-| Events by Rule Group | Doughnut chart of the WAF requests distribution by Rule Group. |
-| Requests by actions | A table of requests by actions, in descending order. |
-| Requests by top Rule IDs | A table of requests by top 50 rule IDs, in descending order. |
-| Requests by top countries/regions | A table of requests by top 50 countries/regions, in descending order. |
-| Requests by top client IPs | A table of requests by top 50 IPs, in descending order. |
-| Requests by top Request URL | A table of requests by top 50 URLs, in descending order. |
-| Request by top Hostnames | A table of requests by top 50 hostname, in descending order. |
-| Requests by top user agents | A table of requests by top 50 user agents, in descending order. |
+* Cache hit and miss count trend, in a line chart.
+* Cache hit ratio, in a line chart.
-## CSV format
+Cache hits/misses describe the number of client requests that result in cache hits and cache misses.
-You can download CSV files for different tabs in reports. This section describes the values in each CSV file.
+* Hits: the client requests that are served directly from Azure Front Door edge PoPs. Refers to those requests whose values for CacheStatus in the raw access logs are *HIT*, *PARTIAL_HIT*, or *REMOTE_HIT*.
+* Miss: the client requests that are served by Azure Front Door edge PoPs fetching contents from the origin. Refers to those requests whose values for the field CacheStatus in the raw access logs are *MISS*.
-### General information about the CSV report
+**Cache hit ratio** describes the percentage of cached requests that are served from the edge directly. The formula for the cache hit ratio is: `(PARTIAL_HIT + REMOTE_HIT + HIT) / (HIT + MISS + PARTIAL_HIT + REMOTE_HIT) * 100%`. A sample log query that computes this ratio follows the exclusion list below.
-Every CSV report includes some general information and the information is available in all CSV files. with variables based on the report you download.
+Requests that meet the following requirements are included in the calculation:
+* The requested content was cached on an Azure Front Door PoP.
+* Partial cached contents for [object chunking](../front-door-caching.md#delivery-of-large-files).
-| Value | Description |
-|||
-| Report | The name of the report. |
-| Domains | The list of the endpoints or custom domains for the report. |
-| StartDateUTC | The start of the date range for which you generated the report, in Coordinated Universal Time (UTC) |
-| EndDateUTC | The end of the date range for which you generated the report, in Coordinated Universal Time (UTC) |
-| GeneratedTimeUTC | The date and time when you generated the report, in Coordinated Universal Time (UTC) |
-| Location | The list of the countries/regions where the client requests originated. The value is ALL by default. Not applicable to Security report. |
-| Protocol | The protocol of the request, HTTP, or HTTPs. Not applicable to Top URL and Traffic by User Agent in Reports and Security report. |
-| Aggregation | The granularity of data aggregation in each row, every 5 minutes, every hour, and every day. Not applicable to Traffic by Domain, Top URL, and Traffic by User Agent in Reports and Security report. |
+It excludes all of the following cases:
-### Data in Traffic by Domain
+* Requests that are denied because of a Rule Set.
+* Requests that match a Rule Set that has been configured to disable caching.
+* Requests that are blocked by the Azure Front Door WAF.
+* Requests when the origin response headers indicate that they shouldn't be cached. For example, requests with `Cache-Control: private`, `Cache-Control: no-cache`, or `Pragma: no-cache` headers prevent the response from being cached.
-* Domain
-* Total Request
-* Cache Hit Ratio
-* 3XX Requests
-* 4XX Requests
-* 5XX Requests
-* ByteTransferredFromEdgeToClient
+## Top URL report
-### Data in Traffic by Location
+The **top URL report** allows you to view the amount of traffic incurred through a particular endpoint or custom domain. You'll see data for the 50 most requested assets during any period in the past 90 days.
-* Location
-* TotalRequests
-* Request%
-* BytesTransferredFromEdgeToClient
-### Data in Usage
+Popular URLs will be displayed with the following values:
-There are three reports in this CSV file. One for HTTP protocol, one for HTTPS protocol and one for HTTP Status Code.
+* URL, which refers to the full path of the requested asset in the format of `http(s)://contoso.com/index.html/images/example.jpg`. URL refers to the value of the RequestUri field in the raw access log.
+* Request counts.
+* Request counts as a percentage of the total requests served by Azure Front Door.
+* Data transferred.
+* Data transferred percentage.
+* Cache hit ratio percentage.
+* Requests with response codes of 4XX.
+* Requests with response codes of 5XX.
-Reports for HTTP and HTTPs share the same data set.
+You can sort URLs by request count, request count percentage, data transferred, and data transferred percentage. All the metrics are aggregated by hour and might vary based on the time frame selected.
-* Time
-* Protocol
-* DataTransferred(bytes)
-* TotalRequest
-* bpsFromEdgeToClient
-* 2XXRequest
-* 3XXRequest
-* 4XXRequest
-* 5XXRequest
+> [!NOTE]
+> Top URLs might change over time. To get an accurate list of the top 50 URLs, Azure Front Door counts all your URL requests by hour and keeps the running total over the course of a day. The URLs at the bottom of the list of 50 may rise onto or drop off the list over the day, so the totals for these URLs are approximations.
+>
+> The top 50 URLs may rise and fall in the list, but they rarely disappear from it, so the numbers for top URLs are usually reliable. When a URL drops off the list and rises back onto it during the day, the number of requests during the period when it's missing from the list is estimated based on the request counts of the URLs that appear in that period.
-Report for HTTP Status Code.
+## Top referrer report
-* Time
-* DataTransferred(bytes)
-* TotalRequest
-* bpsFromEdgeToClient
-* 2XXRequest
-* 3XXRequest
-* 4XXRequest
-* 5XXRequest
+The **top referrer** report shows you the top 50 referrers to a particular Azure Front Door endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. A referrer may come from a search engine or another website. If a user types a URL (for example, `https://contoso.com/index.html`) directly into the address bar of a browser, the referrer for the request is *Empty*.
-### Data in Caching
-* Time
-* CacheHitRatio
-* HitRequests
-* MissRequests
+The top referrer report includes the following values.
-### Data in Top URL
+* Referrer, which is the value of the Referrer field in the raw access log.
+* Request counts.
+* Request count as a percentage of total requests served by Azure Front Door in the selected time period.
+* Data transferred.
+* Data transferred percentage.
+* Cache hit ratio percentage.
+* Requests with response code as 4XX.
+* Requests with response code as 5XX.
-* URL
-* TotalRequests
-* Request%
-* DataTransferred(bytes)
-* DataTransferred%
+You can sort by request count, request %, data transferred, and data transferred %. All the metrics are aggregated by hour and might vary based on the time frame selected.
-### Data in User Agent
+## Top user agent report
-* UserAgent
-* TotalRequests
-* Request%
-* DataTransferred(bytes)
-* DataTransferred%
+The **top user agent** report shows graphical and statistics views of the top 50 user agents that were used to request content. The following list shows example user agents:
+* Mozilla/5.0 (Windows NT 10.0; WOW64)
+* AppleWebKit/537.36 (KHTML, like Gecko)
+* Chrome/86.0.4240.75
+* Safari/537.36.
-### Security Report
+A grid displays the request counts, request %, data transferred, data transferred %, cache hit ratio %, requests with response code as 4XX, and requests with response code as 5XX. User Agent refers to the value of UserAgent in the access logs.
-There are seven tables all with the same fields below.
+> [!NOTE]
+> Top user agents might change over time. To get an accurate list of the top 50 user agents, Azure Front Door counts all your user agent requests by hour and keeps the running total over the course of a day. The user agents at the bottom of the list of 50 may rise onto or drop off the list over the day, so the totals for these user agents are approximations.
+>
+> The top 50 user agents may rise and fall in the list, but they rarely disappear from it, so the numbers for top user agents are usually reliable. When a user agent drops off the list and rises back onto it during the day, the number of requests during the period when it's missing from the list is estimated based on the request counts of the user agents that appear in that period.
-* BlockedRequests
-* AllowedRequests
-* LoggedRequests
-* RedirectedRequests
-* OWASPRuleRequests
-* CustomRuleRequests
-* BotRequests
+## Security report
-The seven tables are for time, rule ID, countries/regions, IP address, URL, hostname, user agent.
+The **security report** provides graphical and statistics views of WAF activity.
+
+| Dimensions | Description |
+|||
+| Overview metrics - Matched WAF rules | Requests that match custom WAF rules, managed WAF rules and bot protection rules. |
+| Overview metrics - Blocked Requests | The percentage of requests that are blocked by WAF rules among all the requests that matched WAF rules. |
+| Overview metrics - Matched Managed Rules | Requests that match managed WAF rules. |
+| Overview metrics - Matched Custom Rule | Requests that match custom WAF rules. |
+| Overview metrics - Matched Bot Rule | Requests that match bot protection rules. |
+| WAF request trend by action | Four line charts showing the trend of requests by action: *Block*, *Log*, *Allow*, and *Redirect*. |
+| Events by Rule Type | Doughnut chart of the WAF requests distribution by rule type. Rule types include bot protection rules, custom rules, and managed rules. |
+| Events by Rule Group | Doughnut chart of the WAF requests distribution by rule group. |
+| Requests by actions | A table of requests by actions, in descending order. |
+| Requests by top Rule IDs | A table of requests by top 50 rule IDs, in descending order. |
+| Requests by top countries/regions | A table of requests by top 50 countries/regions, in descending order. |
+| Requests by top client IPs | A table of requests by top 50 IPs, in descending order. |
+| Requests by top Request URL | A table of requests by top 50 URLs, in descending order. |
+| Requests by top hostnames | A table of requests by top 50 hostnames, in descending order. |
+| Requests by top user agents | A table of requests by top 50 user agents, in descending order. |
## Next steps
frontdoor Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/web-application-firewall.md
Azure Web Application Firewall (WAF) on Azure Front Door provides centralized pr
## Policy settings
-A Web Application Firewall (WAF) policy allows you to control access to your web applications by using a set of custom and managed rules. You can change the state of the policy or configure a specific mode type for the policy. Depending on policy level settings you can choose to either actively inspect incoming requests, monitor only, or to monitor and take actions against requests that match a rule. For more information, see [WAF policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md).
+A Web Application Firewall (WAF) policy allows you to control access to your web applications by using a set of custom and managed rules. You can change the state of the policy or configure a specific mode type for the policy. Depending on policy level settings you can choose to either actively inspect incoming requests, monitor only, or to monitor and take actions against requests that match a rule. You can also configure the WAF to only detect threats without blocking them, which is useful when you first enable the WAF. After evaluating how the WAF works with your application, you can reconfigure the WAF settings and enable the WAF in prevention mode. For more information, see [WAF policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md).
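+
+For example, here's a minimal Azure PowerShell sketch of that detection-first workflow. The policy and resource group names are placeholders, and it assumes the Az.FrontDoor module and a Front Door Premium profile; it's an illustration of the flow, not the full WAF setup:
+
+```powershell
+# Create the WAF policy in detection mode so matching requests are logged but not blocked.
+New-AzFrontDoorWafPolicy -Name "ExampleWafPolicy" -ResourceGroupName "ExampleRG" `
+    -Sku "Premium_AzureFrontDoor" -EnabledState "Enabled" -Mode "Detection"
+
+# After reviewing the WAF logs, switch the same policy to prevention mode to start blocking.
+Update-AzFrontDoorWafPolicy -Name "ExampleWafPolicy" -ResourceGroupName "ExampleRG" -Mode "Prevention"
+```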
## Managed rules
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md
You can build a flexible structure of management groups and subscriptions to org
into a hierarchy for unified policy and access management. The following diagram shows an example of creating a hierarchy for governance using management groups. Diagram of a root management group holding both management groups and subscriptions. Some child management groups hold management groups, some hold subscriptions, and some hold both. One of the examples in the sample hierarchy is four levels of management groups with the child level being all subscriptions. :::image-end::: You can create a hierarchy that applies a policy, for example, which limits VM locations to the
-West US region in the management group called "Production". This policy will inherit onto all the Enterprise
+West US region in the management group called "Corp". This policy will be inherited by all the Enterprise
Agreement (EA) subscriptions that are descendants of that management group and will apply to all VMs under those subscriptions. This security policy cannot be altered by the resource or subscription owner allowing for improved governance.
when trying to separate the assignment from its definition.
For example, let's look at a small section of a hierarchy for a visual.
- The diagram focuses on the root management group with child I T and Marketing management groups. The I T management group has a single child management group named Production while the Marketing management group has two Free Trial child subscriptions.
+ The diagram focuses on the root management group with child Landing zones and Sandbox management groups. The Landing zones management group has two child management groups named Corp and Online while the Sandbox management group has two child subscriptions.
:::image-end:::
-Let's say there's a custom role defined on the Marketing management group. That custom role is then
-assigned on the two free trial subscriptions.
+Let's say there's a custom role defined on the Sandbox management group. That custom role is then
+assigned on the two Sandbox subscriptions.
-If we try to move one of those subscriptions to be a child of the Production management group, this
-move would break the path from subscription role assignment to the Marketing management group role
+If we try to move one of those subscriptions to be a child of the Corp management group, this
+move would break the path from subscription role assignment to the Sandbox management group role
definition. In this scenario, you'll receive an error saying the move isn't allowed since it will break this relationship.
There are a couple different options to fix this scenario:
MG. - Add the subscription to the role definition's assignable scope. - Change the assignable scope within the role definition. In the above example, you can update the
- assignable scopes from Marketing to the root management group so that the definition can be reached by
+ assignable scopes from Sandbox to the root management group so that the definition can be reached by
both branches of the hierarchy. - Create another custom role that is defined in the other branch. This new role requires the role assignment to be changed on the subscription also.
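+
+As a sketch of the second option above (widening the role definition's assignable scopes), the following Azure PowerShell commands show the general shape. The role name and management group ID are placeholders rather than values from this example hierarchy:
+
+```powershell
+# Fetch the existing custom role definition (hypothetical name).
+$role = Get-AzRoleDefinition -Name "Example Custom Role"
+
+# Replace the assignable scopes with the root management group so both branches can reach it.
+$role.AssignableScopes.Clear()
+$role.AssignableScopes.Add("/providers/Microsoft.Management/managementGroups/<root-management-group-id>")
+
+# Save the updated role definition.
+Set-AzRoleDefinition -Role $role
+```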
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
if ($InitiativeRoleDefinitionIds.Count -gt 0) {
The new managed identity must complete replication through Azure Active Directory before it can be granted the needed roles. Once replication is complete, the roles specified in the policy definition's **roleDefinitionIds** should be granted to the managed identity.
-Access the roles specified in the policy definition using the [az policy definition show](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command, then iterate over each **roleDefinitionId** to create the role assignment using the [az role assignment create](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command.
+Access the roles specified in the policy definition using the [az policy definition show](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command, then iterate over each **roleDefinitionId** to create the role assignment using the [az role assignment create](/cli/azure/role/assignment?view=azure-cli-latest#az-role-assignment-create&preserve-view=true) command.
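+
+A minimal sketch of that loop, run from PowerShell with the Azure CLI, might look like the following. The policy name, principal ID, and scope are placeholders, and the `--query` path assumes a deployIfNotExists or modify policy definition:
+
+```powershell
+$policyName  = "<policy-definition-name>"      # assumed custom policy definition name
+$principalId = "<managed-identity-object-id>"  # object ID of the assignment's managed identity
+$scope       = "<assignment-scope>"            # for example, a subscription or resource group ID
+
+# Read the roleDefinitionIds declared by the policy definition.
+$roleDefinitionIds = az policy definition show --name $policyName `
+    --query "policyRule.then.details.roleDefinitionIds" --output json | ConvertFrom-Json
+
+# Grant each required role to the managed identity at the assignment scope.
+foreach ($roleDefinitionId in $roleDefinitionIds) {
+    $roleId = ($roleDefinitionId -split "/")[-1]
+    az role assignment create --assignee-object-id $principalId `
+        --assignee-principal-type ServicePrincipal --role $roleId --scope $scope
+}
+```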
healthcare-apis Understand Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md
Previously updated : 02/14/2023 Last updated : 02/22/2023
The MedTech service device message data processing follows these steps and in th
:::image type="content" source="media/understand-service/understand-device-message-flow.png" alt-text="Screenshot of a device message as it processed by the MedTech service." lightbox="media/understand-service/understand-device-message-flow.png"::: ## Ingest
-Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub (`device message event hub`) and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device messages are processed.
+Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device messages are processed.
The device message event hub uses the MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for secure access to the device message event hub.
At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, alon
If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [Resolution Type](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to `Lookup`, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to `Create`, the MedTech service creates minimal Device and Patient resources in the FHIR service. > [!NOTE]
-> The `Resolution Type` can also be adjusted post deployment of the MedTech service in the event that a different type is later desired.
+> The `Resolution Type` can also be adjusted post deployment of the MedTech service if a different `Resolution Type` is later required.
-The MedTech service buffers the FHIR Observations resources created during the transformation stage and provides near real-time processing. However, it can potentially take up to five minutes for FHIR Observation resources to be persisted in the FHIR service.
+The MedTech service provides near real-time processing and also attempts to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data and 300 normalized messages haven't been added to the group, the corresponding FHIR Observations in that group are persisted to the FHIR service after ~five minutes. This means that when there are fewer than 300 normalized messages to be processed, there may be a delay of ~five minutes before FHIR Observations are created or updated in the FHIR service.
+
+> [!NOTE]
+> When multiple device messages contain data for the same FHIR Observation, have the same timestamp, and are sent within the same device message batch (for example, within the ~five minute window or in groups of 300 normalized messages), only the data corresponding to the latest device message for that FHIR Observation is persisted.
+>
+> For example:
+>
+> Device message 1:
+> ```json
+> {
+>   "patientid": "testpatient1",
+>   "deviceid": "testdevice1",
+>   "systolic": "129",
+>   "diastolic": "65",
+>   "measurementdatetime": "2022-02-15T04:00:00.000Z"
+> }
+> ```
+>
+> Device message 2:
+> ```json
+> {
+>   "patientid": "testpatient1",
+>   "deviceid": "testdevice1",
+>   "systolic": "113",
+>   "diastolic": "58",
+>   "measurementdatetime": "2022-02-15T04:00:00.000Z"
+> }
+> ```
+>
+> Assuming these device messages were ingested within the same ~five minute window or in the same group of 300 normalized messages, and since the `measurementdatetime` is the same for both device messages (indicating they contain data for the same FHIR Observation), only device message 2 is persisted to represent the most recent data.
## Persist Persist is the final stage where the FHIR Observation resources from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation resource is new, it's created in the FHIR service. If the FHIR Observation resource already existed, it gets updated in the FHIR service.
import-export Storage Import Export Data From Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-from-blobs.md
Previously updated : 03/14/2022 Last updated : 02/13/2023
You must:
## Step 1: Create an export job
-# [Portal (Preview)](#tab/azure-portal-preview)
+# [Portal](#tab/azure-portal-preview)
-Perform the following steps to order an import job in Azure Import/Export via the Preview portal. The Azure Import/Export service in preview will create a job of the type "Data Box."
+Perform the following steps to order an import job in Azure Import/Export. The Azure Import/Export service creates a job of the type "Data Box."
1. Use your Microsoft Azure credentials to sign in at this URL: [https://portal.azure.com](https://portal.azure.com). 1. Select **+ Create a resource** and search for *Azure Data Box*. Select **Azure Data Box**.
Perform the following steps to order an import job in Azure Import/Export via th
1. Select the **Destination country/region** for the job. 1. Then select **Apply**.
- [![Screenshot of Get Started options for a new export order in Azure Import/Export's Preview portal. The Export From Azure transfer type and the Apply button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png)](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png#lightbox)
+ [![Screenshot of Get Started options for a new export order in Azure Import/Export's portal. The Export From Azure transfer type and the Apply button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png)](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png#lightbox)
1. Choose the **Select** button for **Import/Export Job**.
Perform the following steps to order an import job in Azure Import/Export via th
- Choose to export **All objects** in the storage account.
- ![Screenshot of the Job Details tab for a new export job in the Azure Import Export Jobs Preview portal. Export All is highlighted beside Blobs To Export.](./media/storage-import-export-data-from-blobs/import-export-order-preview-06-a-export-job.png)
+ ![Screenshot of the Job Details tab for a new export job in the Azure Import Export Jobs portal. Export All is highlighted beside Blobs To Export.](./media/storage-import-export-data-from-blobs/import-export-order-preview-06-a-export-job.png)
- Choose **Selected containers and blobs**, and specify containers and blobs to export. You can use more than one of the selection methods. Selecting an **Add** option opens a panel on the right where you can add your selection strings.
Perform the following steps to order an import job in Azure Import/Export via th
|**Add blobs**|Specify individual blobs to export.<br>Select **Add blobs**. Then specify the relative path to the blob, beginning with the container name. Use *$root* to specify the root container.<br>You must provide the blob paths in valid format, as shown in this screenshot, to avoid errors during processing. For more information, see [Examples of valid blob paths](storage-import-export-determine-drives-for-export.md#examples-of-valid-blob-paths).| |**Add prefixes**|Use a prefix to select a set of similarly named containers or similarly named blobs in a container. The prefix may be the prefix of the container name, the complete container name, or a complete container name followed by the prefix of the blob name. |
- :::image type="complex" source="./media/storage-import-export-data-from-blobs/import-export-order-preview-06-b-export-job.png" alt-text="Screenshot showing selected containers and blobs for a new Azure Import/Export export job in the Preview portal.":::
+ :::image type="complex" source="./media/storage-import-export-data-from-blobs/import-export-order-preview-06-b-export-job.png" alt-text="Screenshot showing selected containers and blobs for a new Azure Import/Export export job in the portal.":::
<Blob selections include a container, a blob, and blob prefixes that work like wildcards. The Add Prefixes pane on the right is used to add prefixes that select blobs based on common text in the blob path or name.> :::image-end:::
- - Choose **Export from blob list file (XML format)**, and select an XML file that contains a list of paths and prefixes for the blobs to be exported from the storage account. You must construct the XML file and store it in a container for the storage account. The file cannot be empty.
+ - Choose **Export from blob list file (XML format)**, and select an XML file that contains a list of paths and prefixes for the blobs to be exported from the storage account. You must construct the XML file and store it in a container for the storage account. The file can't be empty.
> [!IMPORTANT] > If you use an XML file to select the blobs to export, make sure that the XML contains valid paths and/or prefixes. If the file is invalid or no data matches the paths specified, the order terminates with partial data or no data exported.
Perform the following steps to order an import job in Azure Import/Export via th
1. In **Return shipping**: 1. Select a shipping carrier from the drop-down list for **Carrier**. The location of the Microsoft datacenter for the selected region determines which carriers are available.
- 1. Enter a **Carrier account number**. The account number for an valid carrier account is required.
+ 1. Enter a **Carrier account number**. The account number for a valid carrier account is required.
1. In the **Return address** area, use **+ Add Address** to add the address to ship to. ![Screenshot of the Return Shipping tab for an import job in Azure Data Box. The Return Shipping tab and the Plus Add Address button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-07-export-job.png) On the **Add Address** blade, you can add an address or use an existing one. When you finish entering address information, select **Add shipping address**.
- ![Screenshot showing an address on the Add Address blade for an import job in Azure Import Export Preview portal. The Add Shipping Address button is highlighted.](../../includes/media/storage-import-export-preview-import-steps/import-export-order-preview-08.png)
+ ![Screenshot showing an address on the Add Address blade for an import job in Azure Import Export portal. The Add Shipping Address button is highlighted.](../../includes/media/storage-import-export-preview-import-steps/import-export-order-preview-08.png)
1. In the **Notification** area, enter email addresses for the people you want to notify of the job's progress.
Perform the following steps to order an import job in Azure Import/Export via th
1. Review the job information. Make a note of the job name and the Azure datacenter shipping address to ship disks back to. This information is used later on the shipping label. 1. Select **Create**.
- ![Screenshot showing the Review Plus Create tab for an Azure Import/Export job in the Preview portal. The validation status, Terms, and Create button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-10-export-job.png)
+ ![Screenshot showing the Review Plus Create tab for an Azure Import/Export job in the portal. The validation status, Terms, and Create button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-10-export-job.png)
1. After the job is created, you'll see the following message.
- ![Screenshot of the status message for a completed order for an Azure Import Export job in the Preview portal. The status and the Go To Resource button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-11-export-job.png)
+ ![Screenshot of the status message for a completed order for an Azure Import Export job in the portal. The status and the Go To Resource button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-11-export-job.png)
You can select **Go to resource** to open the **Overview** of the job.
- [![Screenshot showing the Overview pane for an Azure Import Export job in Created state in the Preview portal.](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png)](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png#lightbox)
--
-# [Portal (Classic)](#tab/azure-portal-classic)
-
-Perform the following steps to create an export job in the Azure portal using the classic Azure Import/Export service.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Search for **import/export jobs**.
-
- ![Screenshot of the Search box at the top of the Azure Portal home page. A search key for the Import Export Jobs Service is entered in the Search box.](../../includes/media/storage-import-export-classic-import-steps/import-to-blob-1.png)
-
-3. Select **+ Create**.
-
- ![Screenshot of the command menu at the top of the Azure Import Export Jobs home page in the Azure portal. The Plus Create command is highlighted.](../../includes/media/storage-import-export-classic-import-steps/import-to-blob-2.png)
-
-4. In **Basics**:
-
- 1. Select a subscription.
- 1. Select a resource group, or select **Create new** and create a new one.
- 1. Enter a descriptive name for the import job. Use the name to track the progress of your jobs.
- * The name may contain only lowercase letters, numbers, and hyphens.
- * The name must start with a letter, and may not contain spaces.
-
- 1. Select **Export from Azure**.
- 1. Select a **Source Azure region**.
-
- If the new import/export experience is available in the selected region, you'll see a note inviting you to try the new experience. Select **Try now**, and follow the steps on the **Portal (Preview)** tab of this section to try the new experience with this order.
-
- ![Screenshot of the Basics tab for an Azure Import Export job. Export From Azure is selected. The Try Now link for the new import/export experience is highlighted.](./media/storage-import-export-data-from-blobs/export-from-blob-3.png)
-
- Select **Next: Job Details >** to proceed.
-
-5. In **Job details**:
-
- 1. Select the Azure region where your data currently is.
- 1. Select the storage account from which you want to export data. Use a storage account close to your location.
-
- The drop-off location is automatically populated based on the region of the storage account selected.
-
- 1. Specify the blob data to export from your storage account to your blank drive or drives. Choose one of the three following methods.
-
- - Choose to **Export all** blob data in the storage account.
-
- ![Screenshot of the Job Details tab for a new export job in Azure Import Export Jobs. Export All is highlighted beside Blobs To Export.](./media/storage-import-export-data-from-blobs/export-from-blob-4.png)
-
- - Choose **Selected containers and blobs**, and specify containers and blobs to export. You can use more than one of the selection methods. Selecting an **Add** option opens a panel on the right where you can add your selection strings.
-
- |Option|Description|
- ||--|
- |**Add containers**|Export all blobs in a container.<br>Select **Add containers**, and enter each container name.|
- |**Add blobs**|Specify individual blobs to export.<br>Select **Add blobs**. Then specify the relative path to the blob, beginning with the container name. Use *$root* to specify the root container.<br>You must provide the blob paths in valid format to avoid errors during processing, as shown in this screenshot. For more information, see [Examples of valid blob paths](storage-import-export-determine-drives-for-export.md#examples-of-valid-blob-paths).|
- |**Add prefixes**|Use a prefix to select a set of similarly named containers or similarly named blobs in a container. The prefix may be the prefix of the container name, the complete container name, or a complete container name followed by the prefix of the blob name. |
-
- :::image type="complex" source="./media/storage-import-export-data-from-blobs/export-from-blob-5.png" alt-text="Screenshot showing selected containers and blobs for a new Azure Import/Export export job.":::
- <Blob selections include a container, a blob, and blob prefixes that work like wildcards. The Add Prefixes pane on the right is used to add prefixes that select blobs based on common text in the blob path or name.>
-
- - Choose **Export from blob list file (XML format)**, and select an XML file that contains a list of paths and prefixes for the blobs to be exported from the storage account. You must construct the XML file and store it in a container for the storage account. The file cannot be empty.
-
- > [!IMPORTANT]
- > If you use an XML file to select the blobs to export, make sure that the XML contains valid paths and/or prefixes. If the file is invalid or no data matches the paths specified, the order terminates with partial data or no data exported.
-
- To see how to add an XML file to a container, see [Export order using XML file](../databox/data-box-deploy-export-ordered.md#export-order-using-xml-file).
-
- ![Screenshot of Job Details for Azure Import/Export job that selects blobs using a blob list file. Blob list file option and selected file are highlighted.](./media/storage-import-export-data-from-blobs/export-from-blob-6.png)
-
- > [!NOTE]
- > If a blob to be exported is in use during data copy, the Azure Import/Export service takes a snapshot of the blob and copies the snapshot.
-
- Select **Next: Shipping >** to proceed.
-
-6. [!INCLUDE [storage-import-export-shipping-step.md](../../includes/storage-import-export-shipping-step.md)]
-
-7. In **Review + create**:
-
- 1. Review the details of the job.
- 1. Make a note of the job name and provided Azure datacenter shipping address for shipping disks to Azure.
-
- > [!NOTE]
- > Always send the disks to the datacenter noted in the Azure portal. If the disks are shipped to the wrong datacenter, the job will not be processed.
-
- 1. Review the **Terms** for your order for privacy and source data deletion. If you agree to the terms, select the check box beneath the terms. Validation of the order begins.
-
- ![Screenshot showing the Review Plus Create tab for an Azure Import/Export job. The validation status, Terms, and Create button are highlighted.](./media/storage-import-export-data-from-blobs/export-from-blob-6-a.png)
-
- 8. After validation passes, select **Create**.
+ [![Screenshot showing the Overview pane for an Azure Import Export job in Created state in the portal.](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png)](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png#lightbox)
# [Azure CLI](#tab/azure-cli)
Install-Module -Name Az.ImportExport
## Step 2: Ship the drives
-If you do not know the number of drives you need, see [Determine how many drives you need](storage-import-export-determine-drives-for-export.md#determine-how-many-drives-you-need). If you know the number of drives, proceed to ship the drives.
+If you don't know the number of drives you need, see [Determine how many drives you need](storage-import-export-determine-drives-for-export.md#determine-how-many-drives-you-need). If you know the number of drives, proceed to ship the drives.
[!INCLUDE [storage-import-export-ship-drives](../../includes/storage-import-export-ship-drives.md)]
If you do not know the number of drives you need, see [Determine how many drives
When the dashboard reports the job is complete, the disks are shipped to you and the tracking number for the shipment is available in the portal.
-1. After you receive the drives with exported data, you need to get the BitLocker keys to unlock the drives. Go to the export job in the Azure portal. Click **Import/Export** tab.
-2. Select and click your export job from the list. Go to **Encryption** and copy the keys.
+1. After you receive the drives with exported data, you need to get the BitLocker keys to unlock the drives. Go to the export job in the Azure portal. Select the **Import/Export** tab.
+2. Select your export job from the list. Go to **Encryption** and copy the keys.
![Screenshot of the Encryption blade for an export job in Azure Import Export Jobs. The Encryption menu item and Copy button for the key are highlighted.](./media/storage-import-export-data-from-blobs/export-from-blob-7.png)
Use the following command to unlock the drive:
`WAImportExport Unlock /bk:<BitLocker key (base 64 string) copied from Encryption blade in Azure portal> /driveLetter:<Drive letter>`
-Here is an example of the sample input.
+Here's an example of the input.
`WAImportExport.exe Unlock /bk:CAAcwBoAG8AdQBsAGQAIABiAGUAIABoAGkAZABkAGUAbgA= /driveLetter:e`
import-export Storage Import Export Data To Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-blobs.md
Previously updated : 03/14/2022 Last updated : 02/01/2023
This step generates a journal file. The journal file stores basic information su
Perform the following steps to prepare the drives. 1. Connect your disk drives to the Windows system via SATA connectors.
-2. Create a single NTFS volume on each drive. Assign a drive letter to the volume. Do not use mountpoints.
+2. Create a single NTFS volume on each drive. Assign a drive letter to the volume. Don't use mountpoints.
3. Enable BitLocker encryption on the NTFS volume. If using a Windows Server system, use the instructions in [How to enable BitLocker on Windows Server 2012 R2](https://thesolving.com/storage/how-to-enable-bitlocker-on-windows-server-2012-r2/). 4. Copy data to encrypted volume. Use drag and drop or Robocopy or any such copy tool. A journal (*.jrn*) file is created in the same folder where you run the tool. If the drive is locked and you need to unlock the drive, the steps to unlock may be different depending on your use case.
- * If you have added data to a pre-encrypted drive (WAImportExport tool was not used for encryption), use the BitLocker key (a numerical password that you specify) in the popup to unlock the drive.
+ * If you have added data to a pre-encrypted drive (WAImportExport tool wasn't used for encryption), use the BitLocker key (a numerical password that you specify) in the popup to unlock the drive.
* If you have added data to a drive that was encrypted by WAImportExport tool, use the following command to unlock the drive:
Perform the following steps to prepare the drives.
|/bk: |The BitLocker key for the drive. Its numerical password from output of `manage-bde -protectors -get D:` | |/srcdir: |The drive letter of the disk to be shipped followed by `:\`. For example, `D:\`. | |/dstdir: |The name of the destination container in Azure Storage. |
- |/blobtype: |This option specifies the type of blobs you want to import the data to. For block blobs, the blob type is `BlockBlob` and for page blobs, it is `PageBlob`. |
- |/skipwrite: | Specifies that there is no new data required to be copied and existing data on the disk is to be prepared. |
- |/enablecontentmd5: |The option when enabled, ensures that MD5 is computed and set as `Content-md5` property on each blob. Use this option only if you want to use the `Content-md5` field after the data is uploaded to Azure. <br> This option does not affect the data integrity check (that occurs by default). The setting does increase the time taken to upload data to cloud. |
+ |/blobtype: |This option specifies the type of blobs you want to import the data to. For block blobs, the blob type is `BlockBlob` and for page blobs, it's `PageBlob`. |
+ |/skipwrite: | Specifies that there's no new data required to be copied and existing data on the disk is to be prepared. |
+ |/enablecontentmd5: |The option when enabled, ensures that MD5 is computed and set as `Content-md5` property on each blob. Use this option only if you want to use the `Content-md5` field after the data is uploaded to Azure. <br> This option doesn't affect the data integrity check (that occurs by default). The setting does increase the time taken to upload data to cloud. |
> [!NOTE] > - If you import a blob with the same name as an existing blob in the destination container, the imported blob will overwrite the existing blob. In earlier tool versions (before 1.5.0.300), the imported blob was renamed by default, and a \Disposition parameter let you specify whether to rename, overwrite, or disregard the blob in the import.
Perform the following steps to prepare the drives.
A journal file with the provided name is created for every run of the command line.
- Together with the journal file, a `<Journal file name>_DriveInfo_<Drive serial ID>.xml` file is also created in the same folder where the tool resides. The .xml file is used in place of the journal file when creating a job if the journal file is too big.
+ Together with the journal file, a `<Journal file name>_DriveInfo_<Drive serial ID>.xml` file is also created in the same folder where the tool resides. The .xml file is used in place of the journal file when creating a job if the journal file is too large.
> [!IMPORTANT] > * Do not modify the journal files or the data on the disk drives, and don't reformat any disks, after completing disk preparation.
Perform the following steps to prepare the drives.
## Step 2: Create an import job
-# [Portal (Preview)](#tab/azure-portal-preview)
+# [Portal](#tab/azure-portal-preview)
[!INCLUDE [storage-import-export-preview-import-steps.md](../../includes/storage-import-export-preview-import-steps.md)]
-# [Portal (Classic)](#tab/azure-portal)
--- # [Azure CLI](#tab/azure-cli) Use the following steps to create an import job in the Azure CLI.
import-export Storage Import Export Data To Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-files.md
Previously updated : 03/14/2022 Last updated : 02/13/2023
This article provides step-by-step instructions on how to use the Azure Import/Export service to securely import large amounts of data into Azure Files. To import data, the service requires you to ship supported disk drives containing your data to an Azure datacenter.
-The Import/Export service supports only import of Azure Files into Azure Storage. Exporting Azure Files is not supported.
+The Import/Export service supports only import of Azure Files into Azure Storage. Exporting Azure Files isn't supported.
In this tutorial, you learn how to:
Do the following steps to prepare the drives.
2. Create a single NTFS volume on each drive. Assign a drive letter to the volume. Do not use mountpoints. 3. Modify the *dataset.csv* file in the root folder where the tool is. Depending on whether you want to import a file or folder or both, add entries in the *dataset.csv* file similar to the following examples.
- - **To import a file**: In the following example, the data to copy is on the F: drive. Your file *MyFile1.txt* is copied to the root of the *MyAzureFileshare1*. If the *MyAzureFileshare1* does not exist, it's created in the Azure Storage account. Folder structure is maintained.
+ - **To import a file**: In the following example, the data to copy is on the F: drive. Your file *MyFile1.txt* is copied to the root of the *MyAzureFileshare1*. If the *MyAzureFileshare1* doesn't exist, it's created in the Azure Storage account. Folder structure is maintained.
``` BasePath,DstItemPathOrPrefix,ItemType
Do the following steps to prepare the drives.
``` > [!NOTE]
- > The /Disposition parameter, which let you choose what to do when you import a file that already exists in earlier versions of the tool, is not supported in Azure Import/Export version 2.2.0.300. In the earlier tool versions, an imported file with the same name as an existing file was renamed by default.
+ > The /Disposition parameter, which let you choose what to do when you import a file that already exists in earlier versions of the tool, isn't supported in Azure Import/Export version 2.2.0.300. In the earlier tool versions, an imported file with the same name as an existing file was renamed by default.
Multiple entries can be made in the same file corresponding to folders or files that are imported.
Do the following steps to prepare the drives.
This example assumes that two disks are attached and basic NTFS volumes G:\ and H:\ are created. H:\is not encrypted while G: is already encrypted. The tool formats and encrypts the disk that hosts H:\ only (and not G:\).
- - **For a disk that is not encrypted**: Specify *Encrypt* to enable BitLocker encryption on the disk.
+ - **For a disk that isn't encrypted**: Specify *Encrypt* to enable BitLocker encryption on the disk.
``` DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey
For additional samples, go to [Samples for journal files](#samples-for-journal-f
## Step 2: Create an import job
-### [Portal (Preview)](#tab/azure-portal-preview)
+### [Portal](#tab/azure-portal-preview)
[!INCLUDE [storage-import-export-preview-import-steps.md](../../includes/storage-import-export-preview-import-steps.md)]
-### [Portal (Classic)](#tab/azure-portal-classic)
--- ### [Azure CLI](#tab/azure-cli) Use the following steps to create an import job in the Azure CLI.
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
For Event Hubs and Service Bus, IoT Central exports a new message quickly after
For Blob storage, messages are batched and exported once per minute. The exported files use the same format as the message files exported by [IoT Hub message routing](../../iot-hub/tutorial-routing.md) to blob storage. > [!NOTE]
-> For Blob storage, ensure that your devices are sending messages that have `contentType: application/JSON` and `contentEncoding:utf-8` (or `utf-16`, `utf-32`). See the [IoT Hub documentation](../../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-message-body) for an example.
+> For Blob storage, ensure that your devices are sending messages that have `contentType: application/JSON` and `contentEncoding:utf-8` (or `utf-16`, `utf-32`). See the [IoT Hub documentation](../../iot-hub/iot-hub-devguide-routing-query-syntax.md#query-based-on-message-body) for an example.
The device that sent the telemetry is represented by the device ID (see the following sections). To get the names of the devices, export device data and correlate each message by using the **connectionDeviceId** that matches the **deviceId** of the device message.
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
In the rest of this section, you'll use your Windows command prompt.
4. Enter the following command to build and run the X.509 device provisioning sample (replace `<id-scope>` with the ID Scope that you copied in step 2. Replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands. ```cmd
- run -- -s <id-scope> -c <your-certificate-folder>\certs\device-01-full-chain.cert.pfx -p 1234
+ dotnet run -- -s <id-scope> -c <your-certificate-folder>\certs\device-01-full-chain.cert.pfx -p 1234
``` The device connects to DPS and is assigned to an IoT hub. Then, the device sends a telemetry message to the IoT hub. You should see output similar to the following:
In the rest of this section, you'll use your Windows command prompt.
5. To register your second device, rerun the sample using its full chain certificate. ```cmd
- run -- -s <id-scope> -c <your-certificate-folder>\certs\device-02-full-chain.cert.pfx -p 1234
+ dotnet run -- -s <id-scope> -c <your-certificate-folder>\certs\device-02-full-chain.cert.pfx -p 1234
``` ::: zone-end
iot-edge How To Configure Multiple Nics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-multiple-nics.md
For more information about networking concepts and configurations, see [Azure Io
- Virtual switch different from the default one used during EFLOW installation. For more information on creating a virtual switch, see [Create a virtual switch for Azure IoT Edge for Linux on Windows](./how-to-create-virtual-switch.md). ## Create and assign a virtual switch
-During the EFLOW VM deployment, the VM had a switched assigned for all the communications between the Windows host OS and the virtual machine. This will always be the switch used for VM lifecycle management communications, and it's not possible to delete it.
+During the EFLOW VM deployment, the VM had a switch assigned for all communications between the Windows host OS and the virtual machine. EFLOW always uses this switch for VM lifecycle management communications, and you can't delete it.
-The following steps in this section show how to assign a network interface to the EFLOW virtual machine. Ensure that the virtual switch being used and the networking configuration aligns with your networking environment. For more information about networking concepts like type of switches, DHCP and DNS, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md).
+The following steps in this section show how to assign a network interface to the EFLOW virtual machine. Ensure that the virtual switch and the networking configuration align with your networking environment. For more information about networking concepts like type of switches, DHCP and DNS, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md).
1. Open an elevated _PowerShell_ session by starting with **Run as Administrator**.
-1. Check the virtual switch to be assigned to the EFLOW VM is available.
+1. Check that the virtual switch you assign to the EFLOW VM is available.
```powershell Get-VMSwitch -Name "{switchName}" -SwitchType {switchType} ```
The following steps in this section show how to assign a network interface to th
``` :::image type="content" source="./medilet-add-eflow-network.png" alt-text="EFLOW attach virtual switch":::
-1. Check that the virtual switch was correctly assigned to the EFLOW VM.
+1. Check that you correctly assigned the virtual switch to the EFLOW VM.
```powershell Get-EflowNetwork -vSwitchName "{switchName}" ```
For more information about attaching a virtual switch to the EFLOW VM, see [Powe
## Create and assign a network endpoint
-Once the virtual switch was successfully assigned to the EFLOW VM, you need to create a networking endpoint assigned to virtual switch to finalize the network interface creation. If you're using Static IP, ensure to use the appropriate parameters: _ip4Address_, _ip4GatewayAddress_ and _ip4PrefixLength_.
+Once you successfully assign the virtual switch to the EFLOW VM, create a networking endpoint assigned to the virtual switch to finalize the network interface creation. If you're using a static IP, make sure you use the appropriate parameters: _ip4Address_, _ip4GatewayAddress_, and _ip4PrefixLength_.
1. Open an elevated _PowerShell_ session by starting with **Run as Administrator**. 1. Create the EFLOW VM network endpoint
- - If you're using DHCP, no Static IP parameters are needed.
+ - If you're using DHCP, you don't need Static IP parameters.
```powershell Add-EflowVmEndpoint -vSwitchName "{switchName}" -vEndpointName "{EndpointName}" ```
Once the virtual switch was successfully assigned to the EFLOW VM, you need to c
:::image type="content" source="./medilet-add-eflow-endpoint.png" alt-text="EFLOW attach network endpoint":::
-1. Check that the network endpoint was correctly created and assigned to the EFLOW VM. You should see the two network interfaces assigned to the virtual machine.
+1. Check that you correctly created the network endpoint and assigned it to the EFLOW VM. You should see two network interfaces assigned to the virtual machine.
```powershell Get-EflowVmEndpoint ```
For more information about creating and attaching a network endpoint to the EFLO
## Check the VM network configurations
-The final step is to make sure the networking configurations were applied correctly and the EFLOW VM has the new network interface configured. The new interface will show up as _"eth1"_ if it's the first extra interface added to the VM.
+The final step is to make sure the networking configurations applied correctly and the EFLOW VM has the new network interface configured. The new interface shows up as _"eth1"_ if it's the first extra interface added to the VM.
1. Open PowerShell in an elevated session. You can do so by opening the **Start** pane on Windows and typing in "PowerShell". Right-click the **Windows PowerShell** app that shows up and select **Run as administrator**.
The final step is to make sure the networking configurations were applied correc
ifconfig ```
- The default interface **eth0** is the one used for all the VM management. You should see another interface, like **eth1**, which is the new interface that was assigned to the VM. Following the examples above, if you previously assigned a new endpoint with the static IP 192.168.0.103 you should see the interface **eth1** with the _inet addr: 192.168.0.103_.
+ The default interface **eth0** is the one used for all the VM management. You should see another interface, like **eth1**, which is the new interface you assigned to the VM. Following the examples, if you previously assigned a new endpoint with the static IP 192.168.0.103 you should see the interface **eth1** with the _inet addr: 192.168.0.103_.
- ![EFLOW VM network interfaces](./medilet-eflow-ifconfig.png)
+ :::image type="content" source="./medilet-eflow-ifconfig.png" alt-text="Screenshot of EFLOW virtual machine network interfaces.":::
## Next steps
-Follow the steps in [How to configure networking for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md) to make sure all the networking configurations were applied correctly.
+Follow the steps in [How to configure networking for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md) to make sure you applied all the networking configurations correctly.
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
When you use the **Set modules** wizard to create deployments for IoT Edge devic
To configure the IoT Edge agent and IoT Edge hub modules, select **Runtime Settings** on the first step of the wizard.
-![Configure advanced Edge Runtime settings](./media/how-to-configure-proxy-support/configure-runtime.png)
Add the **https_proxy** environment variable to both the IoT Edge agent and IoT Edge hub module definitions. If you included the **UpstreamProtocol** environment variable in the config file on your IoT Edge device, add that to the IoT Edge agent module definition too.
-![Set https_proxy environment variable](./media/how-to-configure-proxy-support/edgehub-environmentvar.png)
All other modules that you add to a deployment manifest follow the same pattern. Select **Apply** to save your changes.
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
When a device connects to an IoT Edge gateway, the downstream device is the clie
When you use a self-signed root CA certificate for an IoT Edge gateway, it needs to be installed on or provided to all the downstream devices attempting to connect to the gateway.
-![Gateway certificate setup](./media/how-to-create-transparent-gateway/gateway-setup.png)
To learn more about IoT Edge certificates and some production implications, see [IoT Edge certificate usage details](iot-edge-certs.md).
This command tests connections over MQTTS (port 8883). If you're using a differe
The output of this command may be long, including information about all the certificates in the chain. If your connection is successful, you'll see a line like `Verification: OK` or `Verify return code: 0 (ok)`.
-![Verify gateway connection](./media/how-to-connect-downstream-device/verification-ok.png)
## Troubleshoot the gateway connection
iot-edge How To Continuous Integration Continuous Deployment Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md
Azure Pipelines includes a built-in Azure IoT Edge task that helps you adopt DevOps with your Azure IoT Edge applications. This article demonstrates how to use the continuous integration and continuous deployment features of Azure Pipelines to build, test, and deploy applications quickly and efficiently to your Azure IoT Edge using the classic editor. Alternatively, you can [use YAML](how-to-continuous-integration-continuous-deployment.md).
-![Diagram - CI and CD branches for development and production](./media/how-to-continuous-integration-continuous-deployment-classic/model.png)
In this article, you learn how to use the built-in [Azure IoT Edge tasks](/azure/devops/pipelines/tasks/build/azure-iot-edge) for Azure Pipelines to create build and release pipelines for your IoT Edge solution. Each Azure IoT Edge task added to your pipeline implements one of the following four actions:
In this section, you create a new build pipeline. You configure the pipeline to
1. Sign in to your Azure DevOps organization (`https://dev.azure.com/{your organization}`) and open the project that contains your IoT Edge solution repository.
- ![Open your DevOps project](./media/how-to-continuous-integration-continuous-deployment-classic/initial-project.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/initial-project.png" alt-text="Screenshot that shows how to open your DevOps project.":::
2. From the left pane menu in your project, select **Pipelines**. Select **Create Pipeline** at the center of the page. Or, if you already have build pipelines, select the **New pipeline** button in the top right.
- ![Create a new build pipeline](./media/how-to-continuous-integration-continuous-deployment-classic/add-new-pipeline.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/add-new-pipeline.png" alt-text="Screenshot that shows how to create a new build pipeline.":::
3. At the bottom of the **Where is your code?** page, select **Use the classic editor**. If you wish to use YAML to create your project's build pipelines, see the [YAML guide](how-to-continuous-integration-continuous-deployment.md).
- ![Select Use the classic editor](./media/how-to-continuous-integration-continuous-deployment-classic/create-without-yaml.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/create-without-yaml.png" alt-text="Screenshot that shows how to use the classic editor.":::
4. Follow the prompts to create your pipeline. 1. Provide the source information for your new build pipeline. Select **Azure Repos Git** as the source, then select the project, repository, and branch where your IoT Edge solution code is located. Then, select **Continue**.
- ![Select your pipeline source](./media/how-to-continuous-integration-continuous-deployment-classic/pipeline-source.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/pipeline-source.png" alt-text="Screenshot showing how to select your pipeline source.":::
2. Select **Empty job** instead of a template.
- ![Start with an empty job for your build pipeline](./media/how-to-continuous-integration-continuous-deployment-classic/start-with-empty-build-job.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/start-with-empty-build-job.png" alt-text="Screenshot showing how to start with an empty job for your build pipeline.":::
5. Once your pipeline is created, you are taken to the pipeline editor. Here, you can change the pipeline's name, agent pool, and agent specification.
In this section, you create a new build pipeline. You configure the pipeline to
11. Open the **Triggers** tab and check the box to **Enable continuous integration**. Make sure the branch containing your code is included.
- ![Turn on continuous integration trigger](./media/how-to-continuous-integration-continuous-deployment-classic/configure-trigger.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/configure-trigger.png" alt-text="Screenshot showing how to turn on the continuous integration trigger.":::
12. Select **Save** from the **Save & queue** dropdown.
iot-edge How To Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment.md
You can easily adopt DevOps with your Azure IoT Edge applications with the built-in Azure IoT Edge tasks in Azure Pipelines. This article demonstrates how you can use Azure Pipelines to build, test, and deploy Azure IoT Edge modules using YAML. Alternatively, you can [use the classic editor](how-to-continuous-integration-continuous-deployment-classic.md).
-![Diagram - CI and CD branches for development and production](./media/how-to-continuous-integration-continuous-deployment/model.png)
In this article, you learn how to use the built-in [Azure IoT Edge tasks](/azure/devops/pipelines/tasks/build/azure-iot-edge) for Azure Pipelines to create build and release pipelines for your IoT Edge solution. Each Azure IoT Edge task added to your pipeline implements one of the following four actions:
In this section, you create a new build pipeline. You configure the pipeline to
1. Sign in to your Azure DevOps organization (`https://dev.azure.com/{your organization}`) and open the project that contains your IoT Edge solution repository.
- ![Open your DevOps project](./media/how-to-continuous-integration-continuous-deployment/initial-project.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/initial-project.png" alt-text="Screenshot showing how to open your DevOps project.":::
2. From the left pane menu in your project, select **Pipelines**. Select **Create Pipeline** at the center of the page. Or, if you already have build pipelines, select the **New pipeline** button in the top right.
- ![Create a new build pipeline using the New pipeline button](./media/how-to-continuous-integration-continuous-deployment/add-new-pipeline.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/add-new-pipeline.png" alt-text="Screenshot showing how to create a new build pipeline using the New pipeline button.":::
3. On the **Where is your code?** page, select **Azure Repos Git `YAML`**. If you wish to use the classic editor to create your project's build pipelines, see the [classic editor guide](how-to-continuous-integration-continuous-deployment-classic.md). 4. Select the repository you are creating a pipeline for.
- ![Select the repository for your build pipeline](./media/how-to-continuous-integration-continuous-deployment/select-repository.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/select-repository.png" alt-text="Screenshot showing how to select the repository for your build pipeline.":::
5. On the **Configure your pipeline** page, select **Starter pipeline**. If you have a preexisting Azure Pipelines YAML file you wish to use to create this pipeline, you can select **Existing Azure Pipelines YAML file** and provide the branch and path in the repository to the file.
In this section, you create a new build pipeline. You configure the pipeline to
Select **Show assistant** to open the **Tasks** palette.
- ![Select Show assistant to open Tasks palette](./media/how-to-continuous-integration-continuous-deployment/show-assistant.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/show-assistant.png" alt-text="Screenshot that shows how to select Show assistant to open Tasks palette.":::
7. To add a task, place your cursor at the end of the YAML or wherever you want the instructions for your task to be added. Search for and select **Azure IoT Edge**. Fill out the task's parameters as follows. Then, select **Add**.
In this section, you create a new build pipeline. You configure the pipeline to
For more information about this task and its parameters, see [Azure IoT Edge task](/azure/devops/pipelines/tasks/build/azure-iot-edge).
- ![Use Tasks palette to add tasks to your pipeline](./media/how-to-continuous-integration-continuous-deployment/add-build-task.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/add-build-task.png" alt-text="Screenshot showing how to use the Tasks palette to add tasks to your pipeline.":::
>[!TIP] > After each task is added, the editor will automatically highlight the added lines. To prevent accidental overwriting, deselect the lines and provide a new space for your next task before adding additional tasks.
In this section, you create a new build pipeline. You configure the pipeline to
10. The trigger for continuous integration is enabled by default for your YAML pipeline. If you wish to edit these settings, select your pipeline and click **Edit** in the top right. Select **More actions** next to the **Run** button in the top right and go to **Triggers**. **Continuous integration** shows as enabled under your pipeline's name. If you wish to see the details for the trigger, check the **Override the YAML continuous integration trigger from here** box.
- ![To review your pipeline's trigger settings, see Triggers under More actions](./media/how-to-continuous-integration-continuous-deployment/check-trigger-settings.png)
+ :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/check-trigger-settings.png" alt-text="Screenshot showing how to review your pipeline's trigger settings from the Triggers menu under More actions.":::
Continue to the next section to build the release pipeline.
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
After you add a module to a deployment, you can select its name to open the **Up
If you're creating a layered deployment, you may be configuring a module that exists in other deployments targeting the same devices. To update the module twin without overwriting other versions, open the **Module Twin Settings** tab. Create a new **Module Twin Property** with a unique name for a subsection within the module twin's desired properties, for example `properties.desired.settings`. If you define properties within just the `properties.desired` field, it will overwrite the desired properties for the module defined in any lower priority deployments.
-![Set module twin property for layered deployment](./media/how-to-deploy-monitor/module-twin-property.png)
For more information about module twin configuration in layered deployments, see [Layered deployment](module-deployment-monitoring.md#layered-deployment).
When you modify a deployment, the changes immediately replicate to all targeted
1. Select the **Metrics** tab and click the **Edit Metrics** button. Add or modify custom metrics, using the example syntax as a guide. Select **Save**.
- ![Edit custom metrics in a deployment](./media/how-to-deploy-monitor/metric-list.png)
+ :::image type="content" source="./media/how-to-deploy-monitor/metric-list.png" alt-text="Screenshot showing how to edit custom metrics in a deployment.":::
1. Select the **Labels** tab, make any desired changes, and select **Save**.
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
A deployment manifest is a JSON document that describes which modules to deploy,
- **IoT Edge Module Name**: `azureblobstorageoniotedge` - **Image URI**: `mcr.microsoft.com/azure-blob-storage:latest`
- ![Screenshot shows the Module Settings tab of the Add I o T Edge Module page.](./media/how-to-deploy-blob/addmodule-tab1.png)
+ :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab1.png" alt-text="Screenshot showing the Module Settings tab of the Add I o T Edge Module page.":::
Don't select **Add** until you've specified values on the **Module Settings**, **Container Create Options**, and **Module Twin Settings** tabs as described in this procedure.
A deployment manifest is a JSON document that describes which modules to deploy,
3. Open the **Container Create Options** tab.
- ![Screenshot shows the Container Create Options tab of the Add I o T Edge Module page.](./media/how-to-deploy-blob/addmodule-tab3.png)
+ :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab3.png" alt-text="Screenshot showing the Container Create Options tab of the Add I o T Edge Module page.":::
Copy and paste the following JSON into the box, to provide storage account information and a mount for the storage on your device.
A deployment manifest is a JSON document that describes which modules to deploy,
5. On the **Module Twin Settings** tab, copy the following JSON and paste it into the box.
- ![Screenshot shows the Module Twin Settings tab of the Add I o T Edge Module page.](./media/how-to-deploy-blob/addmodule-tab4.png)
+ :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab4.png" alt-text="Screenshot showing the Module Twin Settings tab of the Add I o T Edge Module page.":::
Configure each property with an appropriate value, as indicated by the placeholders. If you are using the IoT Edge simulator, set the values to the related environment variables for these properties as described by [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties).
Azure IoT Edge provides templates in Visual Studio Code to help you develop edge
1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**.
- ![Run New IoT Edge Solution](./media/how-to-develop-csharp-module/new-solution.png)
+ :::image type="content" source="./media/how-to-develop-csharp-module/new-solution.png" alt-text="Screenshot showing how to run the New IoT Edge Solution.":::
Follow the prompts in the command palette to create your solution.
Azure IoT Edge provides templates in Visual Studio Code to help you develop edge
} ```
- ![Update module createOptions - Visual Studio Code](./media/how-to-deploy-blob/create-options.png)
+ :::image type="content" source="./media/how-to-deploy-blob/create-options.png" alt-text="Screenshot showing how to update module createOptions in Visual Studio Code.":::
1. Replace `<your storage account name>` with a name that you can remember. Account names should be 3 to 24 characters long, with lowercase letters and numbers. No spaces.
Azure IoT Edge provides templates in Visual Studio Code to help you develop edge
} ```
- ![set desired properties for azureblobstorageoniotedge - Visual Studio Code](./media/how-to-deploy-blob/devicetocloud-deviceautodelete.png)
+ :::image type="content" source="./media/how-to-deploy-blob/devicetocloud-deviceautodelete.png" alt-text="Screenshot showing how to set desired properties for azureblobstorageoniotedge in Visual Studio Code.":::
For information on configuring deviceToCloudUploadProperties and deviceAutoDeleteProperties after your module has been deployed, see [Edit the Module Twin](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Edit-Module-Twin). For more information about container create options, restart policy, and desired status, see [EdgeAgent desired properties](module-edgeagent-edgehub.md#edgeagent-desired-properties).
In addition, a blob storage module also requires the HTTPS_PROXY setting in the
1. Add `HTTPS_PROXY` for the **Name** and your proxy URL for the **Value**.
- ![Screenshot shows the Update I o T Edge Module pane where you can enter the specified values.](./media/how-to-deploy-blob/https-proxy-config.png)
+ :::image type="content" source="./media/how-to-deploy-blob/https-proxy-config.png" alt-text="Screenshot showing the Update I o T Edge Module pane where you can enter the specified values.":::
1. Click **Update**, then **Review + Create**.
In addition, a blob storage module also requires the HTTPS_PROXY setting in the
1. Verify the setting by selecting the module from the device details page, and on the lower part of the **IoT Edge Modules Details** page select the **Environment Variables** tab.
- ![Screenshot shows Environment Variables tab.](./media/how-to-deploy-blob/verify-proxy-config.png)
+ :::image type="content" source="./media/how-to-deploy-blob/verify-proxy-config.png" alt-text="Screenshot showing the Environment Variables tab.":::
## Next steps
iot-edge How To Deploy Modules Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-cli.md
[!INCLUDE [iot-edge-version-all-supported](includes/iot-edge-version-all-supported.md)]
-Once you create IoT Edge modules with your business logic, you want to deploy them to your devices to operate at the edge. If you have multiple modules that work together to collect and process data, you can deploy them all at once and declare the routing rules that connect them.
+Once you create Azure IoT Edge modules with your business logic, you want to deploy them to your devices to operate at the edge. If you have multiple modules that work together to collect and process data, you can deploy them all at once. You can also declare the routing rules that connect them.
-[Azure CLI](/cli/azure) is an open-source cross platform command-line tool for managing Azure resources such as IoT Edge. It enables you to manage Azure IoT Hub resources, device provisioning service instances, and linked-hubs out of the box. The new IoT extension enriches Azure CLI with features such as device management and full IoT Edge capability.
+[Azure CLI](/cli/azure) is an open-source, cross-platform command-line tool for managing Azure resources such as IoT Edge. It enables you to manage Azure IoT Hub resources, device provisioning service instances, and linked-hubs out of the box. The new IoT extension enriches Azure CLI with features such as device management and full IoT Edge capability.
This article shows how to create a JSON deployment manifest, then use that file to push the deployment to an IoT Edge device. For information about creating a deployment that targets multiple devices based on their shared tags, see [Deploy and monitor IoT Edge modules at scale](how-to-deploy-cli-at-scale.md).
This article shows how to create a JSON deployment manifest, then use that file
If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md).
-* [Azure CLI](/cli/azure/install-azure-cli) in your environment. At a minimum, your Azure CLI version must be 2.0.70 or above. Use `az --version` to validate. This version supports az extension commands and introduces the Knack command framework.
+* [Azure CLI](/cli/azure/install-azure-cli) in your environment. At a minimum, your Azure CLI version must be 2.0.70 or higher. Use `az --version` to validate. This version supports az extension commands and introduces the Knack command framework.
* The [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension). ## Configure a deployment manifest A deployment manifest is a JSON document that describes which modules to deploy, how data flows between the modules, and desired properties of the module twins. For more information about how deployment manifests work and how to create them, see [Understand how IoT Edge modules can be used, configured, and reused](module-composition.md).
-To deploy modules using the Azure CLI, save the deployment manifest locally as a .json file. You will use the file path in the next section when you run the command to apply the configuration to your device.
+To deploy modules using the Azure CLI, save the deployment manifest locally as a .json file. You use the file path in the next section when you run the command to apply the configuration to your device.
Here's a basic deployment manifest with one module as an example:
Here's a basic deployment manifest with one module as an example:
You deploy modules to your device by applying the deployment manifest that you configured with the module information.
-Change directories into the folder where your deployment manifest is saved. If you used one of the Visual Studio Code IoT Edge templates, use the `deployment.json` file in the **config** folder of your solution directory and not the `deployment.template.json` file.
+Change directories into the folder where you saved your deployment manifest. If you used one of the Visual Studio Code IoT Edge templates, use the `deployment.json` file in the **config** folder of your solution directory and not the `deployment.template.json` file.
Use the following command to apply the configuration to an IoT Edge device:
Use the following command to apply the configuration to an IoT Edge device:
The device ID parameter is case-sensitive. The content parameter points to the deployment manifest file that you saved.
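For reference, a minimal sketch of that command (the device, hub, and manifest file names are placeholders):

```bash
# Apply the deployment manifest to a single IoT Edge device.
az iot edge set-modules --device-id myEdgeDevice --hub-name myIoTHub --content ./deployment.json
```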
- ![az iot edge set-modules output](./media/how-to-deploy-cli/set-modules.png)
## View modules on your device
View the modules on your IoT Edge device:
The device ID parameter is case-sensitive.
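A minimal sketch of the listing command, again with placeholder names:

```bash
# List the module identities registered on the device.
az iot hub module-identity list --device-id myEdgeDevice --hub-name myIoTHub
```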
- ![az iot hub module-identity list output](./media/how-to-deploy-cli/list-modules.png)
## Next steps
iot-edge How To Deploy Modules Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-portal.md
You can quickly deploy a module from the Azure Marketplace onto your device in y
1. On the upper bar, select **Set Modules**. 1. In the **IoT Edge Modules** section, click **Add**, and select **Marketplace Module** from the drop-down menu.
-![Add module in IoT Hub](./media/how-to-deploy-modules-portal/iothub-add-module.png)
Choose a module from the **IoT Edge Module Marketplace** page. The module you select is automatically configured for your subscription, resource group, and device. It then appears in your list of IoT Edge modules. Some modules may require additional configuration.
iot-edge How To Deploy Modules Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-vscode.md
You can use the Azure IoT extensions for Visual Studio Code to perform operation
1. At the bottom of the Explorer, expand the **Azure IoT Hub** section.
- ![Expand Azure IoT Hub section](./media/how-to-deploy-modules-vscode/azure-iot-hub-devices.png)
+ :::image type="content" source="./media/how-to-deploy-modules-vscode/azure-iot-hub-devices.png" alt-text="Screenshot showing the expanded Azure I o T Hub section.":::
1. Click on the **...** in the **Azure IoT Hub** section header. If you don't see the ellipsis, hover over the header.
You deploy modules to your device by applying the deployment manifest that you c
1. Navigate to the deployment manifest JSON file that you want to use, and click **Select Edge Deployment Manifest**.
- ![Select Edge Deployment Manifest](./media/how-to-deploy-modules-vscode/select-deployment-manifest.png)
+ :::image type="content" source="./media/how-to-deploy-modules-vscode/select-deployment-manifest.png" alt-text="Screenshot showing where to select the I o T Edge Deployment Manifest.":::
The results of your deployment are printed in the Visual Studio Code output. Successful deployments are applied within a few minutes if the target device is running and connected to the internet.
iot-edge How To Deploy Vscode At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-vscode-at-scale.md
After you have configured the deployment manifest and configured tags in the dev
1. Provide values as prompted, starting with the **deployment ID**.
- ![Specify a deployment ID](./media/how-to-deploy-monitor-vscode/create-deployment-at-scale.png)
+ :::image type="content" source="./media/how-to-deploy-monitor-vscode/create-deployment-at-scale.png" alt-text="Screenshot showing how to specify a deployment ID.":::
Specify values for these parameters:
iot-edge How To Edgeagent Direct Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-edgeagent-direct-method.md
az iot hub invoke-module-method --method-name 'ping' -n <hub name> -d <device na
In the Azure portal, invoke the method with the method name `ping` and an empty JSON payload `{}`.
-![Invoke direct method 'ping' in Azure portal](./media/how-to-edgeagent-direct-method/ping-direct-method.png)
## Restart module
In the Azure portal, invoke the method with the method name `RestartModule` and
} ```
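The same method can also be invoked from the Azure CLI; a hedged sketch with placeholder hub, device, and module names:

```bash
# Invoke the RestartModule direct method on the edgeAgent module.
az iot hub invoke-module-method --method-name 'RestartModule' -n myIoTHub -d myEdgeDevice -m '$edgeAgent' \
    --method-payload '{"schemaVersion": "1.0", "id": "SimulatedTemperatureSensor"}'
```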
-![Invoke direct method 'RestartModule' in Azure portal](./media/how-to-edgeagent-direct-method/restartmodule-direct-method.png)
## Diagnostic direct methods
iot-edge How To Install Iot Edge Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-kubernetes.md
IoT Edge can be installed on Kubernetes by using [KubeVirt](https://www.cncf.io/
## Architecture
-[![IoT Edge on Kubernetes with KubeVirt](./media/how-to-install-iot-edge-kubernetes/iotedge-kubevirt.png)](./media/how-to-install-iot-edge-kubernetes/iotedge-kubevirt.png#lightbox)
| Note | Description | |-|-|
iot-edge How To Install Iot Edge Ubuntuvm Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-ubuntuvm-bicep.md
You can't deploy a remote Bicep file. Save a copy of the [Bicep file](https://ra
The **DNS Name** can also be obtained from the **Overview** section of the newly deployed virtual machine within the Azure portal.
- > [!div class="mx-imgBorder"]
- > [![Screenshot showing the DNS name of the IoT Edge Virtual Machine.](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png)
+ :::image type="content" source="./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png" alt-text="Screenshot showing the DNS name of the I o T Edge virtual machine." lightbox="./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png":::
1. If you want to SSH into this VM after setup, use the associated **DNS Name** with the command: `ssh <adminUsername>@<DNS_Name>`
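If you prefer the command line, a hedged way to retrieve the same DNS name (the resource group and VM names below are placeholders from your own deployment):

```bash
# Query the fully qualified domain name of the deployed VM.
az vm show --resource-group IoTEdgeResources --name myEdgeVM --show-details --query fqdns --output tsv
```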
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
Using a self-signed certificate authority (CA) certificate as a root of trust wi
1. Apply the configuration. ```bash
- sudo iotege config apply
+ sudo iotedge config apply
``` ### Install root CA to OS certificate store
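As a hedged sketch of that step on an Ubuntu-based device (the certificate file name is a placeholder; `update-ca-certificates` expects a `.crt` extension):

```bash
# Copy the root CA certificate into the OS trust store and refresh it.
sudo cp azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
sudo update-ca-certificates
```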
Server certificates may be issued off the Edge CA certificate or through a DPS-c
## Next steps
-Installing certificates on an IoT Edge device is a necessary step before deploying your solution in production. Learn more about how to [Prepare to deploy your IoT Edge solution in production](production-checklist.md).
+Installing certificates on an IoT Edge device is a necessary step before deploying your solution in production. Learn more about how to [Prepare to deploy your IoT Edge solution in production](production-checklist.md).
iot-edge How To Monitor Iot Edge Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-iot-edge-deployments.md
To view the details of a deployment and monitor the devices running it, use the
1. Select the deployment that you want to monitor. 1. On the **Deployment Details** page, scroll down to the bottom section and select the **Target Condition** tab. Select **View** to list the devices that match the target condition. You can change the condition and also the **Priority**. Select **Save** if you made changes.
- ![View targeted devices for a deployment](./media/how-to-monitor-iot-edge-deployments/target-devices.png)
+ :::image type="content" source="./media/how-to-monitor-iot-edge-deployments/target-devices.png" alt-text="Screenshot showing targeted devices for a deployment.":::
1. Select the **Metrics** tab. If you choose a metric from the **Select Metric** drop-down, a **View** button appears for you to display the results. You can also select **Edit Metrics** to adjust the criteria for any custom metrics that you have defined. Select **Save** if you made changes.
- ![View metrics for a deployment](./media/how-to-monitor-iot-edge-deployments/deployment-metrics-tab.png)
+ :::image type="content" source="./media/how-to-monitor-iot-edge-deployments/deployment-metrics-tab.png" alt-text="Screenshot showing the metrics for a deployment.":::
To make changes to your deployment, see [Modify a deployment](how-to-deploy-at-scale.md#modify-a-deployment).
iot-edge How To Monitor Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-module-twins.md
To view the JSON for the module twin:
1. Select the **Device ID** of the IoT Edge device with the modules you want to monitor. 1. Select the module name from the **Modules** tab and then select **Module Identity Twin** from the upper menu bar.
- ![Select a module twin to view in the Azure portal](./media/how-to-monitor-module-twins/select-module-twin.png)
+ :::image type="content" source="./media/how-to-monitor-module-twins/select-module-twin.png" alt-text="Screenshot showing how to select a module twin to view in the Azure portal.":::
If you see the message "A module identity doesn't exist for this module", this error indicates that the back-end solution that originally created the identity is no longer available.
To review and edit a module twin:
1. In the **Explorer**, expand the **Azure IoT Hub**, and then expand the device with the module you want to monitor. 1. Right-click the module and select **Edit Module Twin**. A temporary file of the module twin is downloaded to your computer and displayed in Visual Studio Code.
- ![Get a module twin to edit in Visual Studio Code](./media/how-to-monitor-module-twins/edit-module-twin-vscode.png)
+ :::image type="content" source="./media/how-to-monitor-module-twins/edit-module-twin-vscode.png" alt-text="Screenshot showing how to get a module twin to edit in Visual Studio Code.":::
If you make changes, select **Update Module Twin** above the code in the editor to save changes to your IoT hub.
- ![Update a module twin in Visual Studio Code](./media/how-to-monitor-module-twins/update-module-twin-vscode.png)
+ :::image type="content" source="./media/how-to-monitor-module-twins/update-module-twin-vscode.png" alt-text="Screenshot showing how to update a module twin in Visual Studio Code.":::
### Monitor module twins in Azure CLI
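For example, a minimal sketch of viewing a module twin from the CLI (device, module, and hub names are placeholders):

```bash
# Show the twin of the edgeAgent module on a device.
az iot hub module-twin show --device-id myEdgeDevice --module-id '$edgeAgent' --hub-name myIoTHub
```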
iot-edge How To Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-observability.md
In order to go beyond abstract considerations, we'll use a *real-life* scenario
### La Niña
-![Illustration of La Nina solution collecting surface temperature from sensors into Azure IoT Edge](media/how-to-observability/la-nina-high-level.png)
The La Niña service measures surface temperature in the Pacific Ocean to predict La Niña winters. A number of buoys in the ocean carry IoT Edge devices that send the surface temperature to the Azure cloud. The telemetry data with the temperature is pre-processed by a custom module on the IoT Edge device before it's sent to the cloud. In the cloud, the data is processed by backend Azure Functions and saved to Azure Blob Storage. The clients of the service (ML inference workflows, decision-making systems, various UIs, and so on) can pick up messages with temperature data from Azure Blob Storage.
It's a common practice to measure service level indicators, like the ones we've
Let's clarify what components the La Niña service consists of:
-![Diagram of La Nina components including IoT Edge device and Azure Services](media/how-to-observability/la-nina-metrics.png)
There is an IoT Edge device with a `Temperature Sensor` custom module (C#) that generates a temperature value and sends it upstream in a telemetry message. This message is routed to another custom module, `Filter` (C#), which checks the received temperature against a threshold window (0-100 degrees Celsius). If the temperature is within the window, the FilterModule sends the telemetry message to the cloud.
In this scenario, we have a fleet of 10 buoys. One of the buoys has been intenti
We're going to monitor Service Level Objectives (SLO) and corresponding Service Level Indicators (SLI) with Azure Monitor Workbooks. This scenario deployment includes the *La Nina SLO/SLI* workbook assigned to the IoT Hub.
-![Screenshot of IoT Hub monitoring showing the Workbooks | Gallery in the Azure portal](media/how-to-observability/dashboard-path.png)
To achieve the best user experience the workbooks are designed to follow the _glance_ -> _scan_ -> _commit_ concept:
To achieve the best user experience the workbooks are designed to follow the _gl
At this level, we can see the whole picture at a single glance. The data is aggregated and represented at the fleet level:
-![Screenshot of the monitoring summary report in the Azure portal showing an issue with device coverage and data freshness](media/how-to-observability/glance.png)
From what we can see, the service is not functioning according to expectations. There is a violation of the *Data Freshness* SLO. Only 90% of the devices send the data frequently, and the service clients expect 95%. All SLO and threshold values are configurable on the workbook settings tab:
-![Screenshot of the workbook settings in the Azure portal](media/how-to-observability/workbook-settings.png)
#### Scan By clicking on the violated SLO, we can drill down to the *scan* level and see how the devices contribute to the aggregated SLI value.
-![Screenshot of message frequency by device](media/how-to-observability/scan.png)
There is a single device (out of 10) that sends the telemetry data to the cloud "rarely". In our SLO definition, we've stated that "frequently" means at least 10 times per minute. The frequency of this device is way below that threshold.
There is a single device (out of 10) that sends the telemetry data to the cloud
By clicking on the problematic device, we're drilling down to the *commit* level. This is the curated *Device Details* workbook that comes out of the box with the IoT Hub monitoring offering. The *La Nina SLO/SLI* workbook reuses it to show the performance details of the specific device.
-![Screenshot of messaging telemetry for a device in the Azure portal](media/how-to-observability/commit.png)
## Troubleshooting
The *commit* level workbook gives a lot of detailed information about the device
In this scenario, all parameters of the problematic device look normal, and it's not clear why the device sends messages less frequently than expected. This fact is also confirmed by the *messaging* tab of the device-level workbook:
-![Screenshot of sample messages in the Azure portal](media/how-to-observability/messages.png)
The `Temperature Sensor` (tempSensor) module produced 120 telemetry messages, but only 49 of them went upstream to the cloud. The first thing we want to do is check the logs produced by the `Filter` module. Click the **Troubleshoot live!** button and select the `Filter` module.
-![Screenshot of the filter module log in the Azure portal](media/how-to-observability/basic-logs.png)
Analysis of the module logs doesn't reveal the issue. The module receives messages, and there are no errors. Everything looks good here.
There are two observability instruments serving the deep troubleshooting purpose
The La Niña service uses [OpenTelemetry](https://opentelemetry.io) to produce and collect traces and logs in Azure Monitor.
-![Diagram illustrating an IoT Edge device sending telemetry data to Azure Monitor](media/how-to-observability/la-nina-detailed.png)
IoT Edge modules `Temperature Sensor` and `Filter` export the logs and tracing data via OTLP (OpenTelemetry Protocol) to the [OpenTelemetryCollector](https://opentelemetry.io/docs/collector/) module, running on the same edge device. The `OpenTelemetryCollector` module, in turn, exports logs and traces to the Azure Monitor Application Insights service.
By default, IoT Edge modules on the devices of the La Niña service are configur
We've analyzed the `Information` level logs of the `Filter` module and realized that we need to dive deeper to locate the cause of the issue. We're going to update properties in the `Temperature Sensor` and `Filter` module twins, increasing the `loggingLevel` to `Debug` and changing the `traceSampleRatio` from `0` to `1`:
-![Screenshot of module troubleshooting showing updating FilterModule twin properties](media/how-to-observability/update-twin.png)
With that in place, we have to restart the `Temperature Sensor` and `Filter` modules:
-![Screenshot of module troubleshooting showing Restart FilterModule button](media/how-to-observability/restart-module.png)
In a few minutes, the traces and detailed logs will arrive in Azure Monitor from the problematic device. The entire end-to-end message flow, from the sensor on the device to the storage in the cloud, will be available for monitoring with the *application map* in Application Insights:
-![Screenshot of application map in Application Insights](media/how-to-observability/application-map.png)
From this map, we can drill down to the traces and see that some of them look normal and contain all the steps of the flow, while some of them are very short, so nothing happens after the `Filter` module.
-![ Screenshot of monitoring traces](media/how-to-observability/traces.png)
Let's analyze one of those short traces and find out what was happening in the `Filter` module, and why it didn't send the message upstream to the cloud. Our logs are correlated with the traces, so we can query logs specifying the `TraceId` and `SpanId` to retrieve logs corresponding exactly to this execution instance of the `Filter` module:
-![Sample trace query filtering based on Trace ID and Span ID.](media/how-to-observability/logs.png)
The logs show that the module received a message with a temperature of 70.465 degrees. But the filtering threshold configured on this device is 30 to 70, so the message simply didn't pass the threshold. Apparently, this specific device was configured incorrectly. This is the cause of the issue we detected while monitoring the La Niña service performance with the workbook. Let's fix the `Filter` module configuration on this device by updating properties in the module twin. We also want to reduce the `loggingLevel` back to `Information` and the `traceSampleRatio` back to `0`:
-![Sample JSON showing the logging level and trace sample ratio values](media/how-to-observability/fix-issue.png)
Having done that, we need to restart the module. In a few minutes, the device reports new metric values to Azure Monitor. The change is reflected in the workbook charts:
-![Screenshot of Azure Monitor workbook chart](media/how-to-observability/fixed-workbook.png)
We see that the message frequency on the problematic device is back to normal. If nothing else happens, the overall SLO value will become green again within the configured observation interval:
-![Screenshot of the monitoring summary report in the Azure portal](media/how-to-observability/green-workbook.png)
## Try the sample
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
In the Azure portal, invoke the method with the method name `GetModuleLogs` and
} ```
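From the Azure CLI, the same direct method can be invoked with a payload like the one above; a hedged sketch with placeholder hub, device, and module names:

```bash
# Retrieve the last 10 log lines from the edgeAgent module as plain text.
az iot hub invoke-module-method --method-name 'GetModuleLogs' -n myIoTHub -d myEdgeDevice -m '$edgeAgent' \
    --method-payload '{"schemaVersion": "1.0", "items": [{"id": "edgeAgent", "filter": {"tail": 10}}], "encoding": "none", "contentType": "text"}'
```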
-![Invoke direct method 'GetModuleLogs' in Azure portal](./media/how-to-retrieve-iot-edge-logs/invoke-get-module-logs.png)
You can also pipe the CLI output to Linux utilities, like [gzip](https://en.wikipedia.org/wiki/Gzip), to process a compressed response. For example:
In the Azure portal, invoke the method with the method name `UploadModuleLogs` a
} ```
-![Invoke direct method 'UploadModuleLogs' in Azure portal](./media/how-to-retrieve-iot-edge-logs/invoke-upload-module-logs.png)
## Upload support bundle diagnostics
In the Azure portal, invoke the method with the method name `UploadSupportBundle
} ```
-![Invoke direct method 'UploadSupportBundle' in Azure portal](./media/how-to-retrieve-iot-edge-logs/invoke-upload-support-bundle.png)
## Get upload request status
In the Azure portal, invoke the method with the method name `GetTaskStatus` and
} ```
-![Invoke direct method 'GetTaskStatus' in Azure portal](./media/how-to-retrieve-iot-edge-logs/invoke-get-task-status.png)
## Next steps
iot-edge How To Share Windows Folder To Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-share-windows-folder-to-vm.md
If you don't have an EFLOW device ready, you should create one before continuing
The Azure IoT Edge for Linux on Windows file and folder sharing mechanism is implemented using [virtiofs](https://virtio-fs.gitlab.io/) technology. *Virtiofs* is a shared file system that lets virtual machines access a directory tree on the host OS. Unlike other approaches, it's designed to offer local file system semantics and performance. *Virtiofs* isn't a network file system repurposed for virtualization. It's designed to take advantage of the locality of virtual machines and the hypervisor. It takes advantage of the virtual machine's co-location with the hypervisor to avoid overhead associated with network file systems.
-![Windows folder shared with the EFLOW virtual machine using Virtio-FS technology](media/how-to-share-windows-folder-to-vm/folder-sharing-virtiofs.png)
Only Windows folders can be shared with the EFLOW Linux VM, not the other way around. Also, for security reasons, when setting up the folder sharing mechanism, the user must provide a _root folder_, and all the shared folders must be under that _root folder_.
The following steps provide example EFLOW PowerShell commands to share one or mo
1. Start by creating a new root shared folder. Go to **File Explorer** and choose a location for the *root folder* and create the folder.
- For example, create a *root folder* under _C:\Shared_ named **EFLOW-Shared**.
+ For example, create a *root folder* under _C:\Shared_ named **EFLOW-Shared**.
- ![Windows root folder](media/how-to-share-windows-folder-to-vm/root-folder.png)
+ :::image type="content" source="media/how-to-share-windows-folder-to-vm/root-folder.png" alt-text="Screenshot of the Windows root folder.":::
1. Create one or more *shared folders* to be shared with the EFLOW virtual machine. Shared folders should be created under the *root folder* from the previous step.
- For example, create two folders one named **Read-Access** and one named **Read-Write-Access**.
+ For example, create two folders: one named **Read-Access** and one named **Read-Write-Access**.
- ![Windows shared folders](media/how-to-share-windows-folder-to-vm/shared-folders.png)
+ :::image type="content" source="media/how-to-share-windows-folder-to-vm/shared-folders.png" alt-text="Screenshot of Windows shared folders.":::
1. Within the _Read-Access_ shared folder, create a sample file that we'll later read inside the EFLOW virtual machine.
iot-edge How To Use Create Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-use-create-options.md
If you use the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemN
One tip for writing create options is to use the `docker inspect` command. As part of your development process, run the module locally using `docker run <container name>`. Once you have the module working the way you want it, run `docker inspect <container name>`. This command outputs the module details in JSON format. Find the parameters that you configured, and copy the JSON. For example:
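A hedged sketch of that workflow, using a hypothetical module image and container name:

```bash
# Run the module locally in the background with the options you want to capture.
docker run -d --name filtermodule filtermodule:latest
# Inspect the running container; the HostConfig section holds most createOptions-style settings.
docker inspect filtermodule
# Optionally narrow the output to the HostConfig section with a Go template.
docker inspect --format '{{json .HostConfig}}' filtermodule
```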
-[![Results of docker inspect edgeHub](./media/how-to-use-create-options/docker-inspect-edgehub-inline-and-expanded.png)](./media/how-to-use-create-options/docker-inspect-edgehub-inline-and-expanded.png#lightbox)
## Common scenarios
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-visual-studio-develop-module.md
In our solution, we're going to build three projects. The main module that conta
1. Select **Add** to add your module to the project.
- ![Add Application and Module](./media/how-to-visual-studio-develop-csharp-module/add-module.png)
+ :::image type="content" source="./media/how-to-visual-studio-develop-csharp-module/add-module.png" alt-text="Screenshot of how to add Application and Module.":::
> [!NOTE] >If you have an existing IoT Edge project, you can change the repository URL by opening the **module.json** file. The repository URL is located in the *repository* property of the JSON file.
Now, you have an IoT Edge project and an IoT Edge module in your Visual Studio s
In your solution, there are two project-level folders: a main project folder and a single module folder. For example, you may have a main project folder named *AzureIotEdgeApp1* and a module folder named *IotEdgeModule1*. The main project folder contains your deployment manifest.
-The module project folder contains a file for your module code named either `program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files included here provide the information needed to build your module as a Windows or Linux container.
+The module project folder contains a file for your module code named either `Program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files included here provide the information needed to build your module as a Windows or Linux container.
### Deployment manifest of your project
Typically, you'll want to test and debug each module before running it within an
1. Set a breakpoint to inspect the module.
- * If developing in C#, set a breakpoint in the `PipeMessage()` function in **Program.cs**.
+ * If developing in C#, set a breakpoint in the `PipeMessage()` function in **ModuleBackgroundService.cs**.
* If using C, set a breakpoint in the `InputQueue1Callback()` function in **main.c**. 1. Test the module by sending a message. When debugging a single module, the simulator listens on the default port 53000 for messages. To send a message to your module, run the following curl command from a command shell like **Git Bash** or **WSL Bash**.
After you're done developing a single module, you might want to run and debug an
1. Set a breakpoint to inspect the modules.
- * If developing in C#, set a breakpoint in the `PipeMessage()` function in **Program.cs**.
+ * If developing in C#, set a breakpoint in the `PipeMessage()` function in **ModuleBackgroundService.cs**.
* If using C, set a breakpoint in the `InputQueue1Callback()` function in **main.c**. 1. Create breakpoints in each module and then press **F5** to run and debug multiple modules simultaneously. You should see multiple .NET Core console app windows, with each window representing a different module.
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Use Visual Studio Code and the [Azure IoT Edge](https://marketplace.visualstudio
1. Select **View** > **Command Palette**. 1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge Solution**.
- ![Run New IoT Edge Solution](./media/how-to-develop-csharp-module/new-solution.png)
+ :::image type="content" source="./media/how-to-develop-csharp-module/new-solution.png" alt-text="Screenshot of how to run a new IoT Edge solution.":::
1. Browse to the folder where you want to create the new solution and then select **Select folder**. 1. Enter a name for your solution. 1. Select a module template for your preferred development language to be the first module in the solution. 1. Enter a name for your module. Choose a name that's unique within your container registry.
-1. Provide the name of the module's image repository. Visual Studio Code autopopulates the module name with **localhost:5000/<your module name\>**. Replace it with your own registry information. Use **localhost** if you use a local Docker registry for testing. If you use Azure Container Registry, then use sign in server from your registry's settings. The sign in server looks like **_\<registry name\>_.azurecr.io**. Only replace the **localhost:5000** part of the string so that the final result looks like **\<*registry name*\>.azurecr.io/_\<your module name\>_**.
+1. Provide the name of the module's image repository. Visual Studio Code autopopulates the module name with **localhost:5000/<your module name\>**. Replace it with your own registry information. Use **localhost** if you use a local Docker registry for testing. If you use Azure Container Registry, then use the sign-in server from your registry's settings. The sign-in server looks like **_\<registry name\>_.azurecr.io**. Only replace the **localhost:5000** part of the string so that the final result looks like **\<*registry name*\>.azurecr.io/_\<your module name\>_**.
- ![Provide Docker image repository](./media/how-to-develop-csharp-module/repository.png)
+ :::image type="content" source="./media/how-to-develop-csharp-module/repository.png" alt-text="Screenshot of how to provide a Docker image repository.":::
Visual Studio Code takes the information you provided, creates an IoT Edge solution, and then loads it in a new window. There are four items within the solution: - A **.vscode** folder contains debug configurations.-- A **modules** folder has subfolders for each module. Within the folder for each module, there's a file called **module.json** that controls how modules are built and deployed. This file would need to be modified to change the module deployment container registry from localhost to a remote registry. At this point, you only have one module. But you can add more if needed
+- A **modules** folder has subfolders for each module. Within the folder for each module, there's a file called **module.json** that controls how modules are built and deployed. You need to modify this file to change the module deployment container registry from localhost to a remote registry. At this point, you only have one module. But you can add more if needed.
- An **.env** file lists your environment variables. The environment variable for the container registry is *localhost* by default. If Azure Container Registry is your registry, set an Azure Container Registry username and password. For example, ```env
modules/*&lt;your module name&gt;*/**main.py**
-The sample modules are designed so that you can build the solution, push to your container registry, and deploy to a device. This process lets you start testing without modifying any code. The sample module takes input from a source (in this case, the *SimulatedTemperatureSensor* module that simulates data) and pipes it to IoT Hub.
+The sample modules allow you to build the solution, push it to your container registry, and deploy it to a device. This process lets you start testing without modifying any code. The sample module takes input from a source (in this case, the *SimulatedTemperatureSensor* module that simulates data) and pipes it to IoT Hub.
When you're ready to customize the template with your own code, use the [Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md) to build modules that address the key needs for IoT solutions such as security, device management, and reliability.
Debugging in attach mode isn't supported for C or Python.
# [C\# / Azure Functions / Node.js / Java](#tab/csharp+azfunctions+node+java)
-Your default solution contains two modules, one is a simulated temperature sensor module and the other is the pipe module. The simulated temperature sensor sends messages to the pipe module and then the messages are piped to the IoT Hub. In the module folder you created, there are several Docker files for different container types. Use any of the files that end with the extension **.debug** to build your module for testing.
+Your default solution contains two modules: a simulated temperature sensor module and a pipe module. The simulated temperature sensor sends messages to the pipe module, and then the messages go to IoT Hub. In the module folder you created, there are several Docker files for different container types. Use any of the files that end with the extension **.debug** to build your module for testing.
Currently, debugging in attach mode is supported only as follows:
On your development machine, you can start an IoT Edge simulator instead of inst
1. In the **Explorer** tab on the left side, expand the **Azure IoT Hub** section. Right-click on your IoT Edge device ID, and then select **Setup IoT Edge Simulator** to start the simulator with the device connection string.
-1. You can see the IoT Edge Simulator has been successfully set up by reading the progress detail in the integrated terminal.
+1. You can verify that the IoT Edge simulator was set up successfully by reading the progress details in the integrated terminal.
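The extension relies on the iotedgehubdev tool for this step; a hedged command-line equivalent, assuming the tool is installed and using a placeholder device connection string:

```bash
# Assumes iotedgehubdev is installed (for example, via: pip install iotedgehubdev).
iotedgehubdev setup -c "HostName=<your IoT hub>.azure-devices.net;DeviceId=<device id>;SharedAccessKey=<key>"
```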
### Build and run container for debugging and debug in attach mode
On your development machine, you can start an IoT Edge simulator instead of inst
1. In the Visual Studio Code Explorer view, right-click the `deployment.debug.template.json` file for your solution and then select **Build and Run IoT Edge solution in Simulator**. You can watch all the module container logs in the same window. You can also navigate to the Docker view to watch container status.
- ![Watch Variables](media/how-to-vs-code-develop-module/view-log.png)
+ :::image type="content" source="media/how-to-vs-code-develop-module/view-log.png" alt-text="Screenshot showing module container logs in Visual Studio Code.":::
1. Navigate to the Visual Studio Code Debug view and select the debug configuration file for your module. The debug option name should be similar to ***&lt;your module name&gt;* Remote Debug**
In each module folder, there are several Docker files for different container ty
When you debug modules using this method, your modules are running on top of the IoT Edge runtime. The IoT Edge device and your Visual Studio Code can be on the same machine, or more typically, Visual Studio Code is on the development machine and the IoT Edge runtime and modules are running on another physical machine. In order to debug from Visual Studio Code, you must: - Set up your IoT Edge device, build your IoT Edge modules with the **.debug** Dockerfile, and then deploy to the IoT Edge device.-- Update the `launch.json` so that Visual Studio Code can attach to the process in the container on the remote machine. This file is located in the `.vscode` folder in your workspace and updates each time you add a new module that supports debugging.
+- Update the `launch.json` so that Visual Studio Code can attach to the process in the container on the remote machine. You can find this file in the `.vscode` folder in your workspace; it updates each time you add a new module that supports debugging.
- Use Remote SSH debugging to attach to the container on the remote machine. ### Build and deploy your module to an IoT Edge device
To enable Visual Studio Code remote debugging, install the [Remote Development e
For details on how to use Remote SSH debugging in Visual Studio Code, see [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh)
-In the Visual Studio Code Debug view, select the debug configuration file for your module. By default, the **.debug** Dockerfile, module's container `createOptions` settings, and `launch.json` file are configured to use *localhost*.
+In the Visual Studio Code Debug view, select the debug configuration file for your module. By default, the **.debug** Dockerfile, the module's container `createOptions` settings, and the `launch.json` file use *localhost*.
Select **Start Debugging** or select **F5**. Select the process to attach to. In the Visual Studio Code Debug view, you see variables in the left panel.
The Docker and Moby engines support SSH connections to containers allowing you t
### Configure Docker SSH tunneling 1. Follow the steps in [Docker SSH tunneling](https://code.visualstudio.com/docs/containers/ssh#_set-up-ssh-tunneling) to configure SSH tunneling on your development computer. SSH tunneling requires public/private key pair authentication and a Docker context defining the remote device endpoint.
-1. Connecting to Docker requires root-level privileges. Follow the steps in [Manage docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall) to allow connection to the Docker daemon on the remote device. When you're finished debugging, you may want to remove your user from the Docker group.
+1. Connecting to Docker requires root-level privileges. Follow the steps in [Manage docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall) to allow connection to the Docker daemon on the remote device. When you finish debugging, you may want to remove your user from the Docker group.
1. In Visual Studio Code, use the Command Palette (Ctrl+Shift+P) to issue the *Docker Context: Use* command to activate the Docker context pointing to the remote machine. This command causes both Visual Studio Code and Docker CLI to use the remote machine context. > [!TIP]
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
For more information, see [TPM attestation device requirements](how-to-provision
IoT Edge and IoT Hub routing syntax is almost identical. Supported query syntax:
-* [Message routing query based on message properties](../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-message-properties)
-* [Message routing query based on message body](../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-message-body)
+* [Message routing query based on message properties](../iot-hub/iot-hub-devguide-routing-query-syntax.md#query-based-on-message-properties)
+* [Message routing query based on message body](../iot-hub/iot-hub-devguide-routing-query-syntax.md#query-based-on-message-body)
Not supported query syntax:
-* [Message routing query based on device twin](../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-device-twin)
+* [Message routing query based on device twin](../iot-hub/iot-hub-devguide-routing-query-syntax.md#query-based-on-device-or-module-twin)
### Restart policies
Changes made in `config.toml` to `edgeAgent` environment variables like the `hos
### NTLM Authentication
-IoT Edge does not currently support network proxies that use NTLM authentication. Users may consider bypassing the proxy by adding the required endpoints to the firewall allow-list.
+IoT Edge does not currently support network proxies that use NTLM authentication. Users may consider bypassing the proxy by adding the required endpoints to the firewall allowlist.
## Next steps
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-csharp-module.md
Currently, Visual Studio Code can develop C# modules for Linux AMD64 and Linux A
### Update the module with custom code
-1. In the Visual Studio Code explorer, open **modules** > **CSharpModule** > **Program.cs**.
+1. In the Visual Studio Code explorer, open **modules** > **CSharpModule** > **ModuleBackgroundService.cs**.
1. At the top of the **CSharpModule** namespace, add three **using** statements for types that are used later:
Currently, Visual Studio Code can develop C# modules for Linux AMD64 and Linux A
} ```
-1. Save the Program.cs file.
+1. Save the ModuleBackgroundService.cs file.
1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
iot-edge Tutorial Develop For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux-on-windows.md
The IoT Edge project template in Visual Studio creates a solution that can be de
Now, you have an IoT Edge project and an IoT Edge module in your Visual Studio solution.
-The module folder contains a file for your module code, named either `program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files provide the information needed to build your module as a Windows or Linux container.
+The module folder contains a file for your module code, named either `Program.cs` or `main.c` depending on the language you chose. This folder also contains a file named `module.json` that describes the metadata of your module. Various Docker files provide the information needed to build your module as a Windows or Linux container.
The project folder contains a list of all the modules included in that project. Right now it should show only one module, but you can add more.
Typically, you'll want to test and debug each module before running it within an
1. Set a breakpoint to inspect the module.
- * If developing in C#, set a breakpoint in the `PipeMessage()` function in **Program.cs**.
+ * If developing in C#, set a breakpoint in the `PipeMessage()` function in **ModuleBackgroundService.cs**.
* If using C, set a breakpoint in the `InputQueue1Callback()` function in **main.c**. 1. The output of the **SimulatedTemperatureSensor** should be redirected to **input1** of the custom Linux C# module. The breakpoint should be triggered. You can watch variables in the Visual Studio **Locals** window.
iot-hub How To Routing Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-portal.md
To learn how to create an Azure Cosmos DB resource, see [Create an Azure Cosmos
* **Collection**: Select your Azure Cosmos DB collection.
- * **Partition key name** and **Partition key template**: These values are created automatically based on your previous selections. You can leave the auto-generated values or you can change the partition template based on your business logic. For more information about partitioning, see [Partitioning and horizontal scaling in Azure Cosmos DB](../cosmos-db/partitioning-overview.md).
+ * **Generate a synthetic partition key for messages**: Select **Enable** if needed.
+
+ To effectively support high-scale scenarios, you can enable [synthetic partition keys](../cosmos-db/nosql/synthetic-partition-keys.md) for the Cosmos DB endpoint. You can specify the partition key property name in **Partition key name**. The partition key property name is defined at the container level and can't be changed once it has been set.
+
+ You can configure the synthetic partition key value by specifying a template in **Partition key template** based on your estimated data volume. The generated partition key value is automatically added to the partition key property for each new Cosmos DB record.
+
+ For more information about partitioning, see [Partitioning and horizontal scaling in Azure Cosmos DB](../cosmos-db/partitioning-overview.md).
:::image type="content" source="media/how-to-routing-portal/add-cosmos-db-endpoint-form.png" alt-text="Screenshot that shows details of the Add a Cosmos DB endpoint form." lightbox="media/how-to-routing-portal/add-cosmos-db-endpoint-form.png":::
+ > [!CAUTION]
+ > If you're using the system-assigned managed identity for authenticating to Cosmos DB, you must use Azure CLI or Azure PowerShell to assign the Cosmos DB Built-in Data Contributor role definition to the identity. Role assignment for Cosmos DB isn't currently supported from the Azure portal. For more information about the various roles, see [Configure role-based access for Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources](/cli/azure/cosmosdb/sql/role).
+ 1. Select **Save**. 1. In **Message routing**, on the **Routes** tab, confirm that your new route appears.
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
The result, which would grant access to read all device identities, would be:
### Supported X.509 certificates
-You can use any X.509 certificate to authenticate a device with IoT Hub by uploading either a certificate thumbprint or a certificate authority (CA) to Azure IoT Hub. To learn more, see [Device Authentication using X.509 CA Certificates](iot-hub-x509ca-overview.md). For information about how to upload and verify a certificate authority with your IoT hub, see [Set up X.509 security in your Azure IoT hub](./tutorial-x509-scripts.md).
+You can use any X.509 certificate to authenticate a device with IoT Hub by uploading either a certificate thumbprint or a certificate authority (CA) to Azure IoT Hub. To learn more, see [Device Authentication using X.509 CA Certificates](iot-hub-x509ca-overview.md). For information about how to upload and verify a certificate authority with your IoT hub, see [Set up X.509 security in your Azure IoT hub](./tutorial-x509-prove-possession.md).
### Enforcing X.509 authentication
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-d2c.md
Title: Understand Azure IoT Hub message routing | Microsoft Docs
+ Title: Understand Azure IoT Hub message routing
description: This article describes how to use message routing to send device-to-cloud messages. Includes information about sending both telemetry and non-telemetry data. - Previously updated : 11/21/2022 Last updated : 02/22/2023 # Use IoT Hub message routing to send device-to-cloud messages to different endpoints -
-Message routing enables you to send messages from your devices to cloud services in an automated, scalable, and reliable manner. Message routing can be used for:
+Message routing enables you to send messages from your devices to cloud services in an automated, scalable, and reliable manner. Message routing can be used for:
-* **Sending device telemetry messages as well as events** namely, device lifecycle events, device twin change events, digital twin change events, and device connection state events to the built-in endpoint and custom endpoints. Learn about [routing endpoints](#routing-endpoints). To learn more about the events sent from IoT Plug and Play devices, see [Understand IoT Plug and Play digital twins](../iot-develop/concepts-digital-twin.md).
+* **Sending device telemetry messages and events** to the built-in endpoint and custom endpoints. Events that can be routed include device lifecycle events, device twin change events, digital twin change events, and device connection state events.
-* **Filtering data before routing it to various endpoints** by applying rich queries. Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. Learn more about using [queries in message routing](iot-hub-devguide-routing-query-syntax.md).
+* **Filtering data before routing it to various endpoints** by applying rich queries. Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. For more information, see [queries in message routing](iot-hub-devguide-routing-query-syntax.md).
-IoT Hub needs write access to these service endpoints for message routing to work. If you configure your endpoints through the Azure portal, the necessary permissions are added for you. Make sure you configure your services to support the expected throughput. For example, if you're using Event Hubs as a custom endpoint, you must configure the **throughput units** for that event hub so it can handle the ingress of events you plan to send via IoT Hub message routing. Similarly, when using a Service Bus Queue as an endpoint, you must configure the **maximum size** to ensure the queue can hold all the data ingressed, until it's egressed by consumers. When you first configure your IoT solution, you may need to monitor your other endpoints and make any necessary adjustments for the actual load.
+IoT Hub defines a [common format](iot-hub-devguide-messages-construct.md) for all device-to-cloud messaging for interoperability across protocols.
-The IoT Hub defines a [common format](iot-hub-devguide-messages-construct.md) for all device-to-cloud messaging for interoperability across protocols. If a message matches multiple routes that point to the same endpoint, IoT Hub delivers message to that endpoint only once. Therefore, you don't need to configure deduplication on your Service Bus queue or topic. Use this tutorial to learn how to [configure message routing](tutorial-routing.md).
## Routing endpoints
-An IoT hub has a default built-in endpoint (**messages/events**) that is compatible with Event Hubs. You can create [custom endpoints](iot-hub-devguide-endpoints.md#custom-endpoints) to route messages to by linking other services in your subscription to the IoT hub.
-
-Each message is routed to all endpoints whose routing queries it matches. In other words, a message can be routed to multiple endpoints.
+Each IoT hub has a default built-in endpoint (**messages/events**) that is compatible with Event Hubs. You also can create [custom endpoints](iot-hub-devguide-endpoints.md#custom-endpoints) that point to other services in your Azure subscription.
-If your custom endpoint has firewall configurations, consider using the [Microsoft trusted first party exception.](./virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources)
+Each message is routed to all endpoints whose routing queries it matches. In other words, a message can be routed to multiple endpoints. If a message matches multiple routes that point to the same endpoint, IoT Hub delivers the message to that endpoint only once.
IoT Hub currently supports the following endpoints:
-
-## Built-in endpoint as a routing endpoint
+* Built-in endpoint
+* Storage containers
+* Service Bus queues
+* Service Bus topics
+* Event Hubs
+* Cosmos DB (preview)
-You can use standard [Event Hubs integration and SDKs](iot-hub-devguide-messages-read-builtin.md) to receive device-to-cloud messages from the built-in endpoint (**messages/events**). Once a route is created, data stops flowing to the built-in endpoint unless a route is created to that endpoint. Even if no routes are created, a fallback route must be enabled to route messages to the built-in endpoint. The fallback is enabled by default if you create your hub using the portal or the CLI.
+IoT Hub needs write access to these service endpoints for message routing to work. If you configure your endpoints through the Azure portal, the necessary permissions are added for you. If you configure your endpoints using PowerShell or the Azure CLI, you need to provide the write access permission.
+
+To learn how to create endpoints, see the following articles:
+
+* [Manage routes and endpoints using the Azure portal](how-to-routing-portal.md)
+* [Manage routes and endpoints using the Azure CLI](how-to-routing-azure-cli.md)
+* [Manage routes and endpoints using PowerShell](how-to-routing-powershell.md)
+* [Manage routes and endpoints using Azure Resource Manager](how-to-routing-arm.md)
-## Azure Storage as a routing endpoint
+Make sure you configure your services to support the expected throughput. For example, if you're using Event Hubs as a custom endpoint, you must configure the **throughput units** for that event hub so it can handle the ingress of events you plan to send via IoT Hub message routing. Similarly, when using a Service Bus queue as an endpoint, you must configure the **maximum size** to ensure the queue can hold all the data ingressed, until it's egressed by consumers. When you first configure your IoT solution, you may need to monitor your other endpoints and make any necessary adjustments for the actual load.
-There are two storage services IoT Hub can route messages to: [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) and [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (ADLS Gen2) accounts. Azure Data Lake Storage accounts are [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md)-enabled storage accounts built on top of blob storage. Both of these use blobs for their storage.
+If your custom endpoint has firewall configurations, consider using the [Microsoft trusted first party exception.](./virtual-network-support.md#egress-connectivity-from-iot-hub-to-other-azure-resources)
-IoT Hub supports writing data to Azure Storage in the [Apache Avro](https://avro.apache.org/) format and the JSON format. The default is AVRO. When using JSON encoding, you must set the contentType property to **application/json** and contentEncoding property to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these values are case-insensitive. If the content encoding isn't set, then IoT Hub will write the messages in base 64 encoded format.
+### Built-in endpoint
-The encoding format can be only set when the blob storage endpoint is configured; it can't be edited for an existing endpoint. To switch encoding formats for an existing endpoint, you'll need to first delete the endpoint, and then re-create it with the format you want. One helpful strategy might be to create a new custom endpoint with your desired encoding format and add a parallel route to that endpoint. In this way, you can verify your data before deleting the existing endpoint.
+You can use standard [Event Hubs integration and SDKs](iot-hub-devguide-messages-read-builtin.md) to receive device-to-cloud messages from the built-in endpoint (**messages/events**). Once a route is created, data stops flowing to the built-in endpoint unless a route is created to that endpoint. Even if no routes are created, a fallback route must be enabled to route messages to the built-in endpoint. The fallback is enabled by default if you create your hub using the portal or the CLI.
-You can select the encoding format using the IoT Hub Create or Update REST API, specifically the [RoutingStorageContainerProperties](/rest/api/iothub/iothubresource/createorupdate#routingstoragecontainerproperties), the [Azure portal](https://portal.azure.com), [Azure CLI](/cli/azure/iot/hub/routing-endpoint), or [Azure PowerShell](/powershell/module/az.iothub/add-aziothubroutingendpoint). The following image shows how to select the encoding format in the Azure portal.
+### Azure Storage as a routing endpoint
+There are two storage services IoT Hub can route messages to: [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) and [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (ADLS Gen2) accounts. Azure Data Lake Storage accounts are [hierarchical namespace-enabled](../storage/blobs/data-lake-storage-namespace.md) storage accounts built on top of blob storage. Both of these use blobs for their storage.
-IoT Hub batches messages and writes data to storage whenever the batch reaches a certain size or a certain amount of time has elapsed. IoT Hub defaults to the following file naming convention:
+IoT Hub supports writing data to Azure Storage in the [Apache Avro](https://avro.apache.org/) format and the JSON format. The default is AVRO. When using JSON encoding, you must set the contentType property to **application/json** and contentEncoding property to **UTF-8** in the message [system properties](iot-hub-devguide-routing-query-syntax.md#system-properties). Both of these values are case-insensitive. If the content encoding isn't set, then IoT Hub writes the messages in base 64 encoded format.
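As a concrete illustration, a device can stamp both system properties when it sends telemetry so that blobs routed to the storage endpoint are written as JSON rather than base 64. The following is a minimal sketch, assuming the Microsoft.Azure.Devices.Client device SDK; the environment variable name and payload are illustrative, not taken from this article.

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class JsonTelemetrySender
{
    static async Task Main()
    {
        string connectionString = Environment.GetEnvironmentVariable("IOTHUB_DEVICE_CONNECTION_STRING");
        using var deviceClient = DeviceClient.CreateFromConnectionString(connectionString, TransportType.Mqtt);

        string body = "{\"temperature\": 21.5, \"humidity\": 60}";
        using var message = new Message(Encoding.UTF8.GetBytes(body))
        {
            // Without both system properties, IoT Hub writes the routed payload
            // to the storage endpoint in base 64 encoded format.
            ContentType = "application/json",
            ContentEncoding = "utf-8"
        };

        await deviceClient.SendEventAsync(message);
    }
}
```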
-```
-{iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm}
-```
+The encoding format can be set only when the blob storage endpoint is configured; it can't be edited for an existing endpoint.
-You may use any file naming convention, however you must use all listed tokens. IoT Hub will write to an empty blob if there's no data to write.
+IoT Hub batches messages and writes data to storage whenever the batch reaches a certain size or a certain amount of time has elapsed. IoT Hub defaults to the following file naming convention: `{iothub}/{partition}/{YYYY}/{MM}/{DD}/{HH}/{mm}`.
-We recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a [Microsoft-initiated failover](iot-hub-ha-dr.md#microsoft-initiated-failover) or IoT Hub [manual failover](iot-hub-ha-dr.md#manual-failover). You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/path) for the list of files. See the following sample as guidance.
+You may use any file naming convention; however, you must use all listed tokens. IoT Hub writes to an empty blob if there's no data to write.
+
+We recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a [Microsoft-initiated failover](iot-hub-ha-dr.md#microsoft-initiated-failover) or IoT Hub [manual failover](iot-hub-ha-dr.md#manual-failover). You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/path) for the list of files. For example:
```csharp public void ListBlobsInContainer(string containerName, string iothub)
public void ListBlobsInContainer(string containerName, string iothub)
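// The sample above is truncated in this changelog. As a rough, hedged sketch of the
// same idea using the newer Azure.Storage.Blobs package (the connection string,
// container name, and hub name parameters are illustrative assumptions), you could
// enumerate every blob under the {iothub}/ prefix instead of assuming a fixed
// partition range. Requires: using System; using System.Threading.Tasks;
// using Azure.Storage.Blobs;
public static async Task ListRoutedBlobsAsync(string storageConnectionString, string containerName, string iothub)
{
    var containerClient = new BlobContainerClient(storageConnectionString, containerName);

    // List everything the hub has written, regardless of partition number.
    await foreach (var blobItem in containerClient.GetBlobsAsync(prefix: $"{iothub}/"))
    {
        Console.WriteLine(blobItem.Name);
    }
}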
To create an Azure Data Lake Gen2-compatible storage account, create a new V2 storage account and select **Enable hierarchical namespace** from the **Data Lake Storage Gen2** section of the **Advanced** tab, as shown in the following image:
-## Service Bus Queues and Service Bus Topics as a routing endpoint
+### Service Bus queues and Service Bus topics as a routing endpoint
Service Bus queues and topics used as IoT Hub endpoints must not have **Sessions** or **Duplicate Detection** enabled. If either of those options are enabled, the endpoint appears as **Unreachable** in the Azure portal.
-## Event Hubs as a routing endpoint
+### Event Hubs as a routing endpoint
-Apart from the built-in-Event Hubs compatible endpoint, you can also route data to custom endpoints of type Event Hubs.
+Apart from the built-in Event Hubs-compatible endpoint, you can also route data to custom endpoints of type Event Hubs.
-## Azure Cosmos DB as a routing endpoint (preview)
+### Azure Cosmos DB as a routing endpoint (preview)
You can send data directly to Azure Cosmos DB from IoT Hub. Cosmos DB is a fully managed hyperscale multi-model database service. It provides low latency and high availability, making it a great choice for scenarios like connected solutions and manufacturing that require extensive downstream data analysis.
-IoT Hub supports writing to Cosmos DB in JSON (if specified in the message content-type) or as Base64 encoded binary. You can set up a Cosmos DB endpoint for message routing by performing the following steps in the Azure portal:
-
-1. Navigate to your provisioned IoT hub.
-1. In the resource menu, select **Message routing** from **Hub settings**.
-1. Select the **Custom endpoints** tab in the working pane, then select **Add** and choose **Cosmos DB (preview)** from the dropdown list.
+IoT Hub supports writing to Cosmos DB in JSON (if specified in the message content-type) or as Base64 encoded binary.
- The following image shows the endpoint addition options in the working pane of Azure portal:
+To effectively support high-scale scenarios, you can enable [synthetic partition keys](../cosmos-db/nosql/synthetic-partition-keys.md) for the Cosmos DB endpoint. As Cosmos DB is a hyperscale data store, all data/documents written to it must contain a field that represents a logical partition. Each logical partition has a maximum size of 20 GB. You can specify the partition key property name in **Partition key name**. The partition key property name is defined at the container level and can't be changed once it has been set.
- :::image type="content" alt-text="Screenshot that shows how to add a Cosmos DB endpoint." source="media/iot-hub-devguide-messages-d2c/add-cosmos-db-endpoint.png":::
-
-1. Type a name for your Cosmos DB endpoint in **Endpoint name**.
-1. In **Cosmos DB account**, choose an existing Cosmos DB account from a list of Cosmos DB accounts available for selection, then select an existing database and collection in **Database** and **Collection**, respectively.
-1. In **Generate a synthetic partition key for messages**, select **Enable** if needed.
+You can configure the synthetic partition key value by specifying a template in **Partition key template** based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its maximum limit of 20 GB within a month. In that case, you can define a synthetic partition key as a combination of the device ID and the month. The generated partition key value is automatically added to the partition key property for each new Cosmos DB record, ensuring logical partitions are created each month for each device.
- To effectively support high-scale scenarios, you can enable [synthetic partition keys](../cosmos-db/nosql/synthetic-partition-keys.md) for the Cosmos DB endpoint. As Cosmos DB is a hyperscale data store, all data/documents written to it must contain a field that represents a logical partition. Each logical partition has a maximum size of 20 GB. You can specify the partition key property name in **Partition key name**. The partition key property name is defined at the container level and can't be changed once it has been set.
+> [!CAUTION]
+> If you're using the system-assigned managed identity for authenticating to Cosmos DB, you must use Azure CLI or Azure PowerShell to assign the Cosmos DB Built-in Data Contributor role definition to the identity. Role assignment for Cosmos DB isn't currently supported from the Azure portal. For more information about the various roles, see [Configure role-based access for Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources](/cli/azure/cosmosdb/sql/role).
- You can configure the synthetic partition key value by specifying a template in **Partition key template** based on your estimated data volume. For example, in manufacturing scenarios, your logical partition might be expected to approach its maximum limit of 20 GB within a month. In that case, you can define a synthetic partition key as a combination of the device ID and the month. The generated partition key value is automatically added to the partition key property for each new Cosmos DB record, ensuring logical partitions are created each month for each device.
+## Routing queries
-1. In **Authentication type**, choose an authentication type for your Cosmos DB endpoint. You can choose any of the supported authentication types for accessing the database, based on your system setup.
+IoT Hub message routing provides a querying capability to filter the data before routing it to the endpoints. Each routing query you configure has the following properties:
- > [!CAUTION]
- > If you're using the system assigned managed identity for authenticating to Cosmos DB, you must use Azure CLI or Azure PowerShell to assign the Cosmos DB Built-in Data Contributor built-in role definition to the identity. Role assignment for Cosmos DB isn't currently supported from the Azure portal. For more information about the various roles, see [Configure role-based access for Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md). To understand assigning roles via CLI, see [Manage Azure Cosmos DB SQL role resources.](/cli/azure/cosmosdb/sql/role)
+| Property | Description |
+| - | -- |
+| **Name** | The unique name that identifies the query. |
+| **Source** | The origin of the data stream to be acted upon. For example, device telemetry. |
+| **Condition** | The query expression for the routing query that is run against the message application properties, system properties, message body, device twin tags, and device twin properties to determine whether it's a match for the endpoint. For more information about constructing a query, see [Message routing query syntax](iot-hub-devguide-routing-query-syntax.md). |
+| **Endpoint** | The name of the endpoint where IoT Hub sends messages that match the query. We recommend that you choose an endpoint in the same region as your IoT hub. |
-1. Select **Create** to complete the creation of your custom endpoint.
+A single message may match the condition on multiple routing queries, in which case IoT Hub delivers the message to the endpoint associated with each matched query. IoT Hub also automatically deduplicates message delivery, so if a message matches multiple queries that have the same destination, it's only written once to that destination.
-To learn more about using the Azure portal to create message routes and endpoints for your IoT hub, see [Message routing with IoT Hub ΓÇö Azure portal](how-to-routing-portal.md).
+For more information, see [IoT Hub message routing query syntax](./iot-hub-devguide-routing-query-syntax.md).
-## Reading data that has been routed
+## Read data that has been routed
-You can configure a route by following this [tutorial](tutorial-routing.md).
+Use the following articles to learn how to read messages from an endpoint.
-Use the following tutorials to learn how to read messages from an endpoint.
+* Read from a [built-in endpoint](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
-* Reading from a [built-in endpoint](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
+* Read from [Blob storage](../storage/blobs/storage-blob-event-quickstart.md)
-* Reading from [Blob storage](../storage/blobs/storage-blob-event-quickstart.md)
+* Read from [Event Hubs](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
-* Reading from [Event Hubs](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md)
+* Read from [Service Bus queues](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md)
-* Reading from [Service Bus Queues](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md)
-
-* Read from [Service Bus Topics](../service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md)
+* Read from [Service Bus topics](../service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md)
## Fallback route
-The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in endpoint (**messages/events**), which is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is enabled, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in endpoint, unless a route is created to that endpoint. If there are no routes to the built-in endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in endpoint. Also, if all existing routes are deleted, fallback route capability must be enabled to receive all data at the built-in endpoint.
+The fallback route sends all the messages that don't satisfy query conditions on any of the existing routes to the built-in endpoint (**messages/events**), which is compatible with [Event Hubs](../event-hubs/index.yml). If message routing is enabled, you can enable the fallback route capability. Once a route is created, data stops flowing to the built-in endpoint, unless a route is created to that endpoint. If there are no routes to the built-in endpoint and a fallback route is enabled, only messages that don't match any query conditions on routes will be sent to the built-in endpoint. Also, if all existing routes are deleted, the fallback route capability must be enabled to receive all data at the built-in endpoint.
You can enable or disable the fallback route in the Azure portal, from the **Message routing** blade. You can also use Azure Resource Manager for [FallbackRouteProperties](/rest/api/iothub/iothubresource/createorupdate#fallbackrouteproperties) to use a custom endpoint for the fallback route.
For example, if a route is created with the data source set to **Device Twin Cha
[IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario.
-## Limitations for device connection state events
+### Limitations for device connection state events
Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these operations equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md). IoT Hub doesn't report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic, 60-second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60-second window.
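As a hedged illustration of that requirement, the following C# sketch uses the Microsoft.Azure.Devices.Client device SDK over MQTT (the connection string variable name is an assumption). It opens the connection and then starts a cloud-to-device receive, which corresponds to the SUBSCRIBE that prompts IoT Hub to begin publishing connection state events for the device:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class ConnectionStateTrigger
{
    static async Task Main()
    {
        string connectionString = Environment.GetEnvironmentVariable("IOTHUB_DEVICE_CONNECTION_STRING");
        using var deviceClient = DeviceClient.CreateFromConnectionString(connectionString, TransportType.Mqtt);

        await deviceClient.OpenAsync();

        // Opening the connection alone doesn't produce connection state events.
        // Starting a cloud-to-device receive (a SUBSCRIBE over MQTT) does.
        Message received = await deviceClient.ReceiveAsync(TimeSpan.FromSeconds(5));
        if (received != null)
        {
            await deviceClient.CompleteAsync(received);
        }
    }
}
```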
-## Testing routes
+## Test routes
When you create a new route or edit an existing route, you should test the route query with a sample message. You can test individual routes or test all routes at once and no messages are routed to the endpoints during the test. Azure portal, Azure Resource Manager, Azure PowerShell, and Azure CLI can be used for testing. Outcomes help identify whether the sample message matched or didn't match the query, or if the test couldn't run because the sample message or query syntax are incorrect. To learn more, see [Test Route](/rest/api/iothub/iothubresource/testroute) and [Test All Routes](/rest/api/iothub/iothubresource/testallroutes).
When you route device-to-cloud telemetry messages using built-in endpoints, ther
In most cases, the average increase in latency is less than 500 milliseconds. However, the latency you experience can vary and can be higher depending on the tier of your IoT hub and your solution architecture. You can monitor the latency using the **Routing: message latency for messages/events** or **d2c.endpoints.latency.builtIn.events** IoT Hub metrics. Creating or deleting any route after the first one doesn't impact the end-to-end latency.
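If you prefer to pull that metric programmatically, a rough sketch using the Azure.Monitor.Query and Azure.Identity packages might look like the following; the resource ID placeholders are assumptions, and only the metric name comes from this article:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

class RoutingLatencyCheck
{
    static async Task Main()
    {
        // Placeholder resource ID for your IoT hub.
        string iotHubResourceId =
            "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Devices/IotHubs/<hub-name>";

        var client = new MetricsQueryClient(new DefaultAzureCredential());
        MetricsQueryResult result = (await client.QueryResourceAsync(
            iotHubResourceId,
            new[] { "d2c.endpoints.latency.builtIn.events" })).Value;

        // Print the average routing latency for each returned time slice.
        foreach (MetricResult metric in result.Metrics)
        {
            foreach (MetricTimeSeriesElement series in metric.TimeSeries)
            {
                foreach (MetricValue point in series.Values)
                {
                    Console.WriteLine($"{point.TimeStamp:u}  average latency: {point.Average}");
                }
            }
        }
    }
}
```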
-## Monitoring and troubleshooting
+## Monitor and troubleshoot
IoT Hub provides several metrics related to routing and endpoints to give you an overview of the health of your hub and messages sent. For a list of all of the IoT Hub metrics broken out by functional category, see the [Metrics](monitor-iot-hub-reference.md#metrics) section of [Monitoring Azure IoT Hub data reference](monitor-iot-hub-reference.md). You can track errors that occur during evaluation of a routing query and endpoint health as perceived by IoT Hub with the [**routes** category in IoT Hub resource logs](monitor-iot-hub-reference.md#routes). To learn more about using metrics and resource logs with IoT Hub, see [Monitoring Azure IoT Hub](monitor-iot-hub.md).
Use the [troubleshooting guide for routing](troubleshoot-message-routing.md) for
## Next steps
-* To learn how to create message routes, see [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md).
-
-* [How to send device-to-cloud messages](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs)
+To learn how to create message routes, see:
-* For information about the SDKs you can use to send device-to-cloud messages, see [Azure IoT SDKs](iot-hub-devguide-sdks.md).
+* [Create and delete routes and endpoints by using the Azure portal](./how-to-routing-portal.md)
+* [Create and delete routes and endpoints by using the Azure CLI](./how-to-routing-azure-cli.md)
iot-hub Iot Hub Devguide Messages Read Builtin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md
You can use the Event Hubs SDKs to read from the built-in endpoint in environmen
For more information, see the [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial.
-* If you want to route your device-to-cloud messages to custom endpoints, see [Use message routes and custom endpoints for device-to-cloud messages](iot-hub-devguide-messages-read-custom.md).
+* If you want to route your device-to-cloud messages to custom endpoints, see [Use message routes and custom endpoints for device-to-cloud messages](iot-hub-devguide-messages-d2c.md).
iot-hub Iot Hub Devguide Messages Read Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-custom.md
- Title: Understand Azure IoT Hub custom endpoints | Microsoft Docs
-description: This article describes using routing queries to route device-to-cloud messages to custom endpoints.
------ Previously updated : 04/09/2018--
-# Use message routes and custom endpoints for device-to-cloud messages
--
-IoT Hub [Message Routing](iot-hub-devguide-routing-query-syntax.md) enables users to route device-to-cloud messages to service-facing endpoints. Routing also provides a querying capability to filter the data before routing it to the endpoints. Each routing query you configure has the following properties:
-
-| Property | Description |
-| - | -- |
-| **Name** | The unique name that identifies the query. |
-| **Source** | The origin of the data stream to be acted upon. For example, device telemetry. |
-| **Condition** | The query expression for the routing query that is run against the message application properties, system properties, message body, device twin tags, and device twin properties to determine if it is a match for the endpoint. For more information about constructing a query, see the see [message routing query syntax](iot-hub-devguide-routing-query-syntax.md) |
-| **Endpoint** | The name of the endpoint where IoT Hub sends messages that match the query. We recommend that you choose an endpoint in the same region as your IoT hub. |
-
-A single message may match the condition on multiple routing queries, in which case IoT Hub delivers the message to the endpoint associated with each matched query. IoT Hub also automatically deduplicates message delivery, so if a message matches multiple queries that have the same destination, it is only written once to that destination.
-
-## Endpoints and routing
-
-An IoT hub has a default [built-in endpoint](iot-hub-devguide-messages-read-builtin.md). You can create custom endpoints to route messages to by linking other services in the subscriptions you own to the hub. IoT Hub currently supports Azure Storage containers, Event Hubs, Service Bus queues, and Service Bus topics as custom endpoints.
-
-When you use routing and custom endpoints, messages are only delivered to the built-in endpoint if they don't match any query. To deliver messages to the built-in endpoint as well as to a custom endpoint, add a route that sends messages to the built-in **events** endpoint.
-
-> [!NOTE]
-> * IoT Hub only supports writing data to Azure Storage containers as blobs.
-> * Service Bus queues and topics with **Sessions** or **Duplicate Detection** enabled are not supported as custom endpoints.
-> * In the Azure portal, you can create custom routing endpoints only to Azure resources that are in the same subscription as your IoT hub. You can create custom endpoints for resources in other subscriptions by using either the [Azure CLI](./tutorial-routing.md) or Azure Resource Manager.
-
-For more information about creating custom endpoints in IoT Hub, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md).
-
-For more information about reading from custom endpoints, see:
-
-* Reading from [Storage containers](../storage/blobs/storage-blobs-introduction.md).
-* Reading from [Event Hubs](../event-hubs/event-hubs-dotnet-standard-getstarted-send.md).
-
-* Reading from [Service Bus queues](../service-bus-messaging/service-bus-dotnet-get-started-with-queues.md).
-
-* Reading from [Service Bus topics](../service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md).
-* Reading from [Cosmos DB](../cosmos-db/nosql/query/getting-started.md)
-## Next steps
-
-* For more information about IoT Hub endpoints, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md).
-
-* For more information about the query language you use to define routing queries, see [Message Routing query syntax](iot-hub-devguide-routing-query-syntax.md).
-
-* The [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial shows you how to use routing queries and custom endpoints.
iot-hub Iot Hub Devguide Routing Query Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-routing-query-syntax.md
Title: Query on Azure IoT Hub message routing | Microsoft Docs
+ Title: Query on Azure IoT Hub message routing
description: Learn about the IoT Hub message routing query language that you can use to apply rich queries to messages to receive the data that matters to you. Previously updated : 12/27/2022 Last updated : 02/22/2023
Message routing enables users to route different data types, including device te
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
-Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. If the message body isn't in JavaScript Object Notation (JSON) format, message routing can still route the message, but queries can't be applied to the message body. Queries are described as Boolean expressions where, if true, the query succeeds and routes all the incoming data; otherwise, the query fails and the incoming data isn't routed. If the expression evaluates to a null or undefined value, it's treated as a Boolean false value and an error is generated in IoT Hub [routes resource logs](monitor-iot-hub-reference.md#routes). The query syntax must be correct for the route to be saved and evaluated.
+Message routing allows you to query on the message properties and message body as well as device twin tags and device twin properties. If the message body isn't in JSON format, message routing can still route the message, but queries can't be applied to the message body. Queries are described as Boolean expressions where, if true, the query succeeds and routes all the incoming data; otherwise, the query fails and the incoming data isn't routed. If the expression evaluates to a null or undefined value, it's treated as a Boolean false value, and generates an error in the IoT Hub [routes resource logs](monitor-iot-hub-reference.md#routes). The query syntax must be correct for the route to be saved and evaluated.
-## Message routing query based on message properties
+## Query based on message properties
IoT Hub defines a [common format](iot-hub-devguide-messages-construct.md) for all device-to-cloud messaging for interoperability across protocols. IoT Hub assumes the following JSON representation of the message. System properties are added for all users and identify content of the message. Users can selectively add application properties to the message. We recommend using unique property names because IoT Hub device-to-cloud messaging isn't case-sensitive. For example, if you have multiple properties with the same name, IoT Hub will only send one of the properties.
IoT Hub defines a [common format](iot-hub-devguide-messages-construct.md) for al
### System properties
-System properties help identify the contents and source of the messages, some of which are described in the following table.
+System properties help identify the contents and source of the messages, some of which are described in the following table:
| Property | Type | Description | | -- | - | -- | | contentType | string | The user specifies the content type of the message. To allow querying on the message body, this value should be set to `application/JSON`. |
-| contentEncoding | string | The user specifies the encoding type of the message. Allowed values are `UTF-8`, `UTF-16`, and `UTF-32` if the contentType property is set to `application/JSON`. |
+| contentEncoding | string | The user specifies the encoding type of the message. If the contentType property is set to `application/JSON`, then allowed values are `UTF-8`, `UTF-16`, and `UTF-32`. |
| iothub-connection-device-id | string | This value is set by IoT Hub and identifies the ID of the device. To query, use `$connectionDeviceId`. | | iothub-connection-module-id | string | This value is set by IoT Hub and identifies the ID of the edge module. To query, use `$connectionModuleId`. | | iothub-enqueuedtime | string | This value is set by IoT Hub and represents the actual time of enqueuing the message in UTC. To query, use `$enqueuedTime`. | | dt-dataschema | string | This value is set by IoT Hub on device-to-cloud messages. It contains the device model ID set in the device connection. To query, use `$dt-dataschema`. | | dt-subject | string | The name of the component that is sending the device-to-cloud messages. To query, use `$dt-subject`. |
-The previous table describes only some of the system properties available in a message. For more information about the other available system properties, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
+For more information about the other available system properties, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
### Application properties Application properties are user-defined strings that can be added to the message. These fields are optional.
-### Query expressions
+### Message properties query expressions
A query on message system properties must be prefixed with the `$` symbol. Queries on application properties are accessed with their name and shouldn't be prefixed with the `$`symbol. If an application property name begins with `$`, then IoT Hub first searches for it in the system properties, and if it's not found will then search for it in the application properties. The following examples show how to query on system properties and application properties.
To combine these queries, you can use Boolean expressions and functions:
$contentEncoding = 'UTF-8' AND processingPath = 'hot' ```
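For a combined query like the one above to match, the device has to stamp the corresponding values on each message it sends. The following is a minimal, hedged sketch using the Microsoft.Azure.Devices.Client device SDK; the helper name, payload, and property values are illustrative:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

static class HotPathTelemetry
{
    // Assumes an already-created and connected DeviceClient.
    public static async Task SendHotPathMessageAsync(DeviceClient deviceClient)
    {
        using var message = new Message(Encoding.UTF8.GetBytes("{\"temperature\": 21.5}"))
        {
            ContentType = "application/json",
            ContentEncoding = "UTF-8"   // matched by $contentEncoding = 'UTF-8'
        };

        // Application properties are queried without the $ prefix,
        // for example: processingPath = 'hot'.
        message.Properties.Add("processingPath", "hot");

        await deviceClient.SendEventAsync(message);
    }
}
```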
-A full list of supported operators and functions is provided in the [Expression and conditions](iot-hub-devguide-query-language.md#expressions-and-conditions) section of [IoT Hub query language for device and module twins, jobs, and message routing](iot-hub-devguide-query-language.md).
+A full list of supported operators and functions is provided in the [Expressions and conditions](iot-hub-devguide-query-language.md#expressions-and-conditions) section of [IoT Hub query language for device and module twins, jobs, and message routing](iot-hub-devguide-query-language.md).
-## Message routing query based on message body
+## Query based on message body
-To enable querying on a message body, the message should be in a JSON format and encoded in either UTF-8, UTF-16 or UTF-32. The `contentType` system property must be set to `application/JSON`. The `contentEncoding` system property must be set to one of the UTF encoding values supported by that system property. If these system properties aren't specified, IoT Hub won't evaluate the query expression on the message body.
+To enable querying on a message body, the message should be in a JSON format and encoded in either UTF-8, UTF-16, or UTF-32. The `contentType` system property must be `application/JSON`. The `contentEncoding` system property must be one of the UTF encoding values supported by that system property. If these system properties aren't specified, IoT Hub won't evaluate the query expression on the message body.
-The following example shows how to create a message with a properly formed and encoded JSON body:
+The following JavaScript example shows how to create a message with a properly formed and encoded JSON body:
```javascript var messageBody = JSON.stringify(Object.assign({}, {
deviceClient.sendEvent(message, (err, res) => {
}); ```
-> [!NOTE]
-> This shows how to handle the encoding of the message body in JavaScript. If you want to see a sample in C#, download the [Microsoft Azure IoT SDK for .NET](https://github.com/Azure/azure-iot-sdk-csharp/archive/main.zip), and then expand the compressed folder (azure-iot-sdk-csharp-main.zip). The Program.cs file for the *HubRoutingSample* device sample, in the *\iothub\device\samples\how to guides\HubRoutingSample* subfolder of the SDK, shows how to encode and submit messages to an IoT hub. This is the same sample used for testing the message routing, as explained in the [Message Routing tutorial](tutorial-routing.md). The Program.cs file also has a method named `ReadOneRowFromFile`, which reads one of the encoded files, decodes it, and writes it back out as ASCII so you can read it.
+For a message encoding sample in C#, see the [HubRoutingSample](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples/how%20to%20guides/HubRoutingSample) provided in the Microsoft Azure IoT SDK for .NET. This sample is the same one used in the [Message Routing tutorial](tutorial-routing.md). The Program.cs file also has a method named `ReadOneRowFromFile`, which reads one of the encoded files, decodes it, and writes it back out as ASCII so you can read it.
-### Query expressions
+### Message body query expressions
A query on a message body needs to be prefixed with `$body`. You can use a body reference, body array reference, or multiple body references in the query expression. Your query expression can also combine a body reference with a message system properties reference or a message application properties reference. For example, the following examples are all valid query expressions:
length($body.Weather.Location.State) = 2
$body.Weather.Temperature = 50 AND processingPath = 'hot' ```
-> [!NOTE]
-> To filter a twin notification payload based on what changed, run your query on the message body. For example, to filter when there is a desired property change on `sendFrequency` and the value is greater than 10:
->
-> ```sql
-> $body.properties.desired.telemetryConfig.sendFrequency > 10
-> ```
-> To filter messages that contains a property change, no matter the value of the property, you can use the `is_defined()` function (when the value is a primitive type):
->
-> ```sql
-> is_defined($body.properties.desired.telemetryConfig.sendFrequency)
-> ```
+You can run queries and functions only on properties in the body reference. You can't run queries or functions on the entire body reference. For example, the following query is *not* supported and returns `undefined`:
-> [!NOTE]
-> You can run queries and functions only on properties in the body reference. You can't run queries or functions on the entire body reference. For example, the following query is *not* supported and returns `undefined`:
->
-> ```sql
-> $body[0] = 'Feb'
-> ```
+```sql
+$body[0] = 'Feb'
+```
-## Message routing query based on device twin
+To filter a twin notification payload based on what changed, run your query on the message body. For example, to filter when there's a desired property change on `sendFrequency` and the value is greater than 10:
+
+```sql
+$body.properties.desired.telemetryConfig.sendFrequency > 10
+```
-Message routing enables you to query on [Device Twin](iot-hub-devguide-device-twins.md) tags and properties, which are JSON objects. Querying on module twin is also supported. The following sample illustrates a query on device twin tags and properties.
+To filter messages that contain a property change, no matter the value of the property, you can use the `is_defined()` function (when the value is a primitive type):
+
+```sql
+is_defined($body.properties.desired.telemetryConfig.sendFrequency)
+```
+
+## Query based on device or module twin
+
+Message routing enables you to query on [device twin](iot-hub-devguide-device-twins.md) or [module twin](iot-hub-devguide-module-twins.md) tags and properties, which are JSON objects. The following sample illustrates a device twin with tags and properties:
```JSON {
Message routing enables you to query on [Device Twin](iot-hub-devguide-device-tw
> [!NOTE] > Modules do not inherit twin tags from their corresponding devices. Twin queries for messages originating from device modules (for example, from IoT Edge modules) query against the module twin and not the corresponding device twin.
-### Query expressions
+### Twin query expressions
A query on a device twin or module twin needs to be prefixed with `$twin`. Your query expression can also combine a twin tag or property reference with a body reference, a message system properties reference, or a message application properties reference. We recommend using unique names in tags and properties because the query isn't case-sensitive. This recommendation applies to both device twins and module twins. We also recommend that you avoid using `twin`, `$twin`, `body`, or `$body` as property names. For example, the following examples are all valid query expressions:
$twin.tags.deploymentLocation.floor = 1
Routing queries don't support using whitespace or any of the following characters in property names, the message body path, or the device/module twin path: `()<>@,;:\"/?={}`. - ## Next steps * Learn about [message routing](iot-hub-devguide-messages-d2c.md).
iot-hub Iot Hub Devguide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide.md
The following articles can help you get started exploring IoT Hub features in mo
* [Read device-to-cloud messages from the built-in endpoint](iot-hub-devguide-messages-read-builtin.md).
- * [Use custom endpoints and routing rules for device-to-cloud messages](iot-hub-devguide-messages-read-custom.md).
- * [Send cloud-to-device messages from IoT Hub](iot-hub-devguide-messages-c2d.md). * [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub Iot Hub Event Grid Routing Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-event-grid-routing-comparison.md
Title: Compare Event Grid, routing for IoT Hub | Microsoft Docs
+ Title: Compare Event Grid, routing for IoT Hub
+ description: IoT Hub offers its own message routing service, but also integrates with Event Grid for event publishing. Compare the two features. Previously updated : 02/20/2019 Last updated : 02/22/2023
Azure IoT Hub provides the capability to stream data from your connected devices
[!INCLUDE [iot-hub-basic](../../includes/iot-hub-basic-partial.md)]
-**[IoT Hub message routing](iot-hub-devguide-messages-d2c.md)**: This IoT Hub feature enables users to route device-to-cloud messages to service endpoints like Azure Storage containers, Event Hubs, Service Bus queues, and Service Bus topics. Routing also provides a querying capability to filter the data before routing it to the endpoints. In addition to device telemetry data, you can also send [non-telemetry events](iot-hub-devguide-messages-d2c.md#non-telemetry-events) that can be used to trigger actions.
+**[IoT Hub message routing](iot-hub-devguide-messages-d2c.md)**: This IoT Hub feature enables users to route device-to-cloud messages to service endpoints like Azure Storage containers, Event Hubs, Service Bus queues, and Service Bus topics. Routing also provides a querying capability to filter the data before routing it to the endpoints. In addition to device telemetry data, you can also [route non-telemetry events](iot-hub-devguide-messages-d2c.md#non-telemetry-events) and use them to trigger actions.
**IoT Hub integration with Event Grid**: Azure Event Grid is a fully managed event routing service that uses a publish-subscribe model. IoT Hub and Event Grid work together to [integrate IoT Hub events into Azure and non-Azure services](iot-hub-event-grid.md), in near-real time. IoT Hub publishes both [device events](iot-hub-event-grid.md#event-types) and telemetry events.
While both message routing and Event Grid enable alert configuration, there are
| Feature | IoT Hub message routing | IoT Hub integration with Event Grid | | - | | - |
-| **Device messages and events** | Yes, message routing can be used for telemetry data, device twin changes, device lifecycle events, digital twin change events, and device connection state events. | Yes, Event Grid can be used for telemetry data and device events like device created/deleted/connected/disconnected. But Event grid cannot be used for device twin change events and digital twin change events. |
-| **Ordering** | Yes, ordering of events is maintained. | No, order of events is not guaranteed. |
-| **Filtering** | Rich filtering on message application properties, message system properties, message body, device twin tags, and device twin properties. Filtering isn't applied to digital twin change events. For examples, see [Message Routing Query Syntax](iot-hub-devguide-routing-query-syntax.md). | Filtering based on event type, subject type and attributes in each event. For examples, see [Understand filtering events in Event Grid Subscriptions](../event-grid/event-filtering.md). When subscribing to telemetry events, you can apply additional filters on the data to filter on message properties, message body and device twin in your IoT Hub, before publishing to Event Grid. See [how to filter events](../iot-hub/iot-hub-event-grid.md#filter-events). |
-| **Endpoints** | <ul><li>Event Hubs</li> <li>Azure Blob Storage</li> <li>Service Bus queue</li> <li>Service Bus topics</li></ul><br>Paid IoT Hub SKUs (S1, S2, and S3) are limited to 10 custom endpoints. 100 routes can be created per IoT Hub. | <ul><li>Azure Functions</li> <li>Azure Automation</li> <li>Event Hubs</li> <li>Logic Apps</li> <li>Storage Blob</li> <li>Custom Topics</li> <li>Queue Storage</li> <li>Power Automate</li> <li>Third-party services through WebHooks</li></ul><br>500 endpoints per IoT Hub are supported. For the most up-to-date list of endpoints, see [Event Grid event handlers](../event-grid/overview.md#event-handlers). |
-| **Cost** | There is no separate charge for message routing. Only ingress of telemetry into IoT Hub is charged. For example, if you have a message routed to three different endpoints, you are billed for only one message. | There is no charge from IoT Hub. Event Grid offers the first 100,000 operations per month for free, and then $0.60 per million operations afterwards. |
+| **Device messages and events** | Yes, message routing supports telemetry data, device twin changes, device lifecycle events, digital twin change events, and device connection state events. | Yes, Event Grid supports telemetry data and device events like device created/deleted/connected/disconnected. But Event Grid doesn't support device twin change events and digital twin change events. |
+| **Ordering** | Yes, message routing maintains the order of events. | No, Event Grid doesn't guarantee the order of events. |
+| **Filtering** | Rich filtering on message application properties, message system properties, message body, device twin tags, and device twin properties. Filtering isn't applied to digital twin change events. For examples, see [Message Routing Query Syntax](iot-hub-devguide-routing-query-syntax.md). | Filtering based on event type, subject type and attributes in each event. For examples, see [Understand filtering events in Event Grid Subscriptions](../event-grid/event-filtering.md). When subscribing to telemetry events, you can apply filters on the data to filter on message properties, message body and device twin in your IoT Hub, before publishing to Event Grid. See [how to filter events](../iot-hub/iot-hub-event-grid.md#filter-events). |
+| **Endpoints** | <ul><li>Event Hubs</li> <li>Azure Blob Storage</li> <li>Service Bus queue</li> <li>Service Bus topics</li><li>Cosmos DB (preview)</li></ul><br>Paid IoT Hub SKUs (S1, S2, and S3) can have 10 custom endpoints and 100 routes per IoT Hub. | <ul><li>Azure Functions</li> <li>Azure Automation</li> <li>Event Hubs</li> <li>Logic Apps</li> <li>Storage Blob</li> <li>Custom Topics</li> <li>Queue Storage</li> <li>Power Automate</li> <li>Third-party services through WebHooks</li></ul><br>Event Grid supports 500 endpoints per IoT Hub. For the most up-to-date list of endpoints, see [Event Grid event handlers](../event-grid/overview.md#event-handlers). |
+| **Cost** | There is no separate charge for message routing. Only ingress of telemetry into IoT Hub is charged. For example, if you have a message routed to three different endpoints, you're billed for only one message. | There is no charge from IoT Hub. Event Grid offers the first 100,000 operations per month for free, and then $0.60 per million operations afterwards. |
## Similarities
IoT Hub message routing and Event Grid have similarities too, some of which are
| Feature | IoT Hub message routing | IoT Hub integration with Event Grid | | - | | - | | **Maximum message size** | 256 KB, device-to-cloud | 256 KB, device-to-cloud |
-| **Reliability** | High: Delivers each message to the endpoint at least once for each route. Expires all messages that are not delivered within one hour. | High: Delivers each message to the webhook at least once for each subscription. Expires all events that are not delivered within 24 hours. |
+| **Reliability** | High: Delivers each message to the endpoint at least once for each route. Expires all messages that aren't delivered within one hour. | High: Delivers each message to the webhook at least once for each subscription. Expires all events that aren't delivered within 24 hours. |
| **Scalability** | High: Optimized to support millions of simultaneously connected devices sending billions of messages. | High: Capable of routing 10,000,000 events per second per region. | | **Latency** | Low: Near-real time. | Low: Near-real time. |
-| **Send to multiple endpoints** | Yes, send a single message to multiple endpoints. | Yes, send a single message to multiple endpoints.
+| **Send to multiple endpoints** | Yes, send a single message to multiple endpoints. | Yes, send a single message to multiple endpoints. |
| **Security** | IoT Hub provides per-device identity and revocable access control. For more information, see [IoT Hub access control](iot-hub-devguide-security.md). | Event Grid provides validation at three points: event subscriptions, event publishing, and webhook event delivery. For more information, see [Event Grid security and authentication](../event-grid/security-authentication.md). | ## How to choose
IoT Hub message routing and the IoT Hub integration with Event Grid perform diff
IoT Hub message routing maintains the order in which messages are sent, so that they arrive in the same order.
- Event Grid does not guarantee that endpoints will receive events in the same order that they occurred. For those cases in which absolute order of messages is significant and/or in which a consumer needs a trustworthy unique identifier for messages, we recommend using message routing.
+ Event Grid does not guarantee that endpoints receive events in the same order that they occurred. For cases in which the absolute order of messages is significant, or in which a consumer needs a trustworthy unique identifier for messages, we recommend using message routing.
## Next steps
-* Learn more about [IoT Hub Message Routing](iot-hub-devguide-messages-d2c.md) and the [IoT Hub endpoints](iot-hub-devguide-endpoints.md).
-* Learn more about [Azure Event Grid](../event-grid/overview.md).
-* To learn how to create Message Routes, see the [Process IoT Hub device-to-cloud messages using routes](../iot-hub/tutorial-routing.md) tutorial.
+* Learn more about [IoT Hub message routing](../iot-hub/iot-hub-devguide-messages-d2c.md) and the [IoT Hub endpoints](../iot-hub/iot-hub-devguide-endpoints.md).
* Try out the Event Grid integration by [Sending email notifications about Azure IoT Hub events using Logic Apps](../event-grid/publish-iot-hub-events-to-logic-apps.md).
iot-hub Iot Hub How To Clone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-clone.md
This is the general method we recommend for moving an IoT hub from one region to
## How to handle message routing
-If your hub uses [custom routing](iot-hub-devguide-messages-read-custom.md), exporting the template for the hub includes the routing configuration, but it doesn't include the resources themselves. You must choose whether to move the routing resources to the new location or to leave them in place and continue to use them "as is".
+If your hub uses [message routing](iot-hub-devguide-messages-d2c.md), exporting the template for the hub includes the routing configuration, but it doesn't include the resources themselves. You must choose whether to move the routing resources to the new location or to leave them in place and continue to use them "as is".
For example, say you have a hub in West US that is routing messages to a storage account (also in West US), and you want to move the hub to East US. You can move the hub and have it still route messages to the storage account in West US, or you can move the hub and also move the storage account. There may be a small performance hit from routing messages to endpoint resources in a different region.
This section provides specific instructions for migrating the hub.
### Edit the template
-You have to make some changes before you can use the template to create the new hub in the new region. Use [VS Code](https://code.visualstudio.com) or a text editor to edit the template.
+You have to make some changes before you can use the template to create the new hub in the new region. Use [Visual Studio Code](https://code.visualstudio.com) or a text editor to edit the template.
#### Edit the hub name and location
The application targets .NET Core, so you can run it on either Windows or Linux.
1. To run the application, specify three connection strings and five options. You pass this data in as command-line arguments or use environment variables, or use a combination of the two. We're going to pass the options in as command line arguments, and the connection strings as environment variables.
- The reason for this is because the connection strings are long and ungainly, and unlikely to change, but you might want to change the options and run the application more than once. To change the value of an environment variable, you have to close the command window and Visual Studio or VS Code, whichever you are using.
+ This is because the connection strings are long and ungainly, and unlikely to change, but you might want to change the options and run the application more than once. To change the value of an environment variable, you have to close the command window and Visual Studio or Visual Studio Code, whichever you're using.
### Options
iot-hub Iot Hub Monitoring Notifications With Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md
Create a Service Bus namespace and queue. Later in this topic, you create a rout
## Add a custom endpoint and routing rule to your IoT hub
-Add a custom endpoint for the Service Bus queue to your IoT hub and create a message routing rule to direct messages that contain a temperature alert to that endpoint, where they will be picked up by your logic app. The routing rule uses a routing query, `temperatureAlert = "true"`, to forward messages based on the value of the `temperatureAlert` application property set by the client code running on the device. To learn more, see [Message routing query based on message properties](./iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-message-properties).
+Add a custom endpoint for the Service Bus queue to your IoT hub and create a message routing rule to direct messages that contain a temperature alert to that endpoint, where they will be picked up by your logic app. The routing rule uses a routing query, `temperatureAlert = "true"`, to forward messages based on the value of the `temperatureAlert` application property set by the client code running on the device. To learn more, see [Message routing query based on message properties](./iot-hub-devguide-routing-query-syntax.md#query-based-on-message-properties).
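For reference, the following is a minimal sketch (not the tutorial's exact client code) of how a device might set that application property using the Azure IoT device SDK for Python (`azure-iot-device`). The connection string and payload values are placeholders.

```python
import json

from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string; in practice this comes from the registered device identity.
client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")

message = Message(json.dumps({"temperature": 31.2}))
# The routing query `temperatureAlert = "true"` matches on this application property.
message.custom_properties["temperatureAlert"] = "true"

client.send_message(message)
client.shutdown()
```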
### Add a custom endpoint
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
If a device can't use the device SDKs, it can still connect to the public device
`SharedAccessSignature sig={signature-string}&se={expiry}&sr={URL-encoded-resourceURI}` > [!NOTE]
- > If you use X.509 certificate authentication, SAS token passwords are not required. For more information, see [Set up X.509 security in your Azure IoT Hub](./tutorial-x509-scripts.md) and follow code instructions in the [TLS/SSL configuration section](#tlsssl-configuration).
+ > If you use X.509 certificate authentication, SAS token passwords are not required. For more information, see [Set up X.509 security in your Azure IoT Hub](./tutorial-x509-prove-possession.md) and follow code instructions in the [TLS/SSL configuration section](#tlsssl-configuration).
For more information about how to generate SAS tokens, see the [Use SAS tokens as a device](iot-hub-dev-guide-sas.md#use-sas-tokens-as-a-device) section of [Control access to IoT Hub using Shared Access Signatures](iot-hub-dev-guide-sas.md).
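As a quick orientation, the token format shown above can be produced with a short helper. The following is a minimal sketch (not the SDK's implementation) that signs the URL-encoded resource URI and expiry with a device's Base64-encoded symmetric key using HMAC-SHA256; `resource_uri` and `device_key` are placeholder values you supply.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri: str, device_key: str, ttl_seconds: int = 3600) -> str:
    """Build a SharedAccessSignature string for a device (illustrative sketch)."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # The string to sign is the URL-encoded resource URI and the expiry, separated by a newline.
    string_to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key)
    signature = base64.b64encode(hmac.new(key, string_to_sign, hashlib.sha256).digest()).decode("utf-8")
    return f"SharedAccessSignature sr={encoded_uri}&sig={urllib.parse.quote_plus(signature)}&se={expiry}"


# Example with placeholder values:
# token = generate_sas_token("myhub.azure-devices.net/devices/mydevice", "<base64-device-key>")
```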
iot-hub Iot Hub Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-scaling.md
The standard tier of IoT Hub enables all features, and is required for any IoT s
| - | - | - | | [Device-to-cloud telemetry](iot-hub-devguide-messaging.md) | Yes | Yes | | [Per-device identity](iot-hub-devguide-identity-registry.md) | Yes | Yes |
-| [Message routing](iot-hub-devguide-messages-read-custom.md), [message enrichments](iot-hub-message-enrichments-overview.md), and [Event Grid integration](iot-hub-event-grid.md) | Yes | Yes |
+| [Message routing](iot-hub-devguide-messages-d2c.md), [message enrichments](iot-hub-message-enrichments-overview.md), and [Event Grid integration](iot-hub-event-grid.md) | Yes | Yes |
| [HTTP, AMQP, and MQTT protocols](iot-hub-devguide-protocols.md) | Yes | Yes | | [Device Provisioning Service](../iot-dps/about-iot-dps.md) | Yes | Yes | | [Monitoring and diagnostics](monitor-iot-hub.md) | Yes | Yes |
iot-hub Iot Hub X509 Certificate Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509-certificate-concepts.md
To learn more about the fields that make up an X.509 certificate, see [X.509 cer
If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles:
-* [Tutorial: Use Microsoft-supplied scripts to create test certificates](tutorial-x509-scripts.md)
* [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md)
-* [Tutorial: Use OpenSSL to create self-signed certificates](tutorial-x509-self-sign.md)
+* If you want to use self-signed certificates for testing, see the [Create a self-signed certificate](reference-x509-certificates.md#create-a-self-signed-certificate) section of [X.509 certificates](reference-x509-certificates.md).
+
+ >[!IMPORTANT]
+ >We recommend that you use certificates signed by an issuing Certificate Authority (CA), even for testing purposes. Never use self-signed certificates in production.
If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Upload and verify a CA certificate to IoT Hub](tutorial-x509-prove-possession.md).
iot-hub Iot Hub X509ca Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509ca-overview.md
The upload process entails uploading a file that contains your certificate. Thi
The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. It does so by generating a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you will possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step by uploading a file containing the results.
-Learn how to [register your CA certificate](./tutorial-x509-scripts.md)
+Learn how to [register your CA certificate](./tutorial-x509-prove-possession.md)
## Create a device on IoT Hub To prevent device impersonation, IoT Hub requires that you let it know what devices to expect. You do this by creating a device entry in the IoT hub's device registry. This process is automated when using [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md).
-Learn how to [manually create a device in IoT Hub](./tutorial-x509-scripts.md).
+Learn how to [manually create a device in IoT Hub](./iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub).
## Authenticate devices signed with X.509 CA certificates
With your X.509 CA certificate registered and devices signed into a certificate
A successful device connection to IoT Hub completes the authentication process and is also indicative of a proper setup. Every time a device connects, IoT Hub renegotiates the TLS session and verifies the device's X.509 certificate.
-Learn how to [complete this device connection step](./tutorial-x509-scripts.md).
+Learn how to [complete this device connection step](./tutorial-x509-prove-possession.md).
## Next Steps
iot-hub Reference X509 Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/reference-x509-certificates.md
# X.509 certificates
-X.509 certificates are digital documents that represent a user, computer, service, or device. They're issued by a certification authority (CA), subordinate CA, or registration authority and contain the public key of the certificate subject. They don't contain the subject's private key, which must be stored securely. Public key certificates are documented by [RFC 5280](https://tools.ietf.org/html/rfc5280). They're digitally signed and, in general, contain the following information:
+X.509 certificates are digital documents that represent a user, computer, service, or device. A certificate authority (CA), subordinate CA, or registration authority issues X.509 certificates. The certificates contain the public key of the certificate subject. They don't contain the subject's private key, which must be stored securely. [RFC 5280](https://tools.ietf.org/html/rfc5280) documents public key certificates, including their fields and extensions. Public key certificates are digitally signed and typically contain the following information:
* Information about the certificate subject * The public key that corresponds to the subject's private key
The following table describes Version 1 certificate fields for X.509 certificate
| [Serial Number](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.2) | An integer that represents the unique number for each certificate issued by a certificate authority (CA). | | [Signature](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.3) | The identifier for the cryptographic algorithm used by the CA to sign the certificate. The value includes both the identifier of the algorithm and any optional parameters used by that algorithm, if applicable. | | [Issuer](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.4) | The distinguished name (DN) of the certificate's issuing CA. |
-| [Validity](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.5) | The inclusive time period for which the certificate is considered valid. |
+| [Validity](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.5) | The inclusive time period for which the certificate is valid. |
| [Subject](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.6) | The distinguished name (DN) of the certificate subject. | | [Subject Public Key Info](https://www.rfc-editor.org/rfc/rfc5280#section-4.1.2.7) | The public key owned by the certificate subject. |
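To see these fields on a concrete certificate, you can inspect it programmatically. The following sketch assumes the third-party `cryptography` Python package and a PEM certificate file named `device.crt` (a placeholder name).

```python
from cryptography import x509

with open("device.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Version:        ", cert.version)                                # Version
print("Serial number:  ", cert.serial_number)                          # Serial Number
print("Signature alg.: ", cert.signature_algorithm_oid)                # Signature
print("Issuer:         ", cert.issuer.rfc4514_string())                # Issuer
print("Valid from/to:  ", cert.not_valid_before, cert.not_valid_after) # Validity
print("Subject:        ", cert.subject.rfc4514_string())               # Subject
print("Public key:     ", cert.public_key())                           # Subject Public Key Info
```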
Certificate extensions, introduced with Version 3, provide methods for associati
### Standard extensions
-The extensions included in this section are defined as part of the X.509 standard, for use in the Internet public key infrastructure (PKI).
+The X.509 standard defines the extensions included in this section, for use in the Internet public key infrastructure (PKI).
| Name | Description | | | |
Certificates can be saved in various formats. Azure IoT Hub authentication typic
| Binary certificate | A raw form binary certificate using Distinguished Encoding Rules (DER) ASN.1 encoding. | | ASCII PEM format | A PEM certificate (.pem) file contains a Base64-encoded certificate beginning with `-----BEGIN CERTIFICATE-----` and ending with `-----END CERTIFICATE-----`. One of the most common formats for X.509 certificates, PEM format is required by IoT Hub when uploading certain certificates, such as device certificates. | | ASCII PEM key | Contains a Base64-encoded DER key, optionally with more metadata about the algorithm used for password protection. |
-| PKCS #7 certificate | A format designed for the transport of signed or encrypted data. It can include the entire certificate chain. It's defined by [RFC 2315](https://tools.ietf.org/html/rfc2315). |
-| PKCS #8 key | The format for a private key store. It's defined by [RFC 5208](https://tools.ietf.org/html/rfc5208). |
-| PKCS #12 key and certificate | A complex format that can store and protect a key and the entire certificate chain. It's commonly used with a .p12 or .pfx extension. PKCS #12 is synonymous with the PFX format. It's defined by [RFC 7292](https://tools.ietf.org/html/rfc7292). |
+| PKCS #7 certificate | A format designed for the transport of signed or encrypted data. It can include the entire certificate chain. [RFC 2315](https://tools.ietf.org/html/rfc2315) defines this format. |
+| PKCS #8 key | The format for a private key store. [RFC 5208](https://tools.ietf.org/html/rfc5208) defines this format. |
+| PKCS #12 key and certificate | A complex format that can store and protect a key and the entire certificate chain. It's commonly used with a .p12 or .pfx extension. PKCS #12 is synonymous with the PFX format. [RFC 7292](https://tools.ietf.org/html/rfc7292) defines this format. |
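As an illustration of moving between these formats, the following sketch bundles a PEM private key and certificate into a PKCS #12 (.pfx) file. It assumes the third-party `cryptography` Python package; the file names and password are placeholders.

```python
from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    BestAvailableEncryption,
    load_pem_private_key,
    pkcs12,
)

# Placeholder file names and password for illustration only.
with open("device.key", "rb") as f:
    key = load_pem_private_key(f.read(), password=None)
with open("device.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

pfx_bytes = pkcs12.serialize_key_and_certificates(
    name=b"device",
    key=key,
    cert=cert,
    cas=None,
    encryption_algorithm=BestAvailableEncryption(b"<password>"),
)

with open("device.pfx", "wb") as f:
    f.write(pfx_bytes)
```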
+
+## Self-signed certificates
+
+You can authenticate a device to your IoT hub for testing purposes by using two self-signed certificates. This type of authentication is sometimes called *thumbprint authentication* because the certificates are identified by calculated hash values called *fingerprints* or *thumbprints*. IoT Hub uses these hash values to authenticate your devices.
+
+>[!IMPORTANT]
+>We recommend that you use certificates signed by an issuing Certificate Authority (CA), even for testing purposes. Never use self-signed certificates in production.
+
+### Create a self-signed certificate
+
+You can use [OpenSSL](https://www.openssl.org/) to create self-signed certificates. The following steps show you how to run OpenSSL commands in a bash shell to create a self-signed certificate and retrieve a certificate fingerprint that can be used for authenticating your device in IoT Hub.
+
+>[!NOTE]
+>If you want to use self-signed certificates for testing, you must create two certificates for each device.
+
+1. Run the following command to generate a private key and create a PEM-encoded private key (.key) file, replacing the following placeholder with its corresponding value. The command generates an RSA private key with a 2048-bit key length.
+
+ *{KeyFile}*. The name of your private key file.
+
+ ```bash
+ openssl genpkey -out {KeyFile} -algorithm RSA -pkeyopt rsa_keygen_bits:2048
+ ```
+
+1. Run the following command to generate a PKCS #10 certificate signing request (CSR) and create a CSR (.csr) file, replacing the following placeholders with their corresponding values. Make sure that you specify the device ID of the IoT device for your self-signed certificate when prompted.
+
+ *{KeyFile}*. The name of your private key file.
+
+ *{CsrFile}*. The name of your CSR file.
+
+ *{DeviceID}*. The name of your IoT device.
+
+ ```bash
+ openssl req -new -key {KeyFile} -out {CsrFile}
+
+ Country Name (2 letter code) [XX]:.
+ State or Province Name (full name) []:.
+ Locality Name (eg, city) [Default City]:.
+ Organization Name (eg, company) [Default Company Ltd]:.
+ Organizational Unit Name (eg, section) []:.
+ Common Name (eg, your name or your server hostname) []:{DeviceID}
+ Email Address []:.
+
+ Please enter the following 'extra' attributes
+ to be sent with your certificate request
+ A challenge password []:.
+ An optional company name []:.
+ ```
+
+1. Run the following command to examine and verify your CSR, replacing the following placeholders with their corresponding values.
+
+ *{CsrFile}*. The name of your CSR file.
+
+ ```bash
+ openssl req -text -in {CsrFile} -verify -noout
+ ```
+
+1. Run the following command to generate a self-signed certificate and create a PEM-encoded certificate (.crt) file, replacing the following placeholders with their corresponding values. The command converts and signs your CSR with your private key, generating a self-signed certificate that expires in 365 days.
+
+ *{KeyFile}*. The name of your private key file.
+
+ *{CsrFile}*. The name of your CSR file.
+
+ *{CrtFile}*. The name of your certificate file.
+
+ ```bash
+ openssl x509 -req -days 365 -in {CsrFile} -signkey {KeyFile} -out {CrtFile}
+ ```
+
+1. Run the following command to retrieve the fingerprint of the certificate, replacing the following placeholders with their corresponding values. The fingerprint of a certificate is a calculated hash value that is unique to that certificate. You need the fingerprint to configure your IoT device in IoT Hub for testing.
+
+ *{CrtFile}*. The name of your certificate file.
+
+ ```bash
+ openssl x509 -in {CrtFile} -noout -fingerprint
+ ```
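If OpenSSL isn't available where you're checking the certificate, you can also compute the fingerprint from the PEM file directly. The following is a minimal sketch using only the Python standard library, with `device.crt` as a placeholder file name.

```python
import hashlib
from ssl import PEM_cert_to_DER_cert

with open("device.crt", "r") as f:
    der_bytes = PEM_cert_to_DER_cert(f.read())

# Same SHA-1 digest that `openssl x509 -fingerprint` prints by default (there with colon delimiters).
fingerprint = hashlib.sha1(der_bytes).hexdigest().upper()
print(fingerprint)  # IoT Hub thumbprints are entered without colon delimiters
```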
## For more information
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
For device developers, if the volume of errors is a concern, switch to the C SDK
In general, the error message presented should explain how to fix the error. If for some reason you don't have access to the error message detail, make sure: * The SAS or other security token you use isn't expired.
-* For X.509 certificate authentication, the device certificate or the CA certificate associated with the device isn't expired. To learn how to register X.509 CA certificates with IoT Hub, see [Set up X.509 security in your Azure IoT hub](tutorial-x509-scripts.md).
+* For X.509 certificate authentication, the device certificate or the CA certificate associated with the device isn't expired. To learn how to register X.509 CA certificates with IoT Hub, see [Set up X.509 security in your Azure IoT hub](tutorial-x509-prove-possession.md).
* For X.509 certificate thumbprint authentication, the thumbprint of the device certificate is registered with IoT Hub. * The authorization credential is well formed for the protocol that you use. To learn more, see [Control access to IoT Hub](iot-hub-devguide-security.md). * The authorization rule used has the permission for the operation requested.
iot-hub Tutorial X509 Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-introduction.md
Before starting any of the articles in this tutorial, you should be familiar wit
## X.509 certificate scenario paths
-Using a self-signed certificate to authenticate a device provides a quick and easy way to test IoT Hub features. Self-signed certificates shouldn't be used in production as they provide less security than a certificate chain anchored with a CA-signed certificate backed by a PKI. To learn more about creating and using a self-signed X.509 certificate to authenticate with IoT Hub, see [Tutorial: Use OpenSSL to create self-signed certificates](tutorial-x509-self-sign.md).
- Using a CA-signed certificate chain backed by a PKI to authenticate a device provides the best level of security for your devices: - In production, we recommend you get your X.509 CA certificates from a public root certificate authority. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. If you already have an X.509 CA certificate, and you know how to create and sign device certificates into a certificate chain, follow the instructions in [Tutorial: Upload and verify a CA certificate to IoT Hub](/tutorial-x509-prove-possession.md) to upload your CA certificate to your IoT hub. Then, follow the instructions in [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to authenticate a device with your IoT hub. - For testing purposes, we recommend using OpenSSL to create an X.509 certificate chain. OpenSSL is used widely across the industry to work with X.509 certificates. You can follow the steps in [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md) to create a root CA and intermediate CA certificate with which to create and sign device certificates. The tutorial also shows how to upload and verify a CA certificate. Then, follow the instructions in [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to authenticate a device with your IoT hub. -- Several of the Azure IoT SDKs provide convenience scripts to help you create test certificate chains. For instructions about how to create certificate chains in PowerShell or Bash using scripts provided in the Azure IoT C SDK, see [Tutorial: Use Microsoft-supplied scripts to create test certificates](tutorial-x509-scripts.md). The tutorial also shows how to upload and verify a CA certificate. Then follow the instructions in [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to authenticate a device with your IoT hub.- ## Next steps To learn more about the fields that make up an X.509 certificate, see [X.509 certificates](reference-x509-certificates.md). If you're already familiar with X.509 certificates, and you want to generate test versions that you can use to authenticate to your IoT hub, see the following articles:
-* [Tutorial: Use Microsoft-supplied scripts to create test certificates](tutorial-x509-scripts.md)
* [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md)
-* [Tutorial: Use OpenSSL to create self-signed certificates](tutorial-x509-self-sign.md)
+* If you want to use self-signed certificates for testing, see the [Create a self-signed certificate](reference-x509-certificates.md#create-a-self-signed-certificate) section of [X.509 certificates](reference-x509-certificates.md).
+
+ >[!IMPORTANT]
+ >We recommend that you use certificates signed by an issuing Certificate Authority (CA), even for testing purposes. Never use self-signed certificates in production.
If you have a root CA certificate or subordinate CA certificate and you want to upload it to your IoT hub, you must verify that you own that certificate. For more information, see [Tutorial: Upload and verify a CA certificate to IoT Hub](tutorial-x509-prove-possession.md).
iot-hub Tutorial X509 Openssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-openssl.md
# Tutorial: Use OpenSSL to create test certificates
-Although you can purchase X.509 certificates from a trusted certification authority, creating your own test certificate hierarchy or using self-signed certificates is adequate for testing IoT hub device authentication. The following example uses [OpenSSL](https://www.openssl.org/) and the [OpenSSL Cookbook](https://www.feistyduck.com/library/openssl-cookbook/online/ch-openssl.html) to create a certification authority (CA), a subordinate CA, and a device certificate. The example then signs the subordinate CA and the device certificate into a certificate hierarchy. This is presented for example purposes only.
+For production environments, we recommend that you purchase an X.509 CA certificate from a public root certificate authority (CA). However, creating your own test certificate hierarchy is adequate for testing IoT Hub device authentication. For more information about getting an X.509 CA certificate from a public root CA, see the [Get an X.509 CA certificate](iot-hub-x509ca-overview.md#get-an-x509-ca-certificate) section of [Authenticate devices using X.509 CA certificates](iot-hub-x509ca-overview.md).
+
+The following example uses [OpenSSL](https://www.openssl.org/) and the [OpenSSL Cookbook](https://www.feistyduck.com/library/openssl-cookbook/online/ch-openssl.html) to create a certificate authority (CA), a subordinate CA, and a device certificate. The example then signs the subordinate CA and the device certificate into a certificate hierarchy. This example is presented for demonstration purposes only.
+
+>[!NOTE]
+>Microsoft provides PowerShell and Bash scripts to help you understand how to create your own X.509 certificates and authenticate them to an IoT hub. The scripts are included with the [Azure IoT Hub Device SDK for C](https://github.com/Azure/azure-iot-sdk-c). The scripts are provided for demonstration purposes only. Certificates created by them must not be used for production. The certificates contain hard-coded passwords ("1234") and expire after 30 days. You must use your own best practices for certificate creation and lifetime management in a production environment. For more information, see [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/main/tools/CACertificates/CACertificateOverview.md) in the GitHub repository for the [Azure IoT Hub Device SDK for C](https://github.com/Azure/azure-iot-sdk-c).
## Step 1 - Create the root CA directory structure
-Create a directory structure for the certification authority.
+Create a directory structure for the certificate authority.
* The *certs* directory stores new certificates.
-* The *db* directory is used for the certificate database.
+* The *db* directory stores the certificate database.
* The *private* directory stores the CA private key. ```bash
First, generate a private key and the certificate signing request (CSR) in the *
openssl req -new -config rootca.conf -out rootca.csr -keyout private/rootca.key ```
-Next, create a self-signed CA certificate. Self-signing is suitable for testing purposes. Specify the `ca_ext` configuration file extensions on the command line. These indicate that the certificate is for a root CA and can be used to sign certificates and certificate revocation lists (CRLs). Sign the certificate, and commit it to the database.
+Next, create a self-signed CA certificate. Self-signing is suitable for testing purposes. Specify the `ca_ext` configuration file extensions on the command line. These extensions indicate that the certificate is for a root CA and can be used to sign certificates and certificate revocation lists (CRLs). Sign the certificate, and commit it to the database.
```bash openssl ca -selfsign -config rootca.conf -in rootca.csr -out rootca.crt -extensions ca_ext
subjectKeyIdentifier = hash
## Step 6 - Create a subordinate CA
-This example shows you how to create a subordinate or registration CA. Because you can use the root CA to sign certificates, creating a subordinate CA isnΓÇÖt strictly necessary. Having a subordinate CA does, however, mimic real world certificate hierarchies in which the root CA is kept offline and subordinate CAs issue client certificates.
+This example shows you how to create a subordinate or registration CA. Because you can use the root CA to sign certificates, creating a subordinate CA isn't strictly necessary. Having a subordinate CA does, however, mimic real world certificate hierarchies in which the root CA is kept offline and a subordinate CA issues client certificates.
From the *subca* directory, use the configuration file to generate a private key and a certificate signing request (CSR).
iot-hub Tutorial X509 Prove Possession https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-prove-possession.md
If you didn't choose to automatically verify your certificate during upload, you
5. There are three ways to generate a verification certificate:
- * If you're using the PowerShell script supplied by Microsoft, run `New-CACertsVerificationCert "<verification code>"` to create a certificate named `VerifyCert4.cer`, replacing `<verification code>` with the previously generated verification code. For more information, see [Tutorial: Use Microsoft-supplied scripts to create test certificates](tutorial-x509-scripts.md).
+ * If you're using the PowerShell script supplied by Microsoft, run `New-CACertsVerificationCert "<verification code>"` to create a certificate named `VerifyCert4.cer`, replacing `<verification code>` with the previously generated verification code. For more information, see [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md).
- * If you're using the Bash script supplied by Microsoft, run `./certGen.sh create_verification_certificate "<verification code>"` to create a certificate named `verification-code.cert.pem`, replacing `<verification code>` with the previously generated verification code. For more information, see [Tutorial: Use Microsoft-supplied scripts to create test certificates](tutorial-x509-scripts.md).
+ * If you're using the Bash script supplied by Microsoft, run `./certGen.sh create_verification_certificate "<verification code>"` to create a certificate named `verification-code.cert.pem`, replacing `<verification code>` with the previously generated verification code. For more information, see [Tutorial: Use OpenSSL to create test certificates](tutorial-x509-openssl.md).
* If you're using OpenSSL to generate your certificates, you must first generate a private key, then generate a certificate signing request (CSR) file. In the following example, replace `<verification code>` with the previously generated verification code:
iot-hub Tutorial X509 Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-scripts.md
- Title: Tutorial - Use Microsoft scripts to create x.509 test certificates for Azure IoT Hub | Microsoft Docs
-description: Tutorial - Use custom scripts to create CA and device certificates for Azure IoT Hub
----- Previously updated : 06/26/2021--
-#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to introduce me to Microsoft scripts that I can use to generate test certificates.
--
-# Tutorial: Use Microsoft-supplied scripts to create test certificates
-
-Microsoft provides PowerShell and Bash scripts to help you understand how to create your own X.509 certificates and authenticate them to an IoT Hub. The scripts are located in a GitHub [repository](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates). They are provided for demonstration purposes only. Certificates created by them must not be used for production. The certificates contain hard-coded passwords (ΓÇ£1234ΓÇ¥) and expire after 30 days. For a production environment, you'll need to use your own best practices for certificate creation and lifetime management.
-
-## PowerShell scripts
-
-### Step 1 - Setup
-
-Download [OpenSSL for Windows](https://www.openssl.org/docs/faq.html#MISC4) or [build it from source](https://www.openssl.org/source/). Then run the preliminary scripts:
-
-1. Copy the scripts from this GitHub [repository](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) into the local directory in which you want to work. All files will be created as children of this directory.
-
-1. Start PowerShell as an administrator.
-
-1. Change to the directory where you loaded the scripts.
-
-1. On the command line, set the environment variable `$ENV:OPENSSL_CONF` to the directory in which the openssl configuration file (openssl.cnf) is located.
-
-1. Run `Set-ExecutionPolicy -ExecutionPolicy Unrestricted` so that PowerShell can run the scripts.
-
-1. Run `. .\ca-certs.ps1`. This brings the functions of the script into the PowerShell global namespace.
-
-1. Run `Test-CACertsPrerequisites`. PowerShell uses the Windows Certificate Store to manage certificates. This command verifies that there won't be name collisions later with existing certificates and that OpenSSL is setup correctly.
-
-### Step 2 - Create certificates
-
-Run `New-CACertsCertChain [ecc|rsa]`. ECC is recommended for CA certificates but not required. This script updates your directory and Windows Certificate store with the following CA and intermediate certificates:
-
-* intermediate1.pem
-* intermediate2.pem
-* intermediate3.pem
-* RootCA.cer
-* RootCA.pem
-
-After running the script, add the new CA certificate (RootCA.pem) to your IoT hub:
-
-1. Go to your IoT hub and navigate to Certificates.
-
-1. Select **Add**.
-
-1. Enter a display name for the CA certificate.
-
-1. To skip proof of possession, check the box next to **Set certificate status to verified on upload**.
-
-1. Upload the CA certificate.
-
-1. Select **Save**.
-
-### (Optional) Step 3 - Prove possession
-
-If you didn't choose to automatically verify the certificate during upload, you manually prove possession:
-
-1. Select the new CA certificate.
-
-1. Select **Generate Verification Code** in the **Certificate Details** dialog. For more information, see [Tutorial: Upload and verify a CA certificate to IoT Hub](tutorial-x509-prove-possession.md).
-
-1. Create a certificate that contains the verification code. For example, if the verification code is `"106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288"`, run the following to create a new certificate in your working directory containing the subject `CN = 106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288`. The script creates a certificate named `VerifyCert4.cer`.
-
- `New-CACertsVerificationCert "106A5SD242AF512B3498BD609C4941E66R34H268DDB3288"`
-
-1. Upload `VerifyCert4.cer` to your IoT hub in the **Certificate Details** dialog.
-
-1. Select **Verify**.
-
-### Step 4 - Create a new device
-
-Create a device for your IoT hub:
-
-1. In your IoT hub, navigate to the **IoT Devices** section.
-
-1. Add a new device with ID `mydevice`.
-
-1. For authentication, choose **X.509 CA Signed**.
-
-1. Run `New-CACertsDevice mydevice` to create a new device certificate. This creates the following files in your working directory:
-
- * `mydevice.pfx`
- * `mydevice-all.pem`
- * `mydevice-private.pem`
- * `mydevice-public.pem`
-
-### Step 5 - Test your device certificate
-
-Go to [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to determine if your device certificate can authenticate to your IoT hub. You will need the PFX version of your certificate, `mydevice.pfx`.
-
-### Step 6 - Cleanup
-
-From the start menu, open **Manage Computer Certificates** and navigate to **Certificates - Local Computer > personal**. Remove certificates issued by "Azure IoT CA TestOnly*". Similarly remove the appropriate certificates from **>Trusted Root Certification Authority > Certificates and >Intermediate Certificate Authorities > Certificates**.
-
-## Bash Scripts
-
-### Step 1 - Setup
-
-1. Start Bash.
-
-1. Change to the directory in which you want to work. All files will be created in this directory.
-
-1. Copy `*.cnf` and `*.sh` to your working directory.
-
-### Step 2 - Create certificates
-
-1. Run `./certGen.sh create_root_and_intermediate`. This creates the following files in the **certs** directory:
-
- * azure-iot-test-only.chain.ca.cert.pem
- * azure-iot-test-only.intermediate.cert.pem
- * azure-iot-test-only.root.ca.cert.pem
-
-1. Go to your IoT hub and navigate to **Certificates**.
-
-1. Select **Add**.
-
-1. Enter a display name for the CA certificate.
-
-1. Upload only the CA certificate to your IoT hub. The name of the certificate is `./certs/azure-iot-test-only.root.ca.cert.pem.`
-
-1. Select **Save**.
-
-### Step 3 - Prove possession
-
-1. Select the new CA certificate created in the preceding step.
-
-1. Select **Generate Verification Code** in the **Certificate Details** dialog. For more information, see [Tutorial: Upload and verify a CA certificate to IoT Hub](tutorial-x509-prove-possession.md).
-
-1. Create a certificate that contains the verification code. For example, if the verification code is `"106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288"`, run the following to create a new certificate in your working directory named `verification-code.cert.pem` which contains the subject `CN = 106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288`.
-
- `./certGen.sh create_verification_certificate "106A5SD242AF512B3498BD6098C4941E66R34H268DDB3288"`
-
-1. Upload the certificate to your IoT hub in the **Certificate Details** dialog.
-
-1. Select **Verify**.
-
-### Step 4 - Create a new device
-
-Create a device for your IoT hub:
-
-1. In your IoT hub, navigate to the IoT Devices section.
-
-1. Add a new device with ID `mydevice`.
-
-1. For authentication, choose **X.509 CA Signed**.
-
-1. Run `./certGen.sh create_device_certificate mydevice` to create a new device certificate. This creates two files named `new-device.cert.pem` and `new-device.cert.pfx` files in your working directory.
-
-### Step 5 - Test your device certificate
-
-Go to [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to determine if your device certificate can authenticate to your IoT hub. You will need the PFX version of your certificate, `new-device.cert.pfx`.
-
-### Step 6 - Cleanup
-
-Because the bash script simply creates certificates in your working directory, just delete them when you are done testing.
-
-## Next Steps
-
-To test your certificate, go to [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to determine if your certificate can authenticate your device to your IoT hub.
iot-hub Tutorial X509 Self Sign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-self-sign.md
- Title: Tutorial - Use OpenSSL to create self signed certificates for Azure IoT Hub | Microsoft Docs
-description: Tutorial - Use OpenSSL to create self-signed X.509 certificates for Azure IoT Hub
----- Previously updated : 12/30/2022--
-#Customer intent: As a developer, I want to be able to use X.509 certificates to authenticate devices to an IoT hub. This step of the tutorial needs to show me how to use OpenSSL to self-sign device certificates.
--
-# Tutorial: Use OpenSSL to create self-signed certificates
-
-You can authenticate a device to your IoT hub using two self-signed device certificates. This type of authentication is sometimes called *thumbprint authentication* because the certificates contain thumbprints (hash values) that you submit to the IoT hub. The following steps show you how to create two self-signed certificates. This type of certificate is typically used for testing.
-
-## Step 1 - Create a key for the first certificate
--
-```bash
-openssl genpkey -out device1.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
-```
-
-## Step 2 - Create a CSR for the first certificate
-
-Make sure that you specify the device ID when prompted.
-
-```bash
-openssl req -new -key device1.key -out device1.csr
-
-Country Name (2 letter code) [XX]:.
-State or Province Name (full name) []:.
-Locality Name (eg, city) [Default City]:.
-Organization Name (eg, company) [Default Company Ltd]:.
-Organizational Unit Name (eg, section) []:.
-Common Name (eg, your name or your server hostname) []:{your-device-id}
-Email Address []:.
-
-Please enter the following 'extra' attributes
-to be sent with your certificate request
-A challenge password []:.
-An optional company name []:.
-```
-
-## Step 3 - Check the CSR
-
-```bash
-openssl req -text -in device1.csr -noout
-```
-
-## Step 4 - Self-sign certificate 1
-
-```bash
-openssl x509 -req -days 365 -in device1.csr -signkey device1.key -out device1.crt
-```
-
-## Step 5 - Create a key for the second certificate
-
-```bash
-openssl genpkey -out device2.key -algorithm RSA -pkeyopt rsa_keygen_bits:2048
-```
-
-## Step 6 - Create a CSR for the second certificate
-
-When prompted, specify the same device ID that you used for certificate 1.
-
-```bash
-openssl req -new -key device2.key -out device2.csr
-
-Country Name (2 letter code) [XX]:.
-State or Province Name (full name) []:.
-Locality Name (eg, city) [Default City]:.
-Organization Name (eg, company) [Default Company Ltd]:.
-Organizational Unit Name (eg, section) []:.
-Common Name (eg, your name or your server hostname) []:{your-device-id}
-Email Address []:.
-
-Please enter the following 'extra' attributes
-to be sent with your certificate request
-A challenge password []:.
-An optional company name []:.
-```
-
-## Step 7 - Self-sign certificate 2
-
-```bash
-openssl x509 -req -days 365 -in device2.csr -signkey device2.key -out device2.crt
-```
-
-## Step 8 - Retrieve the thumbprint for certificate 1
-
-```bash
-openssl x509 -in device1.crt -noout -fingerprint
-```
-
-## Step 9 - Retrieve the thumbprint for certificate 2
-
-```bash
-openssl x509 -in device2.crt -noout -fingerprint
-```
-
-## Step 10 - Create a new IoT device
-
-Navigate to your IoT hub in the Azure portal and create a new IoT device identity with the following characteristics:
-
-* Provide the **Device ID** that matches the subject name of your two certificates.
-* Select the **X.509 Self-Signed** authentication type.
-* Paste the hex string thumbprints that you copied from your device primary and secondary certificates. Make sure that the hex strings have no colon delimiters.
--
-## Next Steps
-
-Go to [Tutorial: Test certificate authentication](tutorial-x509-test-certificate.md) to determine if your certificate can authenticate your device to your IoT hub. The code on that page requires that you use a PFX certificate. Use the following OpenSSL command to convert your device .crt certificate to .pfx format.
-
-```bash
-openssl pkcs12 -export -in device.crt -inkey device.key -out device.pfx
-```
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
Cross-region load balancer is a Layer-4 pass-through network load balancer. This
### Floating IP Floating IP can be configured at both the global IP level and regional IP level. For more information, visit [Multiple frontends for Azure Load Balancer](./load-balancer-multivip-overview.md)
+### Health probes
+Azure cross-region Load Balancer uses the health of the backend regional load balancers to decide where to distribute traffic. The cross-region load balancer performs these health checks automatically every 20 seconds, as long as you've set up health probes on your regional load balancers.
+ ## Build cross region solution on existing Azure Load Balancer The backend pool of cross-region load balancer contains one or more regional load balancers.
Cross-region load balancer routes the traffic to the appropriate regional load b
* UDP traffic isn't supported on Cross-region Load Balancer.
-* A health probe can't be configured currently. A default health probe automatically collects availability information about the regional load balancer every 20 seconds.
-
machine-learning How To Batch Scoring Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-batch-scoring-script.md
The scoring script must contain two methods:
#### The `init` method
-Use the `init()` method for any costly or common preparation. For example, use it to load the model into a global object. This function will be called once at the beginning of the process. You model's files will be available in an environment variable called `AZUREML_MODEL_DIR`. Use this variable to locate the files associated with the model.
+Use the `init()` method for any costly or common preparation. For example, use it to load the model into a global object. This function will be called once at the beginning of the process. Your model's files will be available in an environment variable called `AZUREML_MODEL_DIR`. Use this variable to locate the files associated with the model. Notice that some models may be contained in a folder (in the following example, the model has several files in a folder named `model`). See [how to find out which folder your model uses](#using-models-that-are-folders).
```python def init():
Notice that in this example we are placing the model in a global variable `model
Use the `run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]` method to perform the scoring of each mini-batch generated by the batch deployment. This method is called once for each `mini_batch` generated for your input data. Batch deployments read data in batches according to how the deployment is configured. ```python
-def run(mini_batch):
+def run(mini_batch: List[str]) -> Union[List[Any], pandas.DataFrame]:
results = [] for file in mini_batch:
When writing scoring scripts that work with big amounts of data, you need to tak
Batch deployments distribute work at the file level, which means that a folder containing 100 files with mini-batches of 10 files will generate 10 batches of 10 files each. Notice that this happens regardless of the size of the files involved. If your files are too big to be processed in large mini-batches, we suggest either splitting the files into smaller ones to achieve a higher level of parallelism or decreasing the number of files per mini-batch. At this moment, batch deployment can't account for skews in the file size distribution.
+### Relationship between the degree of parallelism and the scoring script
+
+Your deployment configuration controls the size of each mini-batch and the number of workers on each node. Take them into account when deciding whether you want to read the entire mini-batch to perform inference, run inference file by file, or run inference row by row (for tabular data). See [Running inference at the mini-batch, file or the row level](#running-inference-at-the-mini-batch-file-or-the-row-level) for a description of the different approaches.
+
+When running multiple workers on the same instance, take into account that memory is shared across all the workers. Usually, increasing the number of workers per node should be accompanied by a decrease in the mini-batch size or by a change in the scoring strategy (if data size and compute SKU remain the same).
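As a hedged illustration (not a prescribed configuration), both knobs live in the deployment definition itself. The sketch below assumes the Azure Machine Learning Python SDK v2 (`azure-ai-ml`) and uses placeholder names for the deployment, endpoint, model, environment, and compute.

```python
from azure.ai.ml.entities import BatchDeployment, BatchRetrySettings, CodeConfiguration
from azure.ai.ml.constants import BatchDeploymentOutputAction

# All names below are placeholders. mini_batch_size and max_concurrency_per_instance
# are the two settings discussed above: how many files each run() call receives, and
# how many workers share the memory of a single node.
deployment = BatchDeployment(
    name="my-deployment",
    endpoint_name="my-batch-endpoint",
    model="azureml:my-model:1",
    code_configuration=CodeConfiguration(code="./code", scoring_script="batch_driver.py"),
    environment="azureml:my-environment:1",
    compute="my-cluster",
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=10,
    output_action=BatchDeploymentOutputAction.APPEND_ROW,
    output_file_name="predictions.csv",
    retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
)
# The deployment would then be created with your MLClient instance, for example:
# ml_client.batch_deployments.begin_create_or_update(deployment)
```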
+ ### Running inference at the mini-batch, file or the row level Batch endpoints will call the `run()` function in your scoring script once per mini-batch. However, you will have the power to decide if you want to run the inference over the entire batch, over one file at a time, or over one row at a time (if your data happens to be tabular).
You will typically want to run inference over the batch all at once when you wan
> [!WARNING] > Running inference at the batch level may require close control over the input data size so you can correctly account for memory requirements and avoid out-of-memory exceptions. Whether you can load the entire mini-batch in memory depends on the size of the mini-batch, the size of the instances in the cluster, and the number of workers on each node.
-For an example about how to achieve it see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments).
+For an example of how to do this, see [High throughput deployments](how-to-image-processing-batch.md#high-throughput-deployments). This example processes an entire batch of files at a time.
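For orientation only, a minimal sketch of a `run()` method that scores the whole mini-batch at once might look like the following. It assumes tabular CSV inputs and a global `model` object loaded in `init()` that exposes a pandas-compatible `predict()`.

```python
from typing import Any, List, Union

import pandas as pd


def run(mini_batch: List[str]) -> Union[List[Any], pd.DataFrame]:
    # Load every file in the mini-batch into a single DataFrame and score it in one call.
    # The whole mini-batch is held in memory, so size the mini-batch accordingly.
    data = pd.concat([pd.read_csv(file_path) for file_path in mini_batch], ignore_index=True)
    data["prediction"] = model.predict(data)
    return data
```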
#### File level
One of the easiest ways to perform inference is by iterating over all the files
> [!TIP] > If individual files are too big to be read even one at a time, consider breaking them down into multiple smaller files to allow for better parallelization.
-For an example about how to achieve it see [Image processing with batch deployments](how-to-image-processing-batch.md).
+For an example of how to do this, see [Image processing with batch deployments](how-to-image-processing-batch.md). This example processes one file at a time.
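As a rough sketch of the file-level approach (again assuming CSV inputs and a global `model` loaded in `init()`), the difference from the previous sketch is that each file is read and scored independently inside the loop, so only one file is in memory at a time.

```python
from typing import Any, List, Union

import pandas as pd


def run(mini_batch: List[str]) -> Union[List[Any], pd.DataFrame]:
    results = []
    for file_path in mini_batch:
        # Read and score one file at a time; `model` is the global object loaded in init().
        data = pd.read_csv(file_path)
        data["prediction"] = model.predict(data)
        results.append(data)
    return pd.concat(results, ignore_index=True)
```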
#### Row level (tabular) For models that present challenges in the size of their inputs, you may want to consider running inference at the row level. Your batch deployment will still provide your scoring script with a mini-batch of files, however, you will read one file, one row at a time. This may look inefficient but for some deep learning models may be the only way to perform inference without scaling up your hardware requirements.
-For an example about how to achieve it see [Text processing with batch deployments](how-to-nlp-processing-batch.md).
+For an example of how to do this, see [Text processing with batch deployments](how-to-nlp-processing-batch.md). This example processes one row at a time.
-### Relationship between the degree of parallelism and the scoring script
+### Using models that are folders
+
+When authoring scoring scripts, the environment variable `AZUREML_MODEL_DIR` is typically used in the `init()` function to load the model. However, some models may contain their files inside of a folder. When reading the files from this variable, you may need to account for that. You can identify the folder where your model is placed as follows:
+
+1. Go to [Azure Machine Learning portal](https://ml.azure.com).
+
+1. Go to the section __Models__.
+
+1. Select the model you're trying to deploy, and then select the __Artifacts__ tab.
+
+1. Take note of the folder that is displayed. This folder was specified when the model was registered.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" alt-text="Screenshot showing the folder where the model artifacts are placed.":::
-Your deployment configuration controls the size of each mini-batch and the number of workers on each node. Take into account them when deciding if you want to read the entire mini-batch to perform inference. When running multiple workers on the same instance, take into account that memory will be shared across all the workers. Usually, increasing the number of workers per node should be accompanied by a decrease in the mini-batch size or by a change in the scoring strategy (if data size remains the same).
+Then you can use this path to load the model:
+
+```python
+import os
+
+def init():
+    global model
+
+    # AZUREML_MODEL_DIR is an environment variable created during deployment
+    # The path "model" is the name of the registered model's folder
+    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
+
+    # load_model stands for your framework's model-loading function
+    # (for example, mlflow.pyfunc.load_model for MLflow models)
+    model = load_model(model_path)
+```
## Next steps
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
Last updated 10/10/2022-+
When deploying a machine learning model to a batch endpoint, you can secure its communication using private networks. This article explains the requirements to use batch endpoints in an environment secured by private networks.
-## Prerequisites
+## Securing batch endpoints
-* A secure Azure Machine Learning workspace. For more details about how to achieve it read [Create a secure workspace](tutorial-create-secure-workspace.md).
-* For Azure Container Registry in private networks, please note that there are [some prerequisites about their configuration](how-to-secure-workspace-vnet.md#prerequisites).
+Batch endpoints inherit the networking configuration from the workspace where they're deployed. All batch endpoints created inside a secure workspace are deployed as private batch endpoints by default. To have fully operational batch endpoints working with private networking, follow these steps:
- > [!WARNING]
- > Azure Container Registries with Quarantine feature enabled are not supported by the moment.
+1. Configure your Azure Machine Learning workspace for private networking. For details about how to do it, read [Create a secure workspace](tutorial-create-secure-workspace.md).
-* Ensure blob, file, queue, and table private endpoints are configured for the storage accounts as explained at [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). Batch deployments require all the 4 to properly work.
+2. For Azure Container Registries in private networks, there are [some prerequisites about their configuration](how-to-secure-workspace-vnet.md#prerequisites).
-## Securing batch endpoints
+ > [!WARNING]
+ > Azure Container Registries with the Quarantine feature enabled aren't supported at the moment.
-All the batch endpoints created inside of secure workspace are deployed as private batch endpoints by default. No further configuration is required.
+3. Ensure blob, file, queue, and table private endpoints are configured for the storage accounts as explained at [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). Batch deployments require all four private endpoints to work properly.
-> [!IMPORTANT]
-> When working on a private link-enabled workspaces, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Please use the Azure ML CLI v2 instead for job creation. For more details about how to use it see [Invoke the batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-job).
+4. Create the batch endpoint as you normally would. A minimal example follows.
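A minimal sketch of this last step with the Python SDK v2, assuming an `ml_client` already scoped to the secured workspace (the endpoint name is a placeholder); no network-specific settings are needed on the endpoint itself:

```python
from azure.ai.ml.entities import BatchEndpoint

# The endpoint inherits the private networking configuration of the workspace,
# so no additional network settings are required here.
endpoint = BatchEndpoint(
    name="my-secure-batch-endpoint",
    description="Batch endpoint created in a private link-enabled workspace",
)
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
```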
The following diagram shows what the networking looks like for batch endpoints when deployed in a private workspace: :::image type="content" source="./media/how-to-secure-batch-endpoint/batch-vnet-peering.png" alt-text="Diagram that shows the high level architecture of a secure Azure Machine Learning workspace deployment.":::
-In order to enable the jump host VM (or self-hosted agent VMs if using [Azure Bastion](../bastion/bastion-overview.md)) access to the resources in Azure Machine Learning VNET, the previous architecture uses virtual network peering to seamlessly connect these two virtual networks. Thus the two virtual networks appear as one for connectivity purposes. The traffic between VMs and Azure Machine Learning resources in peered virtual networks uses the Microsoft backbone infrastructure. Like traffic between them in the same network, traffic is routed through Microsoft's private network only.
## Securing batch deployment jobs
The following diagram shows the high level design:
Keep the following considerations in mind when using such an architecture:
-* Put the second set of private endpoints in a different resource group and hence in different private DNS zones. This prevents a name resolution conflict between the set of IPs used for the workspace and the ones used by the client VNets. Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today. Please note that the DNS resolution against a private DNS zone works only from virtual networks that are linked to it. For more details see [recommended zone names for Azure services](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
+* Put the second set of private endpoints in a different resource group and hence in different private DNS zones. Doing so prevents a name resolution conflict between the set of IPs used for the workspace and the ones used by the client VNets. Azure Private DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without the need to add a custom DNS solution. By using private DNS zones, you can use your own custom domain names rather than the Azure-provided names available today. Note that the DNS resolution against a private DNS zone works only from virtual networks that are linked to it. For more details, see [recommended zone names for Azure services](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration).
* For your storage accounts, add 4 private endpoints in each VNet for blob, file, queue, and table as explained at [Secure Azure storage accounts](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
+## Limitations
+
+Consider the following networking limitations when working with batch endpoints:
+
+- If you change the networking configuration of the workspace from public to private, or from private to public, the change doesn't affect the networking configuration of existing batch endpoints. Batch endpoints rely on the workspace configuration at the time of their creation. Recreate your endpoints if you want them to reflect changes you made to the workspace.
+
+- When working in a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Use the Azure ML CLI v2 instead for job creation. For more details about how to use it, see [Run batch endpoint to start a batch scoring job](how-to-use-batch-endpoint.md#run-endpoint-and-configure-inputs-and-outputs).
## Recommended read
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
The following table lists the supported configurations when configuring inbound
| public inbound with public outbound | `public_network_access` is enabled</br>The workspace must also allow public access. | `egress_public_network_access` is enabled | Yes | > [!IMPORTANT]
-> Outbound communication from managed online endpoint deployment is to the _workspace API_. When the endpoint is configured to use __public outbound__, then the workspace must be able to accept that public communication (allow public access).
+> - Outbound communication from managed online endpoint deployment is to the _workspace API_. When the endpoint is configured to use __public outbound__, then the workspace must be able to accept that public communication (allow public access).
+> - When `egress_public_network_access` is disabled, the deployment can only access the resources secured in the VNet. When `egress_public_network_access` is enabled, the deployment can only access the resources with public access, which means it cannot access the resources secured in the VNet.
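As an illustration of where this flag is set, here's a hedged Python SDK v2 sketch of a managed online deployment; the endpoint, model, and environment references are placeholders and `ml_client` is assumed to point to the workspace:

```python
from azure.ai.ml.entities import ManagedOnlineDeployment

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="<endpoint-name>",
    model="<registered-model>:<version>",
    environment="<environment>:<version>",
    instance_type="Standard_DS3_v2",
    instance_count=1,
    # "disabled": outbound traffic stays private and can reach VNet-secured resources.
    # "enabled": outbound traffic is public and can only reach publicly accessible resources.
    egress_public_network_access="disabled",
)
ml_client.online_deployments.begin_create_or_update(deployment)
```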
## End-to-end example
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
az ml compute create --name myci --resource-group rg --workspace-name ws --vnet-
> The following code snippet assumes that `ml_client` points to an Azure Machine Learning workspace that uses a private endpoint to participate in a VNet. For more information on using `ml_client`, see the tutorial [Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md). ```python
-from azure.ai.ml.entities import AmlCompute
-
-# specify aml compute name.
-cpu_compute_target = "cpu-cluster"
-
-try:
- ml_client.compute.get(cpu_compute_target)
-except Exception:
- print("Creating a new cpu compute target...")
- compute = AmlCompute(
- name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4,
- vnet_name="yourvnet", subnet_name="yoursubnet", enable_node_public_ip=False
- )
- ml_client.compute.begin_create_or_update(compute).result()
+from azure.ai.ml.entities import AmlCompute, NetworkSettings
+
+# Name of the compute cluster to create
+cpu_compute_target = "cpu-cluster"
+
+# Replace "<vnet-name>" and "<subnet-name>" with your VNet and subnet.
+network_settings = NetworkSettings(vnet_name="<vnet-name>", subnet="<subnet-name>")
+compute = AmlCompute(
+    name=cpu_compute_target,
+    size="STANDARD_D2_V2",
+    min_instances=0,
+    max_instances=4,
+    enable_node_public_ip=False,
+    network_settings=network_settings
+)
+ml_client.begin_create_or_update(entity=compute)
``` # [Studio](#tab/azure-studio)
az ml compute create --name myci --resource-group rg --workspace-name ws --vnet-
> The following code snippet assumes that `ml_client` points to an Azure Machine Learning workspace that uses a private endpoint to participate in a VNet. For more information on using `ml_client`, see the tutorial [Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md). ```python
-from azure.ai.ml.entities import AmlCompute
-
-# specify aml compute name.
-cpu_compute_target = "cpu-cluster"
-
-try:
- ml_client.compute.get(cpu_compute_target)
-except Exception:
- print("Creating a new cpu compute target...")
- # Replace "yourvnet" and "yoursubnet" with your VNet and subnet.
- compute = AmlCompute(
- name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4,
- vnet_name="yourvnet", subnet_name="yoursubnet"
- )
- ml_client.compute.begin_create_or_update(compute).result()
+from azure.ai.ml.entities import AmlCompute, NetworkSettings
+
+# Name of the compute cluster to create
+cpu_compute_target = "cpu-cluster"
+
+# Replace "<vnet-name>" and "<subnet-name>" with your VNet and subnet.
+network_settings = NetworkSettings(vnet_name="<vnet-name>", subnet="<subnet-name>")
+compute = AmlCompute(
+    name=cpu_compute_target,
+    size="STANDARD_D2_V2",
+    min_instances=0,
+    max_instances=4,
+    network_settings=network_settings
+)
+ml_client.begin_create_or_update(entity=compute)
``` # [Studio](#tab/azure-studio)
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
pip install --ignore-installed [package]
Try creating a separate environment using conda
+## *Make issues*
+### No targets specified and no makefile found
+<!--issueDescription-->
+This issue can happen when you run `make` without specifying a target and no makefile is found in the current directory.
+
+**Potential causes:**
+* Makefile doesn't exist in the current directory
+* No targets are specified
+
+**Affected areas (symptoms):**
+* Failure in building environments from the UI, SDK, and CLI.
+* Failure in running jobs, because the environment is implicitly built in the first step.
+
+**Troubleshooting steps**
+* Ensure that the makefile is spelled correctly
+* Ensure that the makefile exists in the current directory
+* If you have a custom makefile, specify it using `make -f custommakefile`
+* Specify targets in the makefile or in the command line
+* Configure your build and generate a makefile
+* Ensure your makefile is formatted correctly and tabs are used for indentation
+
+**Resources**
+* [GNU Make](https://www.gnu.org/software/make/manual/make.html)
+<!--/issueDescription-->
+ ## *Docker push issues* ### Failed to store Docker image <!--issueDescription-->
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
Open the [Azure ML studio portal](https://ml.azure.com) and sign in using your c
Batch endpoints run on compute clusters. They support both [Azure Machine Learning Compute clusters (AmlCompute)](./how-to-create-attach-compute-cluster.md) and [Kubernetes clusters](./how-to-attach-kubernetes-anywhere.md). Clusters are a shared resource, so one cluster can host one or many batch deployments (along with other workloads if desired).
-Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`.
+This article uses a compute cluster named `batch-cluster`. Adjust the name as needed and reference your compute using `azureml:<your-compute-name>`, or create one as shown.
# [Azure CLI](#tab/azure-cli)
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
| | -- | | `name` | The name of the batch endpoint. Needs to be unique at the Azure region level.| | `description` | The description of the batch endpoint. This property is optional. |
- | `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. |
| `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. | # [Studio](#tab/azure-studio)
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
*You'll create the endpoint in the same step you are creating the deployment later.*
-## Create a scoring script
-
-Batch deployments require a scoring script that indicates how the given model should be executed and how input data must be processed.
-
-> [!NOTE]
-> For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-
-> [!WARNING]
-> If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
-
-In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
-
-__mnist/code/batch_driver.py__
-- ## Create a batch deployment A deployment is a set of resources required for hosting the model that does the actual inferencing. To create a batch deployment, you need all the following items:
A deployment is a set of resources required for hosting the model that does the
* The environment in which the model runs. * The pre-created compute and resource settings.
+1. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch endpoints support scripts created in Python. In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
+
+ > [!NOTE]
+ > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+ > [!WARNING]
+ > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
+
+ __mnist/code/batch_driver.py__
+
+ :::code language="python" source="~/azureml-examples-main/sdk/python/endpoints/batch/mnist/code/batch_driver.py" :::
1. Create an environment where your batch deployment will run. Such an environment needs to include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependencies your code requires to run. In this case, the dependencies are captured in a `conda.yml` file: __mnist/environment/conda.yml__
A deployment is a set of resources required for hosting the model that does the
:::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
-## Invoke the batch endpoint to start a batch job
+## Run endpoint and configure inputs and outputs
Invoking a batch endpoint triggers a batch scoring job. The invoke response returns a job `name` that you can use to track the batch scoring progress. The job runs for some time: it splits the inputs into multiple `mini_batch` instances and processes them in parallel on the compute cluster. The outputs are stored in cloud storage, either in the workspace's default blob storage or in the storage you specified.
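For instance, a minimal sketch with the Python SDK v2 looks like the following; the endpoint name and input path are placeholders and `ml_client` is assumed to point to the workspace:

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Trigger a batch scoring job against the endpoint's default deployment
job = ml_client.batch_endpoints.invoke(
    endpoint_name="<endpoint-name>",
    input=Input(type=AssetTypes.URI_FOLDER, path="<path-or-url-to-input-data>"),
)

# The returned job name is what you use to track the scoring progress
ml_client.jobs.stream(job.name)
```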
machine-learning How To Use Mlflow Configure Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-configure-tracking.md
However, if you are working outside of Azure Machine Learning (like your local m
## Prerequisites
-You will need the following prerequisites to follow this tutorial:
+You need the following prerequisites to follow this tutorial:
[!INCLUDE [mlflow-prereqs](../../includes/machine-learning-mlflow-prereqs.md)] ## Configure MLflow tracking URI
-To connect MLflow to an Azure Machine Learning workspace you will need the tracking URI for the workspace. Each workspace has its own tracking URI and it has the protocol `azureml://`.
+To connect MLflow to an Azure Machine Learning workspace, you need the tracking URI for the workspace. Each workspace has its own tracking URI, which uses the `azureml://` protocol.
## Configure authentication
Once the tracking is set, you'll also need to configure how the authentication n
The Azure Machine Learning plugin for MLflow supports several authentication mechanisms through the package `azure-identity`, which is installed as a dependency for the plugin `azureml-mlflow`. The following authentication methods are tried one by one until one of them succeeds:
-1. __Environment__: it will read account information specified via environment variables and use it to authenticate.
-1. __Managed Identity__: If the application is deployed to an Azure host with Managed Identity enabled, it will authenticate with it.
-1. __Azure CLI__: if a user has signed in via the Azure CLI `az login` command, it will authenticate as that user.
-1. __Azure PowerShell__: if a user has signed in via Azure PowerShell's `Connect-AzAccount` command, it will authenticate as that user.
-1. __Interactive browser__: it will interactively authenticate a user via the default browser.
+1. __Environment__: it reads account information specified via environment variables and uses it to authenticate.
+1. __Managed Identity__: if the application is deployed to an Azure host with Managed Identity enabled, it authenticates with it.
+1. __Azure CLI__: if a user has signed in via the Azure CLI `az login` command, it authenticates as that user.
+1. __Azure PowerShell__: if a user has signed in via Azure PowerShell's `Connect-AzAccount` command, it authenticates as that user.
+1. __Interactive browser__: it interactively authenticates a user via the default browser.
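For example, the environment-based option can be wired up as follows. This is a sketch that assumes a service principal which already has access to the workspace; all values are placeholders:

```python
import os

import mlflow

# Credentials picked up by azure-identity's EnvironmentCredential
os.environ["AZURE_TENANT_ID"] = "<tenant-id>"
os.environ["AZURE_CLIENT_ID"] = "<client-id>"
os.environ["AZURE_CLIENT_SECRET"] = "<client-secret>"

# Point MLflow at the workspace (azureml:// tracking URI)
mlflow.set_tracking_uri("<azureml-tracking-uri>")
mlflow.set_experiment("experiment_with_mlflow")
```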
If you'd rather use a certificate instead of a secret, you can configure the environment variables `AZURE_CLIENT_CERTIFICATE_PATH` to the path to a `PEM` or `PKCS12` certificate file (including private key) and `AZURE_CLIENT_CERTIFICATE_PASSWORD` with the password of the certificate file, if any.
export MLFLOW_EXPERIMENT_NAME="experiment_with_mlflow"
+## Non-public Azure Clouds support
+
+The Azure Machine Learning plugin for MLflow is configured by default to work with the global Azure cloud. However, you can configure the Azure cloud you are using by setting the environment variable `AZUREML_CURRENT_CLOUD`.
+
+# [MLflow SDK](#tab/mlflow)
+
+```Python
+import os
+
+os.environ["AZUREML_CURRENT_CLOUD"] = "AzureChinaCloud"
+```
+
+# [Using environment variables](#tab/environ)
+
+```bash
+export AZUREML_CURRENT_CLOUD="AzureChinaCloud"
+```
+++
+You can identify the cloud you are using with the following Azure CLI command:
+
+```bash
+az cloud list
+```
+
+The current cloud has the value `IsActive` set to `True`.
+ ## Next steps Now that your environment is connected to your workspace in Azure Machine Learning, you can start to work with it.
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
Previously updated : 07/07/2022 Last updated : 02/16/2023 # Quickstart: Use an ARM template to create an Azure Database for MySQL - Flexible Server
Last updated 07/07/2022
## Create server with public access
-Create a _mysql-flexible-server-template.json_ file and copy this JSON script to create a server using public access connectivity method and also create a database on the server.
+Create an **azuredeploy.json** file with the following content to create a server using the public access connectivity method and also create a database on the server. Update the **firewallRules** default value if needed.
```json {
- "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
+ "resourceNamePrefix": {
+ "type": "string",
+ "metadata": {
+ "description": "Provide a prefix for creating resource names."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
"administratorLogin": { "type": "string" }, "administratorLoginPassword": { "type": "securestring" },
- "location": {
- "type": "string"
- },
- "serverName": {
- "type": "string"
+ "firewallRules": {
+ "type": "array",
+ "defaultValue": [
+ {
+ "name": "rule1",
+ "startIPAddress": "192.168.0.1",
+ "endIPAddress": "192.168.0.255"
+ },
+ {
+ "name": "rule2",
+ "startIPAddress": "192.168.1.1",
+ "endIPAddress": "192.168.1.255"
+ }
+ ]
}, "serverEdition": { "type": "string", "defaultValue": "Burstable",
+ "allowedValues": [
+ "Burstable",
+ "Generalpurpose",
+ "MemoryOptimized"
+ ],
"metadata": {
- "description": "The tier of the particular SKU, e.g. Burstable, GeneralPurpose, MemoryOptimized. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
+ "description": "The tier of the particular SKU. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
} },
- "skuName": {
+ "version": {
"type": "string",
- "defaultValue": "Standard_B1ms",
+ "defaultValue": "8.0.21",
+ "allowedValues": [
+ "5.7",
+ "8.0.21"
+ ],
"metadata": {
- "description": "The name of the sku, e.g. Standard_D32ds_v4."
+ "description": "Server version"
} },
- "storageSizeGB": {
- "type": "int"
- },
- "storageIops": {
- "type": "int"
- },
- "storageAutogrow": {
- "type": "string",
- "defaultValue": "Enabled"
- },
"availabilityZone": { "type": "string",
+ "defaultValue": "1",
"metadata": { "description": "Availability Zone information of the server. (Leave blank for No Preference)." } },
- "version": {
- "type": "string"
- },
- "tags": {
- "type": "object",
- "defaultValue": {}
- },
"haEnabled": { "type": "string", "defaultValue": "Disabled",
+ "allowedValues": [
+ "Disabled",
+ "SameZone",
+ "ZoneRedundant"
+ ],
"metadata": { "description": "High availability mode for a server : Disabled, SameZone, or ZoneRedundant" } }, "standbyAvailabilityZone": { "type": "string",
+ "defaultValue": "2",
"metadata": { "description": "Availability zone of the standby server." } },
- "firewallRules": {
- "type": "object",
- "defaultValue": {}
+ "storageSizeGB": {
+ "type": "int",
+ "defaultValue": 20
+ },
+ "storageIops": {
+ "type": "int",
+ "defaultValue": 360
+ },
+ "storageAutogrow": {
+ "type": "string",
+ "defaultValue": "Enabled",
+ "allowedValues": [
+ "Enabled",
+ "Disabled"
+ ]
+ },
+ "skuName": {
+ "type": "string",
+ "defaultValue": "Standard_B1ms",
+ "metadata": {
+ "description": "The name of the sku, e.g. Standard_D32ds_v4."
+ }
}, "backupRetentionDays": {
- "type": "int"
+ "type": "int",
+ "defaultValue": 7
}, "geoRedundantBackup": {
- "type": "string"
+ "type": "string",
+ "defaultValue": "Disabled",
+ "allowedValues": [
+ "Disabled",
+ "Enabled"
+ ]
+ },
+ "serverName": {
+ "type": "string",
+ "defaultValue": "[format('{0}mysqlserver', parameters('resourceNamePrefix'))]"
}, "databaseName": {
- "type": "string"
+ "type": "string",
+ "defaultValue": "[format('{0}mysqldb', parameters('resourceNamePrefix'))]"
} },
- "variables": {
- "api": "2021-05-01",
- "firewallRules": "[parameters('firewallRules').rules]"
- },
"resources": [ { "type": "Microsoft.DBforMySQL/flexibleServers",
- "apiVersion": "[variables('api')]",
- "location": "[parameters('location')]",
+ "apiVersion": "2021-12-01-preview",
"name": "[parameters('serverName')]",
+ "location": "[parameters('location')]",
"sku": { "name": "[parameters('skuName')]", "tier": "[parameters('serverEdition')]"
Create a _mysql-flexible-server-template.json_ file and copy this JSON script to
"mode": "[parameters('haEnabled')]", "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]" },
- "Storage": {
+ "storage": {
"storageSizeGB": "[parameters('storageSizeGB')]", "iops": "[parameters('storageIops')]",
- "autogrow": "[parameters('storageAutogrow')]"
+ "autoGrow": "[parameters('storageAutogrow')]"
},
- "Backup": {
+ "backup": {
"backupRetentionDays": "[parameters('backupRetentionDays')]", "geoRedundantBackup": "[parameters('geoRedundantBackup')]" }
+ }
+ },
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers/databases",
+ "apiVersion": "2021-12-01-preview",
+ "name": "[format('{0}/{1}', parameters('serverName'), parameters('databaseName'))]",
+ "properties": {
+ "charset": "utf8",
+ "collation": "utf8_general_ci"
},
- "tags": "[parameters('tags')]"
+ "dependsOn": [
+ "[resourceId('Microsoft.DBforMySQL/flexibleServers', parameters('serverName'))]"
+ ]
}, {
- "condition": "[greater(length(variables('firewallRules')), 0)]",
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
- "name": "[concat('firewallRules-', copyIndex())]",
"copy": {
- "count": "[if(greater(length(variables('firewallRules')), 0), length(variables('firewallRules')), 1)]",
- "mode": "Serial",
- "name": "firewallRulesIterator"
+ "name": "createFirewallRules",
+ "count": "[length(range(0, if(greater(length(parameters('firewallRules')), 0), length(parameters('firewallRules')), 1)))]",
+ "mode": "serial",
+ "batchSize": 1
},
- "dependsOn": [
- "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
- ],
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2020-10-01",
+ "name": "[format('firewallRules-{0}', range(0, if(greater(length(parameters('firewallRules')), 0), length(parameters('firewallRules')), 1))[copyIndex()])]",
"properties": {
+ "expressionEvaluationOptions": {
+ "scope": "inner"
+ },
"mode": "Incremental",
+ "parameters": {
+ "ip": {
+ "value": "[parameters('firewallRules')[range(0, if(greater(length(parameters('firewallRules')), 0), length(parameters('firewallRules')), 1))[copyIndex()]]]"
+ },
+ "serverName": {
+ "value": "[parameters('serverName')]"
+ }
+ },
"template": {
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
+ "parameters": {
+ "serverName": {
+ "type": "string"
+ },
+ "ip": {
+ "type": "object"
+ }
+ },
"resources": [ { "type": "Microsoft.DBforMySQL/flexibleServers/firewallRules",
- "name": "[concat(parameters('serverName'),'/',variables('firewallRules')[copyIndex()].name)]",
- "apiVersion": "[variables('api')]",
+ "apiVersion": "2021-12-01-preview",
+ "name": "[format('{0}/{1}', parameters('serverName'), parameters('ip').name)]",
"properties": {
- "StartIpAddress": "[variables('firewallRules')[copyIndex()].startIPAddress]",
- "EndIpAddress": "[variables('firewallRules')[copyIndex()].endIPAddress]"
+ "startIpAddress": "[parameters('ip').startIPAddress]",
+ "endIpAddress": "[parameters('ip').endIPAddress]"
} } ] }
- }
- },
- {
- "type": "Microsoft.DBforMySQL/flexibleServers/databases",
- "apiVersion": "[variables('api')]",
- "name": "[concat(parameters('serverName'),'/',parameters('databaseName'))]",
+ },
"dependsOn": [
- "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
- ],
- "properties": {
- "charset": "utf8",
- "collation": "utf8_general_ci"
- }
+ "[resourceId('Microsoft.DBforMySQL/flexibleServers', parameters('serverName'))]"
+ ]
} ] }
Create a _mysql-flexible-server-template.json_ file and copy this JSON script to
## Create a server with private access
-Create a _mysql-flexible-server-template.json_ file and copy this JSON script to create a server using private access connectivity method inside a virtual network.
+Create an **azuredeploy.json** file with the following content to create a server using the private access connectivity method inside a virtual network.
```json {
- "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": {
+ "resourceNamePrefix": {
+ "type": "string",
+ "metadata": {
+ "description": "Provide a prefix for creating resource names."
+ }
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
"administratorLogin": { "type": "string" }, "administratorLoginPassword": { "type": "securestring" },
- "location": {
- "type": "string"
- },
- "serverName": {
- "type": "string"
+ "firewallRules": {
+ "type": "array",
+ "defaultValue": [
+ {
+ "name": "rule1",
+ "startIPAddress": "192.168.0.1",
+ "endIPAddress": "192.168.0.255"
+ },
+ {
+ "name": "rule2",
+ "startIPAddress": "192.168.1.1",
+ "endIPAddress": "192.168.1.255"
+ }
+ ]
}, "serverEdition": { "type": "string", "defaultValue": "Burstable",
+ "allowedValues": [
+ "Burstable",
+ "Generalpurpose",
+ "MemoryOptimized"
+ ],
"metadata": {
- "description": "The tier of the particular SKU, e.g. Burstable, GeneralPurpose, MemoryOptimized. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
+ "description": "The tier of the particular SKU. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
} },
- "skuName": {
+ "version": {
"type": "string",
- "defaultValue": "Standard_B1ms",
+ "defaultValue": "8.0.21",
+ "allowedValues": [
+ "5.7",
+ "8.0.21"
+ ],
"metadata": {
- "description": "The name of the sku, e.g. Standard_D32ds_v4."
+ "description": "Server version"
} },
- "storageSizeGB": {
- "type": "int"
- },
- "storageIops": {
- "type": "int"
- },
- "storageAutogrow": {
- "type": "string",
- "defaultValue": "Enabled"
- },
"availabilityZone": { "type": "string",
+ "defaultValue": "1",
"metadata": { "description": "Availability Zone information of the server. (Leave blank for No Preference)." } },
- "version": {
- "type": "string"
- },
- "tags": {
- "type": "object",
- "defaultValue": {}
- },
"haEnabled": { "type": "string", "defaultValue": "Disabled",
+ "allowedValues": [
+ "Disabled",
+ "SameZone",
+ "ZoneRedundant"
+ ],
"metadata": { "description": "High availability mode for a server : Disabled, SameZone, or ZoneRedundant" } }, "standbyAvailabilityZone": { "type": "string",
+ "defaultValue": "2",
"metadata": { "description": "Availability zone of the standby server." } },
- "vnetName": {
- "type": "string",
- "defaultValue": "azure_mysql_vnet",
- "metadata": { "description": "Virtual Network Name" }
+ "storageSizeGB": {
+ "type": "int",
+ "defaultValue": 20
},
- "subnetName": {
- "type": "string",
- "defaultValue": "azure_mysql_subnet",
- "metadata": { "description": "Subnet Name" }
+ "storageIops": {
+ "type": "int",
+ "defaultValue": 360
},
- "vnetAddressPrefix": {
+ "storageAutogrow": {
"type": "string",
- "defaultValue": "10.0.0.0/16",
- "metadata": { "description": "Virtual Network Address Prefix" }
+ "defaultValue": "Enabled",
+ "allowedValues": [
+ "Enabled",
+ "Disabled"
+ ]
},
- "subnetPrefix": {
+ "skuName": {
"type": "string",
- "defaultValue": "10.0.0.0/24",
- "metadata": { "description": "Subnet Address Prefix" }
+ "defaultValue": "Standard_B1ms",
+ "metadata": {
+ "description": "The name of the sku, e.g. Standard_D32ds_v4."
+ }
}, "backupRetentionDays": {
- "type": "int"
+ "type": "int",
+ "defaultValue": 7
}, "geoRedundantBackup": {
- "type": "string"
+ "type": "string",
+ "defaultValue": "Disabled",
+ "allowedValues": [
+ "Disabled",
+ "Enabled"
+ ]
+ },
+ "serverName": {
+ "type": "string",
+ "defaultValue": "[format('{0}mysqlserver', parameters('resourceNamePrefix'))]"
}, "databaseName": {
- "type": "string"
+ "type": "string",
+ "defaultValue": "[format('{0}mysqldb', parameters('resourceNamePrefix'))]"
} },
- "variables": {
- "api": "2021-05-01"
- },
"resources": [
- {
- "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2021-05-01",
- "name": "[parameters('vnetName')]",
- "location": "[parameters('location')]",
- "properties": {
- "addressSpace": {
- "addressPrefixes": [
- "[parameters('vnetAddressPrefix')]"
- ]
- }
- }
- },
- {
- "type": "Microsoft.Network/virtualNetworks/subnets",
- "apiVersion": "2021-05-01",
- "name": "[concat(parameters('vnetName'),'/',parameters('subnetName'))]",
- "dependsOn": [
- "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]"
- ],
- "properties": {
- "addressPrefix": "[parameters('subnetPrefix')]",
- "delegations": [
- {
- "name": "MySQLflexibleServers",
- "properties": {
- "serviceName": "Microsoft.DBforMySQL/flexibleServers"
- }
- }
- ]
- }
- },
{ "type": "Microsoft.DBforMySQL/flexibleServers",
- "apiVersion": "[variables('api')]",
- "location": "[parameters('location')]",
+ "apiVersion": "2021-12-01-preview",
"name": "[parameters('serverName')]",
- "dependsOn": [
- "[resourceID('Microsoft.Network/virtualNetworks/subnets/', parameters('vnetName'), parameters('subnetName'))]"
- ],
+ "location": "[parameters('location')]",
"sku": { "name": "[parameters('skuName')]", "tier": "[parameters('serverEdition')]"
Create a _mysql-flexible-server-template.json_ file and copy this JSON script to
"mode": "[parameters('haEnabled')]", "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]" },
- "Storage": {
+ "storage": {
"storageSizeGB": "[parameters('storageSizeGB')]", "iops": "[parameters('storageIops')]",
- "autogrow": "[parameters('storageAutogrow')]"
- },
- "network": {
- "delegatedSubnetResourceId": "[resourceID('Microsoft.Network/virtualNetworks/subnets', parameters('vnetName'), parameters('subnetName'))]"
+ "autoGrow": "[parameters('storageAutogrow')]"
},
- "Backup": {
+ "backup": {
"backupRetentionDays": "[parameters('backupRetentionDays')]", "geoRedundantBackup": "[parameters('geoRedundantBackup')]" }
- },
- "tags": "[parameters('tags')]"
+ }
}, { "type": "Microsoft.DBforMySQL/flexibleServers/databases",
- "apiVersion": "[variables('api')]",
- "name": "[concat(parameters('serverName'),'/',parameters('databaseName'))]",
- "dependsOn": [
- "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
- ],
+ "apiVersion": "2021-12-01-preview",
+ "name": "[format('{0}/{1}', parameters('serverName'), parameters('databaseName'))]",
"properties": { "charset": "utf8", "collation": "utf8_general_ci"
- }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.DBforMySQL/flexibleServers', parameters('serverName'))]"
+ ]
+ },
+ {
+ "copy": {
+ "name": "createFirewallRules",
+ "count": "[length(range(0, if(greater(length(parameters('firewallRules')), 0), length(parameters('firewallRules')), 1)))]",
+ "mode": "serial",
+ "batchSize": 1
+ },
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2020-10-01",
+ "name": "[format('firewallRules-{0}', range(0, if(greater(length(parameters('firewallRules')), 0), length(parameters('firewallRules')), 1))[copyIndex()])]",
+ "properties": {
+ "expressionEvaluationOptions": {
+ "scope": "inner"
+ },
+ "mode": "Incremental",
+ "parameters": {
+ "ip": {
+ "value": "[parameters('firewallRules')[range(0, if(greater(length(parameters('firewallRules')), 0), length(parameters('firewallRules')), 1))[copyIndex()]]]"
+ },
+ "serverName": {
+ "value": "[parameters('serverName')]"
+ }
+ },
+ "template": {
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "serverName": {
+ "type": "string"
+ },
+ "ip": {
+ "type": "object"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers/firewallRules",
+ "apiVersion": "2021-12-01-preview",
+ "name": "[format('{0}/{1}', parameters('serverName'), parameters('ip').name)]",
+ "properties": {
+ "startIpAddress": "[parameters('ip').startIPAddress]",
+ "endIpAddress": "[parameters('ip').endIPAddress]"
+ }
+ }
+ ]
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.DBforMySQL/flexibleServers', parameters('serverName'))]"
+ ]
}- ] } ``` ## Deploy the template
-Select **Try it** from the following PowerShell code block to open [Azure Cloud Shell](../../cloud-shell/overview.md).
+Deploy the template using either Azure CLI or Azure PowerShell.
+
+# [CLI](#tab/CLI)
-```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter a name for the new Azure Database for MySQL server"
-$resourceGroupName = Read-Host -Prompt "Enter a name for the new resource group where the server will exist"
-$location = Read-Host -Prompt "Enter an Azure region (for example, centralus) for the resource group"
-$adminUser = Read-Host -Prompt "Enter the Azure Database for MySQL server's administrator account name"
-$adminPassword = Read-Host -Prompt "Enter the administrator password" -AsSecureString
+```azurecli
+az group create --name exampleRG --location eastus
+az deployment group create --resource-group exampleRG --template-file azuredeploy.json
+```
-New-AzResourceGroup -Name $resourceGroupName -Location $location # Use this command when you need to create a new resource group for your deployment
-New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName `
- -TemplateFile "D:\Azure\Templates\EngineeringSite.json
- -serverName $serverName `
- -administratorLogin $adminUser `
- -administratorLoginPassword $adminPassword
+# [PowerShell](#tab/PowerShell)
-Read-Host -Prompt "Press [ENTER] to continue ..."
+```azurepowershell
+New-AzResourceGroup -Name exampleRG -Location eastus
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile azuredeploy.json
```
-## Review deployed resources
+
-Follow these steps to verify if your server was created in Azure.
+When prompted, enter values for the parameters that don't have defaults, such as **resourceNamePrefix**, **administratorLogin**, and **administratorLoginPassword**. When the deployment finishes, you should see a message indicating the deployment succeeded.
-### Azure portal
+## Review deployed resources
-1. In the [Azure portal](https://portal.azure.com), search for and select **Azure Database for MySQL servers**.
-1. In the database list, select your new server. The **Overview** page for your new Azure Database for MySQL server appears.
+Follow these steps to verify if your server was created in the resource group.
-### PowerShell
+# [CLI](#tab/CLI)
-You'll have to enter the name of the new server to view the details of your Azure Database for MySQL Flexible Server.
+```azurecli
+az resource list --resource-group exampleRG
-```azurepowershell-interactive
-$serverName = Read-Host -Prompt "Enter the name of your Azure Database for MySQL server"
-Get-AzResource -ResourceType "Microsoft.DBforMySQL/flexibleServers" -Name $serverName | ft
-Write-Host "Press [ENTER] to continue..."
```
-### CLI
+# [PowerShell](#tab/PowerShell)
-You'll have to enter the name and the resource group of the new server to view details about your Azure Database for MySQL Flexible Server.
-
-```azurecli-interactive
-echo "Enter your Azure Database for MySQL server name:" &&
-read serverName &&
-echo "Enter the resource group where the Azure Database for MySQL server exists:" &&
-read resourcegroupName &&
-az resource show --resource-group $resourcegroupName --name $serverName --resource-type "Microsoft.DbForMySQL/flexibleServers"
+```azurepowershell
+Get-AzResource -ResourceGroupName exampleRG
```+ ## Clean up resources
-Keep this resource group, server, and single database if you want to go to the [Next steps](#next-steps). The next steps show you how to connect and query your database using different methods.
-
-To delete the resource group:
+To delete the resource group and the resources contained in the resource group:
-### Azure portal
+# [CLI](#tab/CLI)
-1. In the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
-1. In the resource group list, choose the name of your resource group.
-1. In the **Overview** page of your resource group, select **Delete resource group**.
-1. In the confirmation dialog box, type the name of your resource group, and then select **Delete**.
-
-### PowerShell
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-Remove-AzResourceGroup -Name $resourceGroupName
-Write-Host "Press [ENTER] to continue..."
+```azurecli
+az group delete --name exampleRG
```
-### CLI
+# [PowerShell](#tab/PowerShell)
-```azurecli-interactive
-echo "Enter the Resource Group name:" &&
-read resourceGroupName &&
-az group delete --name $resourceGroupName &&
-echo "Press [ENTER] to continue ..."
+```azurepowershell
+Remove-AzResourceGroup -Name exampleRG
``` + ## Next steps For a step-by-step tutorial that guides you through the process of creating an ARM template, see:
mysql Quickstart Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-bicep.md
+
+ Title: 'Quickstart: Create an Azure DB for MySQL - Flexible Server - Bicep'
+description: In this Quickstart, learn how to create an Azure Database for MySQL - Flexible Server by using Bicep.
+++++ Last updated : 02/16/2023++
+# Quickstart: Use a Bicep file to create an Azure Database for MySQL - Flexible Server
++++
+## Prerequisites
+
+- An Azure account with an active subscription.
++
+## Create server with public access
+
+Create a **main.bicep** file and a **CreateFirewallRules.bicep** file with the following content to create a server using the public access connectivity method and also create a database on the server. Update the **firewallRules** default value if needed.
+
+**main.bicep**
+
+```bicep
+@description('Provide a prefix for creating resource names.')
+param resourceNamePrefix string
+
+@description('Provide the location for all the resources.')
+param location string = resourceGroup().location
+
+@description('Provide the administrator login name for the MySQL server.')
+param administratorLogin string
+
+@description('Provide the administrator login password for the MySQL server.')
+@secure()
+param administratorLoginPassword string
+
+@description('Provide an array of firewall rules to be applied to the MySQL server.')
+param firewallRules array = [
+ {
+ name: 'rule1'
+ startIPAddress: '192.168.0.1'
+ endIPAddress: '192.168.0.255'
+ }
+ {
+ name: 'rule2'
+ startIPAddress: '192.168.1.1'
+ endIPAddress: '192.168.1.255'
+ }
+]
+
+@description('The tier of the particular SKU. High Availability is available only for GeneralPurpose and MemoryOptimized sku.')
+@allowed([
+ 'Burstable'
+ 'Generalpurpose'
+ 'MemoryOptimized'
+])
+param serverEdition string = 'Burstable'
+
+@description('Server version')
+@allowed([
+ '5.7'
+ '8.0.21'
+])
+param version string = '8.0.21'
+
+@description('Availability Zone information of the server. (Leave blank for No Preference).')
+param availabilityZone string = '1'
+
+@description('High availability mode for a server : Disabled, SameZone, or ZoneRedundant')
+@allowed([
+ 'Disabled'
+ 'SameZone'
+ 'ZoneRedundant'
+])
+param haEnabled string = 'Disabled'
+
+@description('Availability zone of the standby server.')
+param standbyAvailabilityZone string = '2'
+
+param storageSizeGB int = 20
+param storageIops int = 360
+@allowed([
+ 'Enabled'
+ 'Disabled'
+])
+param storageAutogrow string = 'Enabled'
+
+@description('The name of the sku, e.g. Standard_D32ds_v4.')
+param skuName string = 'Standard_B1ms'
+
+param backupRetentionDays int = 7
+@allowed([
+ 'Disabled'
+ 'Enabled'
+])
+param geoRedundantBackup string = 'Disabled'
+
+param serverName string = '${resourceNamePrefix}sqlserver'
+param databaseName string = '${resourceNamePrefix}mysqldb'
+
+resource server 'Microsoft.DBforMySQL/flexibleServers@2021-12-01-preview' = {
+ location: location
+ name: serverName
+ sku: {
+ name: skuName
+ tier: serverEdition
+ }
+ properties: {
+ version: version
+ administratorLogin: administratorLogin
+ administratorLoginPassword: administratorLoginPassword
+ availabilityZone: availabilityZone
+ highAvailability: {
+ mode: haEnabled
+ standbyAvailabilityZone: standbyAvailabilityZone
+ }
+ storage: {
+ storageSizeGB: storageSizeGB
+ iops: storageIops
+ autoGrow: storageAutogrow
+ }
+ backup: {
+ backupRetentionDays: backupRetentionDays
+ geoRedundantBackup: geoRedundantBackup
+ }
+ }
+}
+
+@batchSize(1)
+module createFirewallRules './CreateFirewallRules.bicep' = [for i in range(0, ((length(firewallRules) > 0) ? length(firewallRules) : 1)): {
+ name: 'firewallRules-${i}'
+ params: {
+ ip: firewallRules[i]
+ serverName: serverName
+ }
+ dependsOn: [
+ server
+ ]
+}]
+
+resource database 'Microsoft.DBforMySQL/flexibleServers/databases@2021-12-01-preview' = {
+ parent: server
+ name: databaseName
+ properties: {
+ charset: 'utf8'
+ collation: 'utf8_general_ci'
+ }
+}
+```
+
+**CreateFirewallRules.bicep**
+
+```bicep
+param serverName string
+param ip object
+
+resource firewallRules 'Microsoft.DBforMySQL/flexibleServers/firewallRules@2021-12-01-preview' = {
+ name: '${serverName}/${ip.name}'
+ properties: {
+ startIpAddress: ip.startIPAddress
+ endIpAddress: ip.endIPAddress
+ }
+}
+```
+
+Save the two Bicep files in the same directory.
+
+## Create a server with private access
+
+Create a **main.bicep** file with the following content to create a server using the private access connectivity method inside a virtual network.
+
+```bicep
+@description('Provide a prefix for creating resource names.')
+param resourceNamePrefix string
+
+@description('Provide the location for all the resources.')
+param location string = resourceGroup().location
+
+@description('Provide the administrator login name for the MySQL server.')
+param administratorLogin string
+
+@description('Provide the administrator login password for the MySQL server.')
+@secure()
+param administratorLoginPassword string
+
+@description('Provide Virtual Network Address Prefix')
+param vnetAddressPrefix string = '10.0.0.0/16'
+
+@description('Provide Subnet Address Prefix')
+param subnetPrefix string = '10.0.0.0/24'
+
+@description('Provide the tier of the particular SKU. High Availability is available only for GeneralPurpose and MemoryOptimized sku.')
+@allowed([
+ 'Burstable'
+ 'Generalpurpose'
+ 'MemoryOptimized'
+])
+param serverEdition string = 'Burstable'
+
+@description('Provide Server version')
+@allowed([
+ '5.7'
+ '8.0.21'
+])
+param serverVersion string = '8.0.21'
+
+@description('Provide the availability zone information of the server. (Leave blank for No Preference).')
+param availabilityZone string = '1'
+
+@description('Provide the high availability mode for a server : Disabled, SameZone, or ZoneRedundant')
+@allowed([
+ 'Disabled'
+ 'SameZone'
+ 'ZoneRedundant'
+])
+param haEnabled string = 'Disabled'
+
+@description('Provide the availability zone of the standby server.')
+param standbyAvailabilityZone string = '2'
+
+param storageSizeGB int = 20
+param storageIops int = 360
+@allowed([
+ 'Enabled'
+ 'Disabled'
+])
+param storageAutogrow string = 'Enabled'
+
+@description('The name of the sku, e.g. Standard_D32ds_v4.')
+param skuName string = 'Standard_B1ms'
+
+param backupRetentionDays int = 7
+@allowed([
+ 'Disabled'
+ 'Enabled'
+])
+param geoRedundantBackup string = 'Disabled'
+
+var serverName = '${resourceNamePrefix}mysqlserver'
+var databaseName = '${resourceNamePrefix}mysqldatabase'
+var vnetName = '${resourceNamePrefix}mysqlvnet'
+var subnetName = '${resourceNamePrefix}mysqlsubnet'
+
+resource vnet 'Microsoft.Network/virtualNetworks@2022-07-01' = {
+ name: vnetName
+ location: location
+ properties: {
+ addressSpace: {
+ addressPrefixes: [
+ vnetAddressPrefix
+ ]
+ }
+ }
+}
+
+resource subnet 'Microsoft.Network/virtualNetworks/subnets@2022-07-01' = {
+ parent: vnet
+ name: subnetName
+ properties: {
+ addressPrefix: subnetPrefix
+ delegations: [
+ {
+ name: 'MySQLflexibleServers'
+ properties: {
+ serviceName: 'Microsoft.DBforMySQL/flexibleServers'
+ }
+ }
+ ]
+ }
+}
+
+resource server 'Microsoft.DBforMySQL/flexibleServers@2021-12-01-preview' = {
+ location: location
+ name: serverName
+ sku: {
+ name: skuName
+ tier: serverEdition
+ }
+ properties: {
+ version: serverVersion
+ administratorLogin: administratorLogin
+ administratorLoginPassword: administratorLoginPassword
+ availabilityZone: availabilityZone
+ highAvailability: {
+ mode: haEnabled
+ standbyAvailabilityZone: standbyAvailabilityZone
+ }
+ storage: {
+ storageSizeGB: storageSizeGB
+ iops: storageIops
+ autoGrow: storageAutogrow
+ }
+ network: {
+ delegatedSubnetResourceId: subnet.id
+ }
+ backup: {
+ backupRetentionDays: backupRetentionDays
+ geoRedundantBackup: geoRedundantBackup
+ }
+ }
+}
+
+resource database 'Microsoft.DBforMySQL/flexibleServers/databases@2021-12-01-preview' = {
+ parent: server
+ name: databaseName
+ properties: {
+ charset: 'utf8'
+ collation: 'utf8_general_ci'
+ }
+}
+
+```
+
+## Deploy the Bicep file
+
+Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group create --name exampleRG --location eastus
+az deployment group create --resource-group exampleRG --template-file main.bicep
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+New-AzResourceGroup -Name exampleRG -Location eastus
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile main.bicep
+```
+++
+When prompted, enter values for the parameters that don't have defaults, such as **resourceNamePrefix**, **administratorLogin**, and **administratorLoginPassword**. When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Follow these steps to verify if your server was created in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az resource list --resource-group exampleRG
+
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Get-AzResource -ResourceGroupName exampleRG
+```
++
+## Clean up resources
+
+To delete the resource group and the resources contained in the resource group:
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+For a step-by-step tutorial that guides you through the process of creating a Bicep file, see:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create Bicep files with Visual Studio Code](../../azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md)
+
+For a step-by-step tutorial to build an app with App Service using MySQL, see:
+
+> [!div class="nextstepaction"]
+> [Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
network-watcher Diagnose Communication Problem Between Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md
Title: 'Tutorial: Diagnose communication problem between networks using the Azure portal'
+ Title: 'Tutorial: Diagnose communication problem between virtual networks - Azure portal'
-description: In this tutorial, learn how to diagnose a communication problem between an Azure virtual network connected to an on-premises, or other virtual network, through an Azure virtual network gateway, using Network Watcher's VPN diagnostics capability.
+description: In this tutorial, you learn how to use Azure Network Watcher VPN troubleshoot to diagnose a communication problem between two Azure virtual networks connected by Azure VPN gateways.
Previously updated : 01/07/2021 Last updated : 02/23/2023 -
-# Customer intent: I need to determine why resources in a virtual network can't communicate with resources in a different network.
+
+# Customer intent: I need to determine why resources in a virtual network can't communicate with resources in a different virtual network over a VPN connection.
-# Tutorial: Diagnose a communication problem between networks using the Azure portal
+# Tutorial: Diagnose a communication problem between virtual networks using the Azure portal
-A virtual network gateway connects an Azure virtual network to an on-premises or other virtual network. In this tutorial, you learn how to:
+Azure VPN gateway is a type of virtual network gateway that you can use to send encrypted traffic between an Azure virtual network and your on-premises locations over the public internet. You can also use VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network. A VPN gateway allows you to create multiple connections to on-premises VPN devices and Azure VPN gateways. For more information about the number of connections that you can create with each VPN gateway SKU, see [Gateway SKUs](../../articles/vpn-gateway/vpn-gateway-about-vpngateways.md#gwsku). Whenever you need to troubleshoot an issue with a VPN gateway or one of its connections, you can use Azure Network Watcher VPN troubleshoot to check the gateway or its connections and find and resolve the problem in a few simple steps.
-> [!div class="checklist"]
-> * Diagnose a problem with a virtual network gateway with Network Watcher's VPN diagnostics capability
-> * Diagnose a problem with a gateway connection
-> * Resolve a problem with a gateway
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+This tutorial shows you how to connect two virtual networks via VPN gateways using VNet-to-VNet connections, and how to use the Network Watcher VPN troubleshoot capability to diagnose a connectivity issue that prevents the two virtual networks from communicating with each other. After you find and resolve the issue, you verify connectivity between the two virtual networks to confirm the problem is fixed.
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Create virtual networks
+> * Create virtual network gateways (VPN gateways)
+> * Create connections between VPN gateways
+> * Diagnose and troubleshoot a connectivity issue
+> * Resolve the problem
+> * Verify the problem is resolved
## Prerequisites
-To use VPN diagnostics, you must have an existing, running VPN gateway. If you don't have an existing VPN gateway to diagnose, you can deploy one using a [PowerShell script](../vpn-gateway/scripts/vpn-gateway-sample-site-to-site-powershell.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json). You can run the PowerShell script from:
-- **A local PowerShell installation**: The script requires the Azure PowerShell `Az` module. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell](/powershell/azure/install-Az-ps). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.-- **The Azure Cloud Shell**: The [Azure Cloud Shell](https://shell.azure.com/powershell) has the latest version of PowerShell installed and configured, and logs you into Azure.-
-The script takes approximately an hour to create a VPN gateway. The remaining steps assume that the gateway you're diagnosing is the one deployed by this script. If you diagnose your own existing gateway instead, your results will vary.
+- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com).
-## Enable Network Watcher
+## Create virtual networks
+
+In this section, you create two virtual networks that you connect later using virtual network gateways.
+
+### Create first virtual network
+
+1. In the search box at the top of the portal, enter *virtual network*. Select **Virtual networks** in the search results.
+
+ :::image type="content" source="./media/diagnose-communication-problem-between-networks/virtual-network-azure-portal.png" alt-text="Screenshot shows searching for virtual networks in the Azure portal.":::
+
+1. Select **+ Create**. In **Create virtual network**, enter or select the following values in the **Basics** tab:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **Create new**. </br> Enter *myResourceGroup* in **Name**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter *myVNet1*. |
+ | Region | Select **East US**. |
+
+1. Select the **IP Addresses** tab, or select the **Next: IP Addresses** button at the bottom of the page.
+
+1. Enter the following values in the **IP Addresses** tab:
+
+ | Setting | Value |
+ | | |
+ | IPv4 address space | Enter *10.1.0.0/16*. |
+ | Subnet name | Enter *mySubnet*. |
+ | Subnet address range | Enter *10.1.0.0/24*. |
+
+1. Select the **Review + create** tab or select the **Review + create** button at the bottom of the page.
+
+1. Review the settings, and then select **Create**.
+
+### Create second virtual network
+
+Repeat the previous steps to create the second virtual network using the following values:
+
+| Setting | Value |
+| | |
+| Name | **myVNet2** |
+| IPv4 address space | **10.2.0.0/16** |
+| Subnet name | **mySubnet** |
+| Subnet address range | **10.2.0.0/24** |
+
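+If you prefer the Azure CLI over the portal, here's a minimal sketch (not part of the original portal steps) that creates the same resource group and both virtual networks with the values used in this tutorial:
+
+```azurecli
+# Resource group used throughout this tutorial
+az group create --name myResourceGroup --location eastus
+
+# First virtual network and subnet
+az network vnet create --resource-group myResourceGroup --name myVNet1 --location eastus \
+    --address-prefixes 10.1.0.0/16 --subnet-name mySubnet --subnet-prefixes 10.1.0.0/24
+
+# Second virtual network and subnet
+az network vnet create --resource-group myResourceGroup --name myVNet2 --location eastus \
+    --address-prefixes 10.2.0.0/16 --subnet-name mySubnet --subnet-prefixes 10.2.0.0/24
+```
+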
+## Create a storage account and a container
+
+In this section, you create a storage account, then you create a container in it.
+
+If you have a storage account that you want to use, you can skip the following steps and go to [Create VPN gateways](#create-vpn-gateways).
+
+1. In the search box at the top of the portal, enter *storage account*. Select **Storage accounts** in the search results.
+
+1. Select **+ Create**. In **Create a storage account**, enter or select the following values in the **Basics** tab:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Storage account name | Enter a unique name. This tutorial uses **mynwstorageaccount**. |
+ | Region | Select **(US) East US**. |
+ | Performance | Select **Standard**. |
+ | Redundancy | Select **Locally-redundant storage (LRS)**. |
+
+1. Select the **Networking** tab, or select **Next: Advanced** and then the **Next: Networking** button at the bottom of the page.
+
+1. Under **Network connectivity**, select **Enable public access from all networks**.
+
+1. Select the **Review** tab or select the **Review** button.
+
+1. Review the settings, and then select **Create**.
+
+1. Select **Go to resource** to go to the **Overview** page of **mynwstorageaccount**.
+
+1. Under **Data storage**, select **Containers**.
+
+1. Select **+ Container**.
+
+1. In **New container**, enter or select the following values then select **Create**.
+
+ | Setting | Value |
+ | | |
+ | Name | Enter *vpn*. |
+ | Public access level | Select **Private (no anonymous access)**. |
+
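+If you prefer the Azure CLI, a minimal sketch for the same storage account and container follows. Storage account names must be globally unique, so adjust the name if it's taken, and note that creating a container with `--auth-mode login` assumes you have a data-plane role such as Storage Blob Data Contributor on the account:
+
+```azurecli
+# Storage account that VPN troubleshoot writes its logs to
+az storage account create --resource-group myResourceGroup --name mynwstorageaccount \
+    --location eastus --sku Standard_LRS
+
+# Private container for the troubleshooting logs
+az storage container create --account-name mynwstorageaccount --name vpn --auth-mode login
+```
+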
+## Create VPN gateways
+
+In this section, you create two VPN gateways that will be used to connect the two virtual networks you created previously.
+
+### Create first VPN gateway
-If you already have a network watcher enabled in the East US region, skip to [Diagnose a gateway](#diagnose-a-gateway).
+1. In the search box at the top of the portal, enter *virtual network gateway*. Select **Virtual network gateways** in the search results.
-1. In the portal, select **All services**. In the **Filter box**, enter *Network Watcher*. When **Network Watcher** appears in the results, select it.
-2. Select **Regions**, to expand it, and then select **...** to the right of **East US**, as shown in the following picture:
+1. Select **+ Create**. In **Create virtual network gateway**, enter or select the following values in the **Basics** tab:
- ![Enable Network Watcher](./media/diagnose-communication-problem-between-networks/enable-network-watcher.png)
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | **Instance details** | |
+ | Name | Enter *VNet1GW*. |
+ | Region | Select **East US**. |
+ | Gateway type | Select **VPN**. |
+ | VPN type | Select **Route-based**. |
+ | SKU | Select **VpnGw1**. |
+ | Generation | Select **Generation1**. |
+ | Virtual network | Select **myVNet1**. |
+ | Gateway subnet address range | Enter *10.1.1.0/27*. |
+ | **Public IP address** | |
+ | Public IP address | Select **Create new**. |
+ | Public IP address name | Enter *VNet1GW-ip*. |
+ | Enable active-active mode | Select **Disabled**. |
+ | Configure BGP | Select **Disabled**. |
-3. Select **Enable Network Watcher**.
+1. Select **Review + create**.
-## Diagnose a gateway
+1. Review the settings, and then select **Create**. A gateway can take 45 minutes or more to fully create and deploy.
-1. On the left side of the portal, select **All services**.
-2. Start typing *network watcher* in the **Filter** box. When **Network Watcher** appears in the search results, select it.
-3. Under **NETWORK DIAGNOSTIC TOOLS**, select **VPN Diagnostics**.
-4. Select **Storage account**, and then select the storage account you want to write diagnostic information to.
-5. From the list of **Storage accounts**, select the storage account you want to use. If you don't have an existing storage account, select **+ Storage account**, enter, or select the required information, and then select **Create**, to create one. If you created a VPN gateway using the script in [prerequisites](#prerequisites), you may want to create the storage account in the same resource group, *TestRG1*, as the gateway.
-6. From the list of **Containers**, select the container you want to use, and then select **Select**. If you don't have any containers, select **+ Container**, enter a name for the container, then select **OK**.
-7. Select a gateway, and then select **Start troubleshooting**. As shown in the following picture, the test is run against a gateway named **Vnet1GW**:
+### Create second VPN gateway
- ![VPN diagnostics](./media/diagnose-communication-problem-between-networks/vpn-diagnostics.png)
+To create the second VPN gateway, repeat the previous steps you used to create the first VPN gateway with the following values:
-8. While the test is running, **Running** appears in the **TROUBLESHOOTING STATUS** column where **Not started** is shown, in the previous picture. The test may take several minutes to run.
-9. View the status of a completed test. The following picture shows the status results of a completed diagnostic test:
+| Setting | Value |
+| | |
+| Name | **VNet2GW** |
+| Virtual network | **myVNet2** |
+| Gateway subnet address range | **10.2.1.0/27** |
+| Public IP address name | **VNet2GW-ip** |
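+
+If you prefer the Azure CLI, here's a minimal sketch for the first gateway; repeat it with the **VNet2GW**, **myVNet2**, **10.2.1.0/27**, and **VNet2GW-ip** values for the second gateway. This is a sketch rather than a full walkthrough: public IP SKU requirements can vary by gateway SKU and region, and deployment still takes 45 minutes or more.
+
+```azurecli
+# Gateway subnet and public IP for the first gateway
+az network vnet subnet create --resource-group myResourceGroup --vnet-name myVNet1 \
+    --name GatewaySubnet --address-prefixes 10.1.1.0/27
+az network public-ip create --resource-group myResourceGroup --name VNet1GW-ip --location eastus
+
+# First VPN gateway; --no-wait lets the long-running deployment continue in the background
+az network vnet-gateway create --resource-group myResourceGroup --name VNet1GW --location eastus \
+    --vnet myVNet1 --public-ip-address VNet1GW-ip --gateway-type Vpn --vpn-type RouteBased \
+    --sku VpnGw1 --no-wait
+```
+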
- ![Screenshot shows the status results of a diagnostic test, unhealthy in this example, including a summary and detail.](./media/diagnose-communication-problem-between-networks/status.png)
+## Create gateway connections
- You can see that the **TROUBLESHOOTING STATUS** is **Unhealthy**, as well as a **Summary** and **Detail** of the problem on the **Status** tab.
-10. When you select the **Action** tab, VPN diagnostics provides additional information. In the example, shown in the following picture, VPN diagnostics lets you know that you should check the health of each connection:
+After creating the **VNet1GW** and **VNet2GW** virtual network gateways, you can create connections between them to allow communication over a secure IPsec/IKE tunnel between the **myVNet1** and **myVNet2** virtual networks. To create the IPsec/IKE tunnel, you create two connections:
- ![Screenshot shows the Action tab, which gives you additional information.](./media/diagnose-communication-problem-between-networks/action.png)
+- From **myVNet1** to **myVNet2**
+- From **myVNet2** to **myVNet1**
-## Diagnose a gateway connection
+### Create first connection
-A gateway is connected to other networks via a gateway connection. Both the gateway and gateway connections must be healthy for successful communication between a virtual network and a connected network.
+1. Go to **VNet1GW** gateway.
-1. Complete step 7 of [Diagnose a gateway](#diagnose-a-gateway) again, this time, selecting a connection. In the following example, a connection named **VNet1toSite1** is tested:
+1. Under **Settings**, select **Connections**.
- ![Screenshot shows how to start troubleshooting for a selected connection.](./media/diagnose-communication-problem-between-networks/connection.png)
+1. Select **+ Add** to create a connection from **VNet1** to **VNet2**.
- The test runs for several minutes.
-2. After the test of the connection is complete, you receive results similar to the results shown in the following pictures on the **Status** and **Action** tabs:
+1. In **Add connection**, enter or select the following values:
- ![Connection status](./media/diagnose-communication-problem-between-networks/connection-status.png)
+ | Setting | Value |
+ | | |
+ | Name | Enter *to-VNet2*. |
+ | Connection type | Select **VNet-to-VNet**. |
+ | Second virtual network gateway | Select **VNet2GW**. |
+ | Shared key (PSK) | Enter *123*. |
- ![Connection action](./media/diagnose-communication-problem-between-networks/connection-action.png)
+1. Select **OK**.
- VPN diagnostics informs you what is wrong on the **Status** tab, and gives you several suggestions for what may be causing the problem on the **Action** tab.
+### Create second connection
- If the gateway you tested was the one deployed by the [script](../vpn-gateway/scripts/vpn-gateway-sample-site-to-site-powershell.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) in [Prerequisites](#prerequisites), then the problem on the **Status** tab, and the first two items on the **Actions** tab are exactly what the problem is. The script configures a placeholder IP address, 23.99.221.164, for the on-premises VPN gateway device.
+1. Go to **VNet2GW** gateway.
- To resolve the issue, you need to ensure that your on-premises VPN gateway is [configured properly](../vpn-gateway/vpn-gateway-about-vpn-devices.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json), and change the IP address configured by the script for the local network gateway, to the actual public address of your on-premises VPN gateway.
+1. Create the second connection by following the previous steps you used to create the first connection with the following values:
+
+ | Setting | Value |
+ | | |
+ | Name | **to-VNet1** |
+ | Second virtual network gateway | **VNet1GW** |
+ | Shared key (PSK) | **000** |
+
+ > [!NOTE]
+    > To successfully create an IPsec/IKE tunnel between two Azure VPN gateways, the connections between the gateways must use identical shared keys. In the previous steps, two different keys were intentionally used to create a problem with the gateway connections.
+
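+The same two connections, including the deliberate shared key mismatch, can also be sketched with the Azure CLI:
+
+```azurecli
+# Connection from VNet1GW to VNet2GW with shared key 123
+az network vpn-connection create --resource-group myResourceGroup --name to-VNet2 \
+    --vnet-gateway1 VNet1GW --vnet-gateway2 VNet2GW --shared-key 123
+
+# Connection from VNet2GW to VNet1GW with a mismatched key (000) to create the problem
+az network vpn-connection create --resource-group myResourceGroup --name to-VNet1 \
+    --vnet-gateway1 VNet2GW --vnet-gateway2 VNet1GW --shared-key 000
+```
+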
+## Diagnose the VPN problem
+
+In this section, you use Network Watcher VPN troubleshoot to check the two VPN gateways and their connections.
+
+1. Under **Settings** of **VNet2GW** gateway, select **Connections**.
+
+1. Select **Refresh** to see the connections and their current status, which is **Not connected** (because of mismatch between the shared keys).
+
+ :::image type="content" source="./media/diagnose-communication-problem-between-networks/second-gateway-connections-not-connected.png" alt-text="Screenshot shows the gateway connections in the Azure portal and their not connected status.":::
+
+1. Under **Help** of **VNet2GW** gateway, select **VPN troubleshoot**.
+
+1. Select **Select storage account** to choose the storage account and the container that you want to save the logs to.
+
+ :::image type="content" source="./media/diagnose-communication-problem-between-networks/second-gateway-vpn-troubleshoot-not-started.png" alt-text="Screenshot shows vpn troubleshoot in the Azure portal before troubleshooting started.":::
+
+1. From the list, select **VNet1GW** and **VNet2GW**, and then select **Start troubleshooting** to start checking the gateways.
+
+1. Once the check is complete, the troubleshooting status of both gateways changes to **Unhealthy**. Select a gateway to see more details on the **Status** tab.
+
+ :::image type="content" source="./media/diagnose-communication-problem-between-networks/gateway-unhealthy.png" alt-text="Screenshot shows the status of a gateway and results of VPN troubleshoot test in the Azure portal after troubleshooting completed.":::
+
+1. Because the VPN tunnels are disconnected, select the connections, and then select **Start troubleshooting** to start checking them.
+
+ > [!NOTE]
+    > You can troubleshoot gateways and their connections in one step. However, checking only the gateways takes less time, and based on the result, you can decide whether you also need to check the connections.
+
+1. Once the check is complete, the troubleshooting status of the connections changes to **Unhealthy**. Select a connection to see more details on the **Status** tab.
+
+ :::image type="content" source="./media/diagnose-communication-problem-between-networks/connection-unhealthy.png" alt-text="Screenshot shows the status of a connection and results of VPN troubleshoot test in the Azure portal after troubleshooting completed.":::
+
+ VPN troubleshoot checked the connections and found a mismatch in the shared keys.
+
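+If you prefer the command line, you can run the same checks with Network Watcher's troubleshooting command. This is a sketch that assumes the storage account and container created earlier in this tutorial:
+
+```azurecli
+# Check the gateway; logs are written to the storage container created earlier
+az network watcher troubleshooting start --resource VNet2GW --resource-type vnetGateway \
+    --resource-group myResourceGroup --storage-account mynwstorageaccount \
+    --storage-path https://mynwstorageaccount.blob.core.windows.net/vpn
+
+# Check a specific connection the same way
+az network watcher troubleshooting start --resource to-VNet1 --resource-type vpnConnection \
+    --resource-group myResourceGroup --storage-account mynwstorageaccount \
+    --storage-path https://mynwstorageaccount.blob.core.windows.net/vpn
+```
+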
+## Fix the problem and verify using VPN troubleshoot
+
+### Fix the problem
+
+Fix the problem by correcting the key on **to-VNet1** connection to match the key on **to-VNet2** connection.
+
+1. Go to **to-VNet1** connection.
+
+1. Under **Settings**, select **Shared key**.
+
+1. In **Shared key (PSK)**, enter *123* and then select **Save**.
+
+    :::image type="content" source="./media/diagnose-communication-problem-between-networks/correct-shared-key.png" alt-text="Screenshot shows correcting and saving the shared key of a VPN connection in the Azure portal.":::
+
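+If you prefer the Azure CLI, the key can be corrected with the shared-key command; a minimal sketch using the connection created earlier:
+
+```azurecli
+# Update the shared key on to-VNet1 so that it matches to-VNet2
+az network vpn-connection shared-key update --resource-group myResourceGroup \
+    --connection-name to-VNet1 --value 123
+```
+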
+### Check connection status
+
+1. Go to **VNet2GW** gateway (you can also check the connection status from the **VNet1GW** gateway).
+
+1. Under **Settings**, select **Connections**.
+
+ :::image type="content" source="./media/diagnose-communication-problem-between-networks/second-gateway-connections-connected.png" alt-text="Screenshot shows the gateway connections in the Azure portal and their connected status.":::
+
+ > [!NOTE]
+ > You may need to wait for a few minutes and then select **Refresh** to see the connections status as **Connected**.
+
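+You can also query the connection status from the Azure CLI:
+
+```azurecli
+# Returns "Connected" once the IPsec/IKE tunnel is re-established (this can take a few minutes)
+az network vpn-connection show --resource-group myResourceGroup --name to-VNet1 \
+    --query connectionStatus --output tsv
+```
+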
+### Check connection health with VPN troubleshoot
+
+1. Under **Help** of **VNet2GW**, select **VPN troubleshoot**.
+
+1. Select **Select storage account** to choose the storage account and the container that you want to save the logs to.
+
+1. Select **VNet1GW** and **VNet2GW**, and then select **Start troubleshooting** to start checking the gateways.
+
+ :::image type="content" source="./media/diagnose-communication-problem-between-networks/connection-healthy.png" alt-text="Screenshot shows the status of gateways and their connections in the Azure portal after correcting the shared key.":::
## Clean up resources
-If you created a VPN gateway using the script in the [prerequisites](#prerequisites) solely to complete this tutorial, and no longer need it, delete the resource group and all of the resources it contains:
+If you no longer need the gateways and other resources created in this tutorial, delete the resource group and all of the resources it contains:
-1. Enter *TestRG1* in the **Search** box at the top of the portal. When you see **TestRG1** in the search results, select it.
+1. Enter *myResourceGroup* in the **Search** box at the top of the portal. When you see **myResourceGroup** in the search results, select it.
2. Select **Delete resource group**.
-3. Enter *TestRG1* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
+3. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**.
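+
+Alternatively, you can delete the resource group from the Azure CLI:
+
+```azurecli
+# Deletes the resource group and everything in it without prompting
+az group delete --name myResourceGroup --yes --no-wait
+```
+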
## Next steps
-In this tutorial, you learned how to diagnose a problem with a virtual network gateway. You may want to log network communication to and from a VM so that you can review the log for anomalies. To learn how, advance to the next tutorial.
+In this tutorial, you learned how to diagnose a connectivity problem between two connected virtual networks via VPN gateways. For more information about connecting virtual networks using VPN gateways, see [VNet-to-VNet connections](../../articles/vpn-gateway/design.md#V2V).
+
+To learn how to log network communication to and from a virtual machine so that you can review the log for anomalies, advance to the next tutorial.
> [!div class="nextstepaction"] > [Log network traffic to and from a VM](network-watcher-nsg-flow-logging-portal.md)
network-watcher Network Watcher Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-overview.md
Previously updated : 02/15/2023 Last updated : 02/23/2023
The following list is the values returned with the troubleshoot API:
* **actionUri** - This value provides the URI to documentation on how to act. * **actionUriText** - This value is a short description of the action text.
-The following tables show the different fault types (ID under results from the preceding list) that are available and if the fault creates logs.
+The following tables show the different fault types (**id** under results from the preceding list) that are available and if the fault creates logs.
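+
+If you run the check from the Azure CLI instead of calling the API directly, you can read the same values back afterwards. This is a hedged sketch that assumes a connection named to-VNet1 in a resource group named myResourceGroup; the output shape can vary by CLI version:
+
+```azurecli
+# Retrieve the latest troubleshooting result and list the fault ids under results
+az network watcher troubleshooting show --resource to-VNet1 --resource-type vpnConnection \
+    --resource-group myResourceGroup --query "results[].id"
+```
+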
### Gateway
Elapsed Time 330 sec
## Next steps
-To learn how to diagnose a problem with a virtual network gateway or gateway connection, see [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md).
+To learn how to diagnose a problem with a virtual network gateway or gateway connection, see [Diagnose communication problems between virtual networks](diagnose-communication-problem-between-networks.md).
++
openshift Howto Service Principal Credential Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-service-principal-credential-rotation.md
To check the expiration date of service principal credentials, run the following:
```azurecli
# Service principal expiry in ISO 8601 UTC format
SP_ID=$(az aro show --name MyManagedCluster --resource-group MyResourceGroup \
  --query servicePrincipalProfile.clientId -o tsv)
-az ad app credential list --id $SP_ID --query "[].endDate" -o tsv
+az ad app credential list --id $SP_ID --query "[].endDateTime" -o tsv
```
If the service principal credentials are expired, update them by using one of the two credential rotation methods.
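
A hypothetical follow-up check (not part of the official rotation steps) that flags credentials expiring within the next 30 days; it assumes a bash shell with GNU `date`:

```bash
# Warn if the first credential expires within 30 days (GNU date syntax)
EXPIRY=$(az ad app credential list --id $SP_ID --query "[0].endDateTime" -o tsv)
if [ "$(date -d "$EXPIRY" +%s)" -lt "$(date -d '+30 days' +%s)" ]; then
  echo "Service principal credential expires soon: $EXPIRY"
fi
```
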
orbital Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview.md
Title: Why use Azure Orbital?
-description: Azure Orbital is a cloud-based ground station as a Service that allows you to streamline your operations by ingesting space data directly into Azure.
+ Title: Why use Azure Orbital Ground Station?
+description: Azure Orbital Ground Station is a cloud-based ground station as a service that allows you to streamline your operations by ingesting space data directly into Azure.
# Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
-# Why use Azure Orbital?
+# Why use Azure Orbital Ground Station?
-Azure Orbital is a fully managed cloud-based ground station as a service that allows you to streamline your operations by ingesting space data directly into Azure.
+Azure Orbital Ground Station is a fully managed cloud-based ground station as a service that allows you to streamline your operations by ingesting space data directly into Azure.
-With Azure Orbital, you can focus on your missions by off-loading the responsibility for deployment and maintenance of ground stations.
+With Azure Orbital Ground Station, you can focus on your missions by off-loading the responsibility for deployment and maintenance of ground stations.
-Azure Orbital uses MicrosoftΓÇÖs global infrastructure and low-latency global network along with an expansive partner ecosystem of ground station networks, cloud modems, and "Telemetry, Tracking, & Control" (TT&C) functions.
+Azure Orbital Ground Station uses Microsoft's global infrastructure and low-latency global network along with an expansive partner ecosystem of ground station networks, cloud modems, and "Telemetry, Tracking, & Control" (TT&C) functions.
:::image type="content" source="./media/orbital-all-overview.png" alt-text="Azure Orbital Overview":::
-Azure Orbital offers two main
-
-## Azure Orbital Earth Observation
+## Earth Observation with Azure Orbital Ground Station
Schedule contacts with satellites on a pay-as-you-go basis to ingest data from the satellite, monitor the satellite health and status, or transmit commands to the satellite. Incoming data is delivered to your private virtual network allowing it to be processed or stored in Azure. The fully digitized service allows you to use software modems from Kratos and Amergint to do the modulation / demodulation, and encoding / decoding functions to recover the data.
- For a full end-to-end solution to manage fleet operations and "Telemetry, Tracking, & Control" (TT&C) functions, seamlessly integrate your Azure Orbital operations with Kubos Major Tom. Lower your operational costs and maximize your capabilities by using Azure Space.
+ For a full end-to-end solution to manage fleet operations and "Telemetry, Tracking, & Control" (TT&C) functions, seamlessly integrate your Azure Orbital Ground Station operations with Kubos Major Tom. Lower your operational costs and maximize your capabilities by using Azure Space.
* Spacecraft contact self-service scheduling * Direct data ingestion into Azure
Azure Orbital offers two main
* Integrated cloud modems for X and S bands and Certified cloud modems available through the Azure Marketplace * Global reach through integrated third-party networks
-## Azure Orbital Global Communications
-
- Satellite operators who provide global communication capabilities to their customers can route their traffic through the Microsoft global network.
-
- They can offer private connection to their customer's virtual network, or offer other managed services to their customers by connecting them to the operator's virtual network.
-
- In addition, all internet traffic destined to Microsoft services (including Office365, Microsoft Teams, Xbox, Azure public IPs) can be routed directly within region and without traversing an ISP. It can reduce the amount of traffic going towards the internet and provide lower latency access to these services.
-
- Operators can colocate new ground stations at Azure data centers or at Azure Edges, or inter-connect existing ground stations with the global Azure backbone.
-
- Azure Orbital delivers the traffic from an Orbital ground station to your virtual network, enabling you to bundle and provide managed security and connectivity services to your end-customers.
-
- * Routing over global Microsoft network
- * Internet breakout at the edge
- * Traffic delivery to providerΓÇÖs virtual network
- * Service chain other Azure services to provide managed services
- * Private connection to customer's virtual network
- ## Next steps - [Register Spacecraft](register-spacecraft.md)
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
Sign in to the [Azure portal](https://aka.ms/orbital/portal).
> TLE stands for Two-Line Element. > > Spacecraft resources can be created in any Azure region with a Microsoft ground station and can schedule contacts on any ground station. Current eligible regions are West US 2, Sweden Central, and Southeast Asia.
- >
+ >
> Be sure to update this TLE value before you schedule a contact. A TLE that's more than two weeks old might result in an unsuccessful downlink. :::image type="content" source="media/orbital-eos-register-bird.png" alt-text="Register Spacecraft Resource Page" lightbox="media/orbital-eos-register-bird.png":::
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
Title: "Migrate from Single Server to Flexible Server by using the Azure portal"-
-description: Learn about migrating your Single Server databases to Azure database for PostgreSQL Flexible Server by using the Azure portal.
+ Title: "Tutorial: Migrate Azure Database for PostgreSQL - Single Server to Flexible Server using the Azure portal"
+
+description: "Learn about migrating your Single Server databases to Azure Database for PostgreSQL Flexible Server by using the Azure portal."
- Previously updated : 05/09/2022+ Last updated : 02/02/2023+
-# Migrate from Single Server to Flexible Server by using the Azure portal
+# Tutorial: Migrate Azure Database for PostgreSQL - Single Server to Flexible Server by using the Azure portal
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]
-This article shows you how to use the migration tool in the Azure portal to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
+You can migrate an instance of Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Flexible Server by using the Azure portal. In this tutorial, you migrate a sample database from an Azure Database for PostgreSQL single server to a PostgreSQL flexible server by using the Azure portal.
>[!NOTE] > The migration tool is in public preview.
-## Getting started
+In this tutorial, you learn to:
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
+> [!div class="checklist"]
+>
+> * Configure your Azure Database for PostgreSQL Flexible Server
+> * Configure the migration task
+> * Monitor the migration
+> * Cancel the migration
+> * Migration best practices
-2. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#migration-prerequisites). It is very important to complete the prerequisite steps before you initiate a migration using this tool.
+## Configure your Azure Database for PostgreSQL Flexible Server
+
+> [!IMPORTANT]
+> To provide the best migration experience, migrating to a burstable SKU of Flexible Server isn't supported. Use a General Purpose or a Memory Optimized SKU (4 vCores or higher) as your target Flexible Server to perform the migration. Once the migration is complete, you can downscale to a burstable instance if necessary.
+
+1. Create the target flexible server. For guided steps, refer to the quickstart [Create an Azure Database for PostgreSQL flexible server using the Azure portal](../flexible-server/quickstart-create-server-portal.md).
+
+2. Allowlist all required extensions as shown in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#allow-list-required-extensions). It is important to allowlist the extensions before you initiate a migration using this tool.
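+
+If you prefer the Azure CLI for step 1, a minimal sketch that creates a General Purpose, 4 vCore target is shown below. The resource group and server names are placeholders; replace them with your own values:
+
+```azurecli
+# Placeholder names; creates a General Purpose 4 vCore PostgreSQL flexible server as the migration target
+az postgres flexible-server create --resource-group myTargetResourceGroup \
+    --name mytargetflexserver --location eastus \
+    --tier GeneralPurpose --sku-name Standard_D4s_v3 --version 14
+```
+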
## Configure the migration task
The migration tool comes with a simple, wizard-based experience on the Azure por
1. Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
-2. Go to your Azure Database for PostgreSQL Flexible Server target. If you haven't created a Flexible Server target, [create one now](../flexible-server/quickstart-create-server-portal.md).
+2. Go to your Azure Database for PostgreSQL Flexible Server target.
3. In the **Overview** tab of the Flexible Server, on the left menu, scroll down to **Migration (preview)** and select it.
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png" alt-text="Screenshot of the details belonging to Migration tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/azure-portal-overview-page.png" alt-text="Screenshot of the Overview page." lightbox="./media/concepts-single-to-flexible/azure-portal-overview-page.png":::
4. Select the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you're using the migration tool, an empty grid appears with a prompt to begin your first migration. :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of the Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
- If you've already created migrations to your Flexible Server target, the grid is populated with information about migrations that were attempted to this target from the Single Server(s).
+ If you've already created migrations to your Flexible Server target, the grid contains information about migrations that were attempted to this target from the Single Server(s).
-5. Select the **Migrate from Single Server** button. You'll go through a wizard-based series of tabs to create a migration to this Flexible Server target from any Single Server.
+5. Select the **Migrate from Single Server** button. You go through a wizard-based series of tabs to create a migration into this Flexible Server target from any source Single Server.
Alternatively, you can initiate the migration process from the Azure Database for PostgreSQL Single Server. 1. Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
-2. Upon selecting the Single Server, you can observe the **Migrate your PostgreSQL single server to a fully managed PostgreSQL flexible server. Flexible server provides more granular control, flexibility and better cost optimization. Migrate now.** banner in the Overview tab. Click on **Migrate now** to get started.
+2. Upon selecting the Single Server, you can observe a migration-related banner in the Overview tab. Select **Migrate now** to get started.
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-initiate-migrate-from-single-server.png" alt-text="Screenshot to initiate migration from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-initiate-migrate-from-single-server.png":::
-3. You will be taken to a page with two options. If you have already created a Flexible Server and want to use that as the target, choose **Select existing**, and select the corresponding Subscription, Resource group and Server name details. Once the selections have been made, click on **Go to Migration wizard** and skip to the instructions under the **Setup tab** section in this page.
+3. You're taken to a page with two options. If you've already created a Flexible Server and want to use that as the target, choose **Select existing**, and select the corresponding Subscription, Resource group and Server name details. Once the selections are made, select **Go to Migration wizard** and skip to the instructions under the **Setup tab** section in this page.
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-choose-between-flexible-server.png" alt-text="Screenshot to choose existing flexible server option." lightbox="./media/concepts-single-to-flexible/single-to-flex-choose-between-flexible-server.png":::
-4. Should you choose to Create a new Flexible Server, select **Create new** and click on **Go to Create Wizard**. This action will take you through the Flexible Server creation process and deploy the Flexible Server.
+4. Should you choose to Create a new Flexible Server, select **Create new** and select **Go to Create Wizard**. This action takes you through the Flexible Server creation process and deploys the Flexible Server.
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-create-new.png" alt-text="Screenshot to choose new flexible server option." lightbox="./media/concepts-single-to-flexible/single-to-flex-create-new.png":::
-5. Once the Flexible Server is deployed, select to open the Flexible Server menu. On the left panel, scroll down to **Migration (preview)** and select it.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png" alt-text="Screenshot of the details related to Migration tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png":::
-
-6. Select the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you're using the migration tool, an empty grid appears with a prompt to begin your first migration.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of the Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
-
- If you've already created migrations to your Flexible Server target, the grid is populated with information about migrations that were attempted to this target from Single Server sources.
-
-7. Select the **Migrate from Single Server** button. You'll go through a wizard-based series of tabs to create a migration to this Flexible Server from any Single Server.
+After deploying the Flexible Server, follow steps 3 to 5 under [Configure the migration task](#configure-the-migration-task).
### Setup tab
-The first tab is **Setup**. It has basic information about the migration and the list of prerequisites for getting started with migrations. These prerequisites are the same as the ones listed in the [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#migration-prerequisites) article.
+The first tab is **Setup**. Just in case you missed it, allowlist all required extensions as shown in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#allow-list-required-extensions). It is important to allowlist the extensions before you initiate a migration using this tool.
-**Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and does not accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name.
-
-**Migration resource group** is where the migration tool will create all the migration-related components. By default, the resource group of the Flexible Server target and all the components will be cleaned up automatically after the migration finishes. If you want to create a temporary resource group for the migration, create it and then select it from the dropdown list.
-
-For **Azure Active Directory App**, click the **select** option and choose the Azure Active Directory app that you created for the prerequisite step. Then, in the **Azure Active Directory Client Secret** box, paste the client secret that was generated for that app.
-
+**Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and doesn't accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name.
Select the **Next** button. ### Source tab
-The **Source** tab prompts you to give details related to the Single Server that databases need to be migrated from.
+The **Source** tab prompts you to give details related to the Single Server that is the source of the databases.
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-source.png" alt-text="Screenshot of source database server details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-source.png":::
-After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Servers under that resource group across regions. Select the source that you want to migrate databases from. We recommend that you migrate databases from a Single Server to a target Flexible Server in the same region.
+After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Servers under that resource group across regions. Select the source that you want to migrate databases from. You can migrate databases from a Single Server only to a target Flexible Server in the same region; cross-region migrations aren't supported.
-After you choose the Single Server source, the **Location**, **PostgreSQL version**, and **Server admin login name** boxes are automatically populated. The server admin login name is the admin username that was used to create the Single Server. In the **Password** box, enter the password for that admin login name. It will enable the migration tool to log in to the Single Server to initiate the dump and migration.
+After you choose the Single Server source, the **Location**, **PostgreSQL version**, and **Server admin login name** boxes are populated automatically. The server admin login name is the admin username used to create the Single Server. In the **Password** box, enter the password for that admin user. The migration tool performs the migration of single server databases as the admin user.
-Under **Choose databases to migrate**, there's a list of user databases inside the Single Server. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations by using the same experience between the source and target servers.
+Under **Choose databases to migrate**, there's a list of user databases inside the Single Server. You can select and migrate up to eight databases in a single migration attempt. If there are more than eight user databases, the migration process is repeated between the source and target servers for the next set of databases.
-The final property on the **Source** tab is **Migration mode**. The migration tool offers online and offline modes of migration. The [concepts article](./concepts-single-to-flexible.md) talks more about the migration modes and their differences. After you choose the migration mode, the restrictions that are associated with that mode appear.
+The final property on the **Source** tab is **Migration mode**. The migration tool offers the offline mode of migration by default.
-When you're finished filling out all the fields, select the **Next** button.
+After filling out all the fields, select the **Next** button.
### Target tab
The **Target** tab displays metadata for the Flexible Server target, like subscr
:::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-target.png" alt-text="Screenshot of target database server details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-target.png":::
-For **Server admin login name**, the tab displays the admin username that was used during the creation of the Flexible Server target. Enter the corresponding password for the admin user. This password is required for the migration tool to log in to the Flexible Server target and perform restore operations.
+For **Server admin login name**, the tab displays the admin username used during the creation of the Flexible Server target. Enter the corresponding password for the admin user.
For **Authorize DB overwrite**: -- If you select **Yes**, you give this migration service permission to overwrite existing data if a database that's being migrated to Flexible Server is already present.-- If you select **No**, the migration service goes into a waiting state and asks you for permission to either overwrite the data or cancel the migration.
+- If you select **Yes**, you give this migration tool permission to overwrite existing data if the database is already present.
+- If you select **No**, the migration tool does not overwrite the data for the database that is already present.
Select the **Next** button.
-### Networking tab
-
-The content on the **Networking** tab depends on the networking topology of your source and target servers. If both source and target servers are in public access, the following message appears.
--
-In this case, you don't need to do anything and can select the **Next** button.
-
-If either the source and/or target server is configured in private access, the content of the **Networking** tab is different.
--
-Let's try to understand what private access means for Single Server and Flexible Server:
--- **Single Server Private Access**: **Deny public network access** is set to **Yes**, and a private endpoint is configured.-- **Flexible Server Private Access**: A Flexible Server target is deployed inside a virtual network.-
-For private access, all the fields are automatically populated with subnet details. This is the subnet in which the migration tool will deploy Azure Database Migration Service to move data between the source and the target.
-
-You can use the suggested subnet or choose a different one. But make sure that the selected subnet can connect to both the source and target servers.
-
-After you choose a subnet, select the **Next** button.
- ### Review + create tab >[!NOTE]
-> Gentle reminder to complete the [prerequisites](./concepts-single-to-flexible.md#migration-prerequisites) before you click **Create** in case it is not yet complete.
+> Gentle reminder to allowlist the [extensions](./concepts-single-to-flexible.md#allow-list-required-extensions) before you select **Create** in case it is not yet complete.
The **Review + create** tab summarizes all the details for creating the migration. Review the details and select the **Create** button to start the migration. :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-review.png" alt-text="Screenshot of details to review for the migration." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-review.png":::
-## Monitor migrations
+## Monitor the migration
-After you select the **Create** button, a notification appears in a few seconds to say that the migration was successfully created.
+After you select the **Create** button, a notification appears in a few seconds to say that the migration creation is successful. You are redirected automatically to the **Migration (Preview)** page of Flexible Server. That page has a new entry for the recently created migration.
-You should automatically be redirected to the **Migration (Preview)** page of Flexible Server. That page has a new entry for the recently created migration.
--
-The grid that displays the migrations has these columns: **Name**, **Status**, **Source DB server**, **Resource group**, **Region**, **Version**, **Databases**, and **Start time**. By default, the grid shows the list of migrations in descending order of migration start times. In other words, recent migrations appear on top of the grid.
+The grid that displays the migrations has these columns: **Name**, **Status**, **Source DB server**, **Resource group**, **Region**, **Databases**, and **Start time**. The migrations are in the descending order of migration start time with the most recent migration on top.
You can use the refresh button to refresh the status of the migrations.- You can also select the migration name in the grid to see the details of that migration. -
-As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate. The reason is that it takes time to create and deploy Database Migration Service, add the IP address on the firewall list of source and target servers, and perform maintenance tasks.
-After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place. The time that the **Migrating Data** substate takes to finish depends on the size of databases that you're migrating.
+As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes 2-3 minutes for the migration workflow to set up the migration infrastructure and network connections.
-When you select each of the databases that are being migrated, a fan-out pane appears. It has all the migration details, such as table count, incremental inserts, deletes, and pending bytes.
+After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** when the Cloning/Copying of the databases takes place. The time for migration to complete depends on the size and shape of databases that you are migrating. If the data is mostly evenly distributed across all the tables, the migration is quick. Skewed table sizes take a relatively longer time.
-For offline mode, the migration moves to the **Succeeded** state as soon as the **Migrating Data** state finishes successfully. If there's an issue at the **Migrating Data** state, the migration moves into a **Failed** state.
+When you select each of the databases in migration, a fan-out pane appears. It has all the table count - copied, queued, copying and errors apart from the database migration status.
-For online mode, the migration moves to the **WaitingForUserAction** state and the **WaitingForCutOver** substate after the **Migrating Data** substate finishes successfully.
+The migration moves to the **Succeeded** state as soon as the **Migrating Data** state finishes successfully. If there's an issue at the **Migrating Data** state, the migration moves into a **Failed** state.
-Select the migration name to open the migration details page. There, you should see the substate of **WaitingForCutover**.
+Once the migration moves to the **Succeeded** state, migration of schema and data from your Single Server to your Flexible Server target is complete. You can use the refresh button on the page to confirm the same.
-At this stage, the ongoing writes at your Single Server are replicated to the Flexible Server via the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source.
-You can monitor the replication lag by selecting each database that's being migrated. That opens a fan-out pane with metrics. The value of the **Pending Bytes** metric should be nearing zero over time. After it reaches a few megabytes for all the databases, stop any further writes to your Single Server and wait until the metric reaches 0. Then, validate the data and schemas on your Flexible Server target to make sure that they match exactly with the source server.
-
-After you complete the preceding steps, select the **Cutover** button. The following message appears.
--
-Select the **Yes** button to start cutover.
-
-A few seconds after you start cutover, the following notification appears.
--
-When the cutover is complete, the migration moves to the **Succeeded** state. Migration of schema and data from your Single Server to your Flexible Server target is now complete. You can use the refresh button on the page to check if the cutover was successful.
-
-After you complete the preceding steps, you can change your application code to point database connection strings to Flexible Server. You can then start using the target as the primary database server.
+After the migration has moved to the **Succeeded** state, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#post-migration).
Possible migration states include: -- **InProgress**: The migration infrastructure is being set up, or the actual data migration is in progress.-- **Canceled**: The migration has been canceled or deleted.
+- **InProgress**: The migration infrastructure setup is underway, or the actual data migration is in progress.
+- **Canceled**: The migration is canceled or deleted.
- **Failed**: The migration has failed. - **Succeeded**: The migration has succeeded and is complete.-- **WaitingForUserAction**: The migration is waiting for a user action. Possible migration substates include: -- **PerformingPreRequisiteSteps**: Infrastructure is being set up and is being prepped for data migration.-- **MigratingData**: Data is being migrated.-- **CompletingMigration**: Migration cutover is in progress.-- **WaitingForLogicalReplicationSetupRequestOnSourceDB**: Waiting for logical replication enablement.-- **WaitingForCutoverTrigger**: Migration is ready for cutover.-- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite. Data is present in the target server that you're migrating into.-- **Completed**: Cutover was successful, and migration is complete.
+- **PerformingPreRequisiteSteps**: Infrastructure set up is underway for data migration.
+- **MigratingData**: Data migration is in progress.
+- **CompletingMigration**: Migration is in final stages of completion.
+- **Completed**: Migration has successfully completed.
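+
+If the preview `az postgres flexible-server migration` commands are available in your CLI version, you can also check migration status from the command line. This is a hedged sketch with placeholder names; the exact parameters may differ across preview releases:
+
+```azurecli
+# Placeholder names; shows the state and substate of a migration on the target flexible server
+az postgres flexible-server migration show --resource-group myTargetResourceGroup \
+    --name mytargetflexserver --migration-name mymigration
+```
+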
-## Cancel migrations
+## Cancel the migration
-You have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in the **InProgress** or **WaitingForUserAction** state. You can't cancel a migration that's in the **Succeeded** or **Failed** state.
+You can cancel any ongoing migration. To be canceled, a migration must be in the **InProgress** state. You can't cancel a migration that's in the **Succeeded** or **Failed** state.
You can choose multiple ongoing migrations at once and cancel them.
+Canceling a migration stops further migration activity on your target server. It doesn't drop or roll back any changes on your target server from the migration attempts. Be sure to drop the databases on your target server that were involved in a canceled migration.
-
-Canceling a migration stops further migration activity on your target server. It doesn't drop or roll back any changes on your target server from the migration attempts. Be sure to drop the databases involved in a canceled migration on your target server.
-
-## Next steps
+## Migration best practices
-Follow the [post-migration steps](./concepts-single-to-flexible.md) for a successful end-to-end migration.
+For a successful end-to-end migration, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#post-migration). After you complete the preceding steps, you can change your application code to point database connection strings to Flexible Server. You can then start using the target as the primary database server.
postgresql How To Set Up Azure Ad App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-set-up-azure-ad-app-portal.md
- Title: "Set up an Azure AD app to use with Single Server to Flexible Server migration"-
-description: Learn about setting up an Azure AD app to be used with the feature that migrates from Single Server to Flexible Server.
---- Previously updated : 05/09/2022--
-# Set up an Azure AD app to use with migration from Single Server to Flexible Server
--
-This article shows you how to set up an [Azure Active Directory (Azure AD) app](../../active-directory/develop/howto-create-service-principal-portal.md) to use with a migration from Azure Database for PostgreSQL Single Server to Flexible Server.
-
-An Azure AD app helps with role-based access control (RBAC). The migration infrastructure requires access to both the source and target servers, and it's restricted by the roles assigned to the Azure AD app. After you create the Azure AD app, you can use it to manage multiple migrations.
-
-## Create an Azure AD app
-
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
-2. In the Azure portal, enter **Azure Active Directory** in the search box.
-3. On the page for Azure Active Directory, under **Manage** on the left, select **App registrations**.
-4. Select **New registration**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png" alt-text="Screenshot that shows selections for creating a new registration for an Azure Active Directory app." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png":::
-
-5. Give the app registration a name, choose an option that suits your needs for account types, and then select **Register**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png" alt-text="Screenshot that shows selections for naming and registering an Azure Active Directory app." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png":::
-
-6. After the app is created, copy the client ID and tenant ID and store them. You'll need them for later steps in the migration. Then, select **Add a certificate or secret**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png" alt-text="Screenshot that shows essential information about an Azure Active Directory app, along with the button for adding a certificate or secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png":::
-
-7. For **Certificates & Secrets**, on the **Client secrets** tab, select **New client secret**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png" alt-text="Screenshot that shows the button for creating a new client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png":::
-
-8. On the fan-out pane, add a description, and then use the drop-down list to select the life span of your Azure AD app.
-
- After all the migrations are complete, you can delete the Azure AD app that you created for RBAC. The default option is **6 months**. If you don't need the Azure AD app for six months, select **3 months**. Then select **Add**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png" alt-text="Screenshot that shows adding a description and selecting a life span for a client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png":::
-
-9. In the **Value** column, copy the Azure AD app secret. You can copy the secret only during creation. If you miss this step, you'll need to delete the secret and create another one for future tries.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png" alt-text="Screenshot that displays copying of a client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png":::
-
-## Add contributor privileges to an Azure resource
-
-After you create the Azure AD app, you need to add contributor privileges for it to the following resources.
-
-| Resource | Type | Description |
-| - | - | - |
-| Single Server | Required | Single Server source that you're migrating from. |
-| Flexible Server | Required | Flexible Server target that you're migrating into. |
-| Azure resource group | Required | Resource group for the migration. By default, this is the resource group for the Flexible Server target. If you're using a temporary resource group to create the migration infrastructure, the Azure AD app will require contributor privileges to this resource group. |
-| Virtual network | Required (if used) | If the source or the target has private access, the Azure AD app will require contributor privileges to the corresponding virtual network. If you're using public access, you can skip this step. |
-
-The following steps add contributor privileges to a Flexible Server target. Repeat the steps for the Single Server source, resource group, and virtual network (if used).
-
-1. In the Azure portal, select the Flexible Server target. Then select **Access Control (IAM)** on the upper left.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png" alt-text="Screenshot of the Access Control I A M page." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png":::
-
-2. Select **Add** > **Add role assignment**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png" alt-text="Screenshot that shows selections for adding a role assignment." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png":::
-
- > [!NOTE]
- > The **Add role assignment** capability is enabled only for users in the subscription who have a role type of **Owners**. Users who have other roles don't have permission to add role assignments.
-
-3. On the **Role** tab, select **Contributor** > **Next**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png" alt-text="Screenshot of the selections for choosing the contributor role." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png":::
-
-4. On the **Members** tab, keep the default option of **User, group, or service principal** for **Assign access to**. Click **Select Members**, search for your Azure AD app, and then click **Select**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png" alt-text="Screenshot of the Members tab to be added as Contributor." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png":::
-
-
-## Next steps
--- [Single Server to Flexible Server migration concepts](./concepts-single-to-flexible.md)-- [Migrate from Single Server to Flexible Server by using the Azure portal](./how-to-migrate-single-to-flexible-portal.md)-- [Migrate from Single Server to Flexible server by using the Azure CLI](./how-to-migrate-single-to-flexible-cli.md)
purview Concept Best Practices Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-scanning.md
Previously updated : 09/14/2022 Last updated : 02/23/2023
After you register your source in the relevant [collection](./how-to-create-and-
- **Scan type and schedule** - The scan process can be configured to run full or incremental scans. - Run the scans during non-business or off-peak hours to avoid any processing overload on the source.
+ - The **start recurrence at** value must be at least 1 minute earlier than the **schedule scan time**; otherwise, the scan is triggered in the next recurrence.
- Initial scan is a full scan, and every subsequent scan is incremental. Subsequent scans can be scheduled as periodic incremental scans. - The frequency of scans should align with the change management schedule of the data source or business requirements. For example: - If the source structure could potentially change weekly, the scan frequency should be in sync. Changes include new assets or fields within an asset that are added, modified, or deleted.
purview Scan Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/scan-data-sources.md
Previously updated : 01/25/2023 Last updated : 02/23/2023 # Scan data sources in Microsoft Purview
In the steps below we'll be using [Azure Blob Storage](register-scan-azure-blob-
:::image type="content" source="media/scan-data-sources/register-blob-scan-rule-set.png" alt-text="Screenshot of the select a scan rule set page with the default set selected."::: 1. Choose your scan trigger. You can set up a schedule (monthly or weekly) or run the scan once.
+ >[!NOTE]
+ > The **start recurrence at** value must be at least 1 minute earlier than the **schedule scan time**; otherwise, the scan is triggered in the next recurrence.
:::image type="content" source="media/scan-data-sources/register-blob-scan-trigger.png" alt-text="Screenshot of the set a scan trigger page showing a recurring monthly schedule.":::
resource-mover Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/overview.md
Title: What is Azure Resource Mover?
description: Learn about Azure Resource Mover -+ Last updated 12/23/2022
resource-mover Tutorial Move Region Encrypted Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-encrypted-virtual-machines.md
Title: Move encrypted Azure VMs across regions by using Azure Resource Mover
description: Learn how to move encrypted Azure VMs to another region by using Azure Resource Mover. -+ Last updated 12/21/2022
resource-mover Tutorial Move Region Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-virtual-machines.md
Title: Move Azure VMs across regions with Azure Resource Mover
description: Learn how to move Azure VMs to another region with Azure Resource Mover -+ Last updated 12/21/2022
route-server Anycast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/anycast.md
Title: Propagating anycast routes to on-premises+ description: Learn about advertising the same route from different regions with Azure Route Server. Previously updated : 02/03/2022 Last updated : 02/23/2023 -+ # Anycast routing with Azure Route Server
-Although spreading an application across Availability Zones in a single Azure region will result in a higher availability, often times applications need to be deployed in multiple regions, either to achieve a higher resiliency, a better performance for users across the globe, or better business continuity. There are different approaches that can be taken to direct users to one of the locations where a multi-region application is deployed to: DNS-based approaches such as [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) or routing-based services like [Azure Front Door](../frontdoor/front-door-overview.md) or the [Azure Cross-Regional Load Balancer](../load-balancer/cross-region-overview.md).
+You can deploy your application across [Availability Zones](../reliability/availability-zones-overview.md) in a single Azure region to achieve higher availability, but sometimes you may need to deploy your applications in multiple regions, either to achieve higher resiliency, better performance for users across the globe, or better business continuity. There are different approaches that can be taken to direct users to one of the locations where a multi-region application is deployed: DNS-based approaches such as [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md), routing-based services like [Azure Front Door](../frontdoor/front-door-overview.md), or the [Azure cross-region Load Balancer](../load-balancer/cross-region-overview.md).
-The aforementioned services are recommended for getting users to the best application location over the public Internet using public IP addressing, but they don't support private networks and IP addresses. This article will explore the usage of a route-based approach (IP anycast) to provide multi-regional, private-networked application deployments.
+The previous Azure services are recommended for getting users to the best application location over the public internet using public IP addressing, but they don't support private networks and IP addresses. This article explores the usage of a route-based approach (IP anycast) to provide multi-regional, private-networked application deployments.
-IP anycast essentially consists of advertising exactly the same IP address from more than one location, so that packets from the application users are routed to the closest (in terms of routing) region. Providing multi-region reachability over anycast offers some advantages over DNS-based approaches, such as not having to rely on clients not caching their DNS answers, and not requiring to modify the DNS design for the application.
+IP anycast essentially consists of advertising exactly the same IP address from more than one location, so that packets from the application users are routed to the closest region (in terms of routing). Providing multi-region reachability over anycast offers some advantages over DNS-based approaches, such as not having to rely on clients not caching their DNS answers, and not requiring changes to the DNS design for the application.
## Topology
-In the design covered in this scenario, the same IP address will be advertised from VNets in different Azure regions, where NVAs will advertise the application's IP address through Azure Route Server. The following diagram depicts two simple hub and spoke topologies, each in a different Azure region. A Network Virtual Appliance (NVA) in each region advertises the same route (`a.b.c.d/32` in this example, it could be any prefix that ideally does not overlap with the Azure and on-premises networks) to its local Azure Route Server. The routes will be further propagated to the on-premises network through ExpressRoute. When application users want to access the application from on-premises, the DNS infrastructure (not covered by this document) will resolve the DNS name of the application to the anycast IP address (`a.b.c.d` in the diagram), which the on-premises network devices will route to one of the two regions.
+In the design of this scenario, the same IP address is advertised from virtual networks in different Azure regions, where network virtual appliances (NVAs) advertise the application's IP address through Azure Route Server. The following diagram depicts two simple hub and spoke topologies, each in a different Azure region. An NVA in each region advertises the same route (`a.b.c.d/32` in this example) to its local Azure Route Server (the route prefix must not overlap with Azure and on-premises networks). The routes are further propagated to the on-premises network through ExpressRoute. When application users want to access the application from on-premises, the DNS infrastructure (not covered by this document) resolves the DNS name of the application to the anycast IP address (`a.b.c.d`), which the on-premises network devices route to one of the two regions.
-The decision of which of the available regions is selected is entirely based on routing attributes. If the routes from both regions are identical, the on-premises network will typically use Equal Cost MultiPathing (ECMP) to send each application flow to each region. It is possible as well to modify the advertisements generated by each NVA in Azure to make one of the regions preferred, for example with BGP AS Path prepending, establishing a deterministic path from on-premises to the azure workload.
+The decision of which of the available regions is selected is entirely based on routing attributes. If the routes from both regions are identical, the on-premises network typically uses equal-cost multi-path (ECMP) routing to send each application flow to each region. It's possible as well to modify the advertisements generated by each NVA in Azure to make one of the regions preferred. For example, using BGP AS Path prepending to establish a deterministic path from on-premises to the Azure workload.
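As a rough sketch of how each regional NVA attaches to its local route server, the following Azure CLI commands set up the BGP peering over which the NVA advertises the anycast prefix. All names, the peer IP address, and the ASN are placeholders.

```azurecli
# Peer the region's NVA with the local Azure Route Server (repeat once per region).
# The NVA then advertises the anycast prefix (a.b.c.d/32) over this BGP session.
az network routeserver peering create \
  --resource-group "rg-region1-hub" \
  --routeserver "routeserver-region1" \
  --name "nva-region1" \
  --peer-ip "10.1.0.4" \
  --peer-asn 65001
```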
> [!IMPORTANT] > The NVAs advertising the routes should include some health check mechanism to stop advertising the route when the application is not available in their respective regions, to avoid blackholing traffic. ## Return traffic
-When the application traffic from the on-premises client arrives to one of the NVAs in Azure, the NVA will either reverse-proxy the connection or perform Destination Network Address Translation (DNAT). It will then send the packets to the actual application, which will typically reside in a spoke VNet peered to the hub VNet where the NVA is deployed. Traffic back from the application needs to go back through the NVA, which would happen naturally if the NVA is reverse-proxying the connection (or performs Source NAT additionally to Destination NAT).
+When the application traffic from the on-premises client arrives at one of the NVAs in Azure, the NVA will either reverse-proxy the connection or perform Destination Network Address Translation (DNAT). Then, it sends the packets to the actual application, which typically resides in a spoke virtual network peered to the hub virtual network where the NVA is deployed. Traffic back from the application goes back through the NVA, which would happen naturally if the NVA is reverse-proxying the connection (or performs Source NAT in addition to Destination NAT).
-Otherwise, traffic arriving to the application will still be sourced from the original on-premises client's IP address. In this case, packets can be routed back to the NVA with User-Defined Routes. Special care needs to be taken if there are more than one NVA instance in each region, since traffic could be asymmetric (the inbound and outbound traffic going through different NVA instances). Asymmetric traffic is typically not an issue if NVAs are stateless, but it will result in errors if the NVAs instead need to keep track of connection states, such as firewalls.
+Otherwise, traffic arriving at the application will still be sourced from the original on-premises client's IP address. In this case, packets can be routed back to the NVA with user-defined routes (UDRs). Special care must be taken if there's more than one NVA instance in each region, since traffic could be asymmetric (the inbound and outbound traffic going through different NVA instances). Asymmetric traffic is typically not an issue if NVAs are stateless, but it results in errors if NVAs keep track of connection states, such as firewalls.
## Next steps * [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md)
-* [Learn how Azure Route Server works with a network virtual appliance](resource-manager-template-samples.md)
+* [Learn how to peer Azure Route Server with a network virtual appliance (NVA)](tutorial-configure-route-server-with-quagga.md)
route-server Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/resource-manager-template-samples.md
Title: Resource Manager template samples
-description: Information about sample Azure Resource Manager templates provided for Azure Route Server.
+description: Get started with Azure Route Server using an Azure Resource Manager template sample.
Previously updated : 09/01/2021 Last updated : 02/23/2023+ # Azure Resource Manager templates for Azure Route Server
The following table includes links to Azure Resource Manager templates for Azure
| Title | Description | | | -- |
-| [Route Server and Quagga](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/route-server-quagga) | Deploy an Azure Route Server and Quagga (NVA) in a virtual network. |
+| [Route Server and Quagga NVA](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.network/route-server-quagga) | Deploy an Azure Route Server and a Quagga network virtual appliance in a virtual network. |
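If you'd rather deploy the sample from the command line, a minimal Azure CLI sketch is shown below. The resource group name and location are placeholders, and `<template-uri>` stands for the raw URL of the sample's main template file; the CLI prompts for any required template parameters.

```azurecli
# Create a resource group to hold the sample deployment.
az group create --name "rg-route-server-sample" --location "eastus"

# Deploy the Route Server and Quagga NVA sample; replace <template-uri> with the raw URL of the sample's azuredeploy.json.
az deployment group create \
  --resource-group "rg-route-server-sample" \
  --template-uri "<template-uri>"
```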
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md
Title: Azure Route Server frequently asked questions (FAQs)
+ Title: Azure Route Server frequently asked questions (FAQ)
description: Find answers to frequently asked questions about Azure Route Server. Previously updated : 12/06/2022 Last updated : 02/23/2023 -+
-# Azure Route Server frequently asked questions (FAQs)
+# Azure Route Server frequently asked questions (FAQ)
## What is Azure Route Server? Azure Route Server is a fully managed service that allows you to easily manage routing between your network virtual appliance (NVA) and your virtual network.
-### Is Azure Route Server just a VM?
+### Is Azure Route Server just a virtual machine?
-No. Azure Route Server is a service designed with high availability. If it's deployed in an Azure region that supports [Availability Zones](../availability-zones/az-overview.md), it will have zone-level redundancy.
+No. Azure Route Server is a service designed with high availability. Your route server has zone-level redundancy if you deploy it in an Azure region that supports [Availability Zones](../availability-zones/az-overview.md).
-### How many Azure Route Servers can I create in a virtual network?
+### How many route servers can I create in a virtual network?
-You can create only one route server in a virtual network. It must be deployed in a dedicated subnet called *RouteServerSubnet*.
+You can create only one route server in a virtual network. You must deploy the route server in a dedicated subnet called *RouteServerSubnet*.
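For illustration, here's a minimal Azure CLI sketch of deploying a route server into that dedicated subnet. The resource group, virtual network, and public IP names are placeholders, and the Standard public IP is assumed to exist already.

```azurecli
# Look up the resource ID of the dedicated RouteServerSubnet.
subnet_id=$(az network vnet subnet show \
  --resource-group "rg-hub" \
  --vnet-name "vnet-hub" \
  --name "RouteServerSubnet" \
  --query id --output tsv)

# Create the route server in that subnet, using an existing Standard public IP.
az network routeserver create \
  --resource-group "rg-hub" \
  --name "routeserver-hub" \
  --hosted-subnet "$subnet_id" \
  --public-ip-address "pip-routeserver-hub"
```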
### Does Azure Route Server support virtual network peering?
-Yes, if you peer a virtual network hosting the Azure Route Server to another virtual network and you enable Use Remote Gateway on the second virtual network, Azure Route Server will learn the address spaces of that virtual network and send them to all the peered NVAs. It will also program the routes from the NVAs into the routing table of the VMs in the peered virtual network.
+Yes, if you peer a virtual network hosting the Azure Route Server to another virtual network and you enable **Use the remote virtual network's gateway or Route Server** on the second virtual network, Azure Route Server learns the address spaces of the peered virtual network and sends them to all the peered network virtual appliances (NVAs). It also programs the routes from the NVAs into the route table of the virtual machines in the peered virtual network.
### <a name = "protocol"></a>What routing protocols does Azure Route Server support?
-Azure Route Server supports Border Gateway Protocol (BGP) only. Your NVA needs to support multi-hop external BGP because youΓÇÖll need to deploy Azure Route Server in a dedicated subnet in your virtual network. The [ASN](https://en.wikipedia.org/wiki/Autonomous_system_(Internet)) you choose must be different from the one Azure Route Server uses when you configure the BGP on your NVA.
+Azure Route Server supports only Border Gateway Protocol (BGP). Your network virtual appliance (NVA) must support multi-hop external BGP because you need to deploy the Route Server in a dedicated subnet in your virtual network. When you configure BGP on your NVA, the ASN you choose must be different from the Route Server ASN.
### Does Azure Route Server route data traffic between my NVA and my VMs?
-No. Azure Route Server only exchanges BGP routes with your NVA. The data traffic goes directly from the NVA to the destination VM and directly from the VM to the NVA.
+No. Azure Route Server only exchanges BGP routes with your network virtual appliance (NVA). The data traffic goes directly from the NVA to the destination virtual machine (VM) and directly from the VM to the NVA.
### Does Azure Route Server store customer data?
-No. Azure Route Server only exchanges BGP routes with your NVA and then propagates them to your virtual network.
+
+No. Azure Route Server only exchanges BGP routes with your network virtual appliance (NVA) and then propagates them to your virtual network.
### Why does Azure Route Server require a public IP address?
-Azure Router Server needs to ensure connectivity to the backend service that manages the Route Server configuration, as such a public IP address is required. This public IP address doesn't constitute a security exposure of your virtual network.
+Azure Route Server needs to ensure connectivity to the backend service that manages the Route Server configuration; that's why it needs the public IP address. This public IP address doesn't constitute a security exposure of your virtual network.
### Does Azure Route Server support IPv6?
No. We'll add IPv6 support in the future.
### If Azure Route Server receives the same route from more than one NVA, how does it handle them?
-If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the VMs in the virtual network. When the VMs send traffic to the destination of this route, the VM hosts will do Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
+If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the virtual machines (VMs) in the virtual network. When a VM sends traffic to the destination of this route, the VM host uses Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
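To see what a route server has actually learned from a given NVA peering (and therefore what it programs into the VMs' route tables), you can query the learned routes. A minimal Azure CLI sketch with placeholder names:

```azurecli
# List the routes Azure Route Server has learned from the peering named "nva-1".
az network routeserver peering list-learned-routes \
  --resource-group "rg-hub" \
  --routeserver "routeserver-hub" \
  --name "nva-1"
```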
### Does Azure Route Server preserve the BGP AS Path of the route it receives? Yes, Azure Route Server propagates the route with the BGP AS Path intact.
-### Do I need to peer each NVA with both Route Server instances?
-Yes, to ensure that VNet routes are successfully advertised over the target NVA connections, and to configure High Availability, we recommend peering each NVA instances with both instances of Route Server.
+### Do I need to peer each NVA with both Azure Route Server instances?
+
+Yes, to ensure that virtual network routes are successfully advertised over the target NVA connections, and to configure High Availability, we recommend peering each NVA instance with both instances of Route Server.
### Does Azure Route Server preserve the BGP communities of the route it receives?
Yes, Azure Route Server propagates the route with the BGP communities as is.
### What is the BGP timer setting of Azure Route Server?
-The Keep-alive timer is set to 60 seconds and the Hold-down timer 180 seconds.
+The Azure Route Server keepalive timer is 60 seconds, and the hold timer is 180 seconds.
### What Autonomous System Numbers (ASNs) can I use?
-You can use your own public ASNs or private ASNs in your network virtual appliance. You can't use the ranges reserved by Azure or IANA.
-The following ASNs are reserved by Azure or IANA:
+You can use your own public ASNs or private ASNs in your network virtual appliance (NVA). You can't use ASNs reserved by Azure or IANA.
* ASNs reserved by Azure: * Public ASNs: 8074, 8075, 12076
The following ASNs are reserved by Azure or IANA:
No, Azure Route Server supports only 16-bit (2 bytes) ASNs.
-### Can I associate a User Defined Route (UDR) to the RouteServerSubnet?
+### Can I associate a UDR to the *RouteServerSubnet*?
-No, Azure Route Server doesn't support configuring a UDR on the RouteServerSubnet. It should be noted that Azure Route Server doesn't route any data traffic between NVAs and VMs.
+No, Azure Route Server doesn't support configuring a user defined route (UDR) on the *RouteServerSubnet*. Azure Route Server doesn't route any data traffic between network virtual appliances (NVAs) and virtual machines (VMs).
-### Can I associate a Network Security group (NSG) to the RouteServerSubnet?
+### Can I associate a network security group (NSG) to the RouteServerSubnet?
No, Azure Route Server doesn't support NSG association to the RouteServerSubnet.
No, Azure Route Server doesn't forward data traffic. To enable transit connectiv
### Can I use Azure Route Server to direct traffic between subnets in the same virtual network to flow inter-subnet traffic through the NVA?
-No. System routes for traffic related to virtual network, virtual network peerings, or virtual network service endpoints, are preferred routes, even if BGP routes are more specific. As Route Server uses BGP to advertise routes, currently this is not supported by design. You must continue to use UDRs to force override the routes, and you can't utilize BGP to quickly failover these routes. You must continue to use a third party solution to update the UDRs via the API in a failover situation, or use an Azure Load Balancer with HA ports mode to direct traffic.
+No. Azure Route Server uses BGP to advertise routes. System routes for traffic related to virtual network, virtual network peerings, or virtual network service endpoints, are preferred routes, even if BGP routes are more specific. You must continue to use user defined routes (UDRs) to override system routes, and you can't utilize BGP to quickly fail over these routes. You must continue to use a third-party solution to update the UDRs via the API in a failover situation, or use an Azure Load Balancer with HA ports mode to direct traffic.
-You can still use Route Server to direct traffic between subnets in different virtual networks to flow using the NVA. The only possible design that may work is one subnet per "spoke" virtual network and all virtual networks are peered to a "hub" virtual network, but this is very limiting and needs to take into scaling considerations and Azure's maximum limits on virtual networks vs subnets.
+You can still use Route Server to direct traffic between subnets in different virtual networks through the NVA. A possible design that may work is one subnet per "spoke" virtual network, with all "spoke" virtual networks peered to a "hub" virtual network. This design is very limiting and needs to take into account scaling considerations and Azure's maximum limits on virtual networks versus subnets.
### Can Azure Route Server filter out routes from NVAs?
-Azure Route Server supports ***NO_ADVERTISE*** BGP Community. If an NVA advertises routes with this community string to the route server, the route server won't advertise it to other peers including the ExpressRoute gateway. This feature can help reduce the number of routes to be sent from Azure Route Server to ExpressRoute.
+Azure Route Server supports the ***NO_ADVERTISE*** BGP community. If a network virtual appliance (NVA) advertises routes with this community string to the route server, the route server doesn't advertise them to other peers, including the ExpressRoute gateway. This feature can help reduce the number of routes sent from Azure Route Server to ExpressRoute.
+
+### Can Azure Route Server provide transit between ExpressRoute and a Point-to-Site (P2S) VPN gateway connection when enabling the *branch-to-branch*?
+
+No, Azure Route Server provides transit only between ExpressRoute and Site-to-Site (S2S) VPN gateway connections (when enabling the *branch-to-branch* setting).
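For reference, here's a hedged Azure CLI sketch of turning on the *branch-to-branch* setting, assuming the `--allow-b2b-traffic` flag of `az network routeserver update`; the resource names are placeholders.

```azurecli
# Enable route exchange between the ExpressRoute gateway and the S2S VPN gateway (branch-to-branch).
az network routeserver update \
  --resource-group "rg-hub" \
  --name "routeserver-hub" \
  --allow-b2b-traffic true
```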
-## <a name = "limitations"></a>Route Server Limits
+### <a name = "limitations"></a>What are Azure Route Server limits?
Azure Route Server has the following limits (per deployment).
sap Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/deploy-s4hana.md
description: Learn how to deploy S/4HANA infrastructure with Azure Center for SA
Previously updated : 02/03/2023 Last updated : 02/22/2023 #Customer intent: As a developer, I want to deploy S/4HANA infrastructure using Azure Center for SAP solutions so that I can manage SAP workloads in the Azure portal.
There are three deployment options that you can select for your infrastructure,
Azure Center for SAP solutions automatically configures a database disk layout for the deployment. To view the layout for a single database server, make sure to select a VM SKU. Then, select **View disk configuration**. If there's more than one database server, the layout applies to each server.
+1. Select **Next: Visualize Architecture**.
+
+1. In the **Visualize Architecture** tab, visualize the architecture of the VIS that you're deploying.
+
+ 1. To view the visualization, make sure to configure all the inputs listed on the tab.
+
+ 1. Optionally, click and drag resources or containers to move them around visually.
+
+ 1. Click on **Reset** to reset the visualization to its default state. That is, revert any changes you may have made to the position of resources or containers.
+
+ 1. Click on **Scale to fit** to reset the visualization to its default zoom level.
+
+ 1. Click on **Zoom in** to zoom into the visualization.
+
+ 1. Click on **Zoom out** to zoom out of the visualization.
+
+ 1. Click on **Download JPG** to export the visualization as a JPG file.
+
+ 1. Click on **Feedback** to share your feedback on the visualization experience.
+
+ The visualization doesn't represent all resources for the VIS that you're deploying. For example, it doesn't represent disks and NICs.
+ 1. Select **Next: Tags**. 1. Optionally, enter tags to apply to all resources created by the Azure Center for SAP solutions process. These resources include the VIS, ASCS instances, Application Server instances, Database instances, VMs, disks, and NICs.
search Index Sql Relational Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-sql-relational-data.md
Previously updated : 02/08/2023 Last updated : 02/22/2023 # How to model relational SQL data for import and indexing in Azure Cognitive Search Azure Cognitive Search accepts a flat rowset as input to the [indexing pipeline](search-what-is-an-index.md). If your source data originates from joined tables in a SQL Server relational database, this article explains how to construct the result set, and how to model a parent-child relationship in an Azure Cognitive Search index.
-As an illustration, we'll refer to a hypothetical hotels database, based on [demo data](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels). Assume the database consists of a Hotels$ table with 50 hotels, and a Rooms$ table with rooms of varying types, rates, and amenities, for a total of 750 rooms. There is a one-to-many relationship between the tables. In our approach, a view will provide the query that returns 50 rows, one row per hotel, with associated room detail embedded into each row.
+As an illustration, we refer to a hypothetical hotels database, based on [demo data](https://github.com/Azure-Samples/azure-search-sample-data/tree/master/hotels). Assume the database consists of a Hotels$ table with 50 hotels, and a Rooms$ table with rooms of varying types, rates, and amenities, for a total of 750 rooms. There's a one-to-many relationship between the tables. In our approach, a view provides the query that returns 50 rows, one row per hotel, with associated room detail embedded into each row.
![Tables and view in the Hotels database](media/index-sql-relational-data/hotels-database-tables-view.png "Tables and view in the Hotels database") ## The problem of denormalized data
-One of the challenges in working with one-to-many relationships is that standard queries built on joined tables will return denormalized data, which doesn't work well in an Azure Cognitive Search scenario. Consider the following example that joins hotels and rooms.
+One of the challenges in working with one-to-many relationships is that standard queries built on joined tables return denormalized data, which doesn't work well in an Azure Cognitive Search scenario. Consider the following example that joins hotels and rooms.
```sql SELECT * FROM Hotels$ INNER JOIN Rooms$ ON Rooms$.HotelID = Hotels$.HotelID ```+ Results from this query return all of the Hotel fields, followed by all Room fields, with preliminary hotel information repeating for each room value. ![Denormalized data, redundant hotel data when room fields are added](media/index-sql-relational-data/denormalize-data-query.png "Denormalized data, redundant hotel data when room fields are added") -
-While this query succeeds on the surface (providing all of the data in a flat row set), it fails in delivering the right document structure for the expected search experience. During indexing, Azure Cognitive Search will create one search document for each row ingested. If your search documents looked like the above results, you would have perceived duplicates - seven separate documents for the Twin Dome hotel alone. A query on "hotels in Florida" would return seven results for just the Twin Dome hotel, pushing other relevant hotels deep into the search results.
+While this query succeeds on the surface (providing all of the data in a flat row set), it fails in delivering the right document structure for the expected search experience. During indexing, Azure Cognitive Search creates one search document for each row ingested. If your search documents looked like the above results, you would have perceived duplicates - seven separate documents for the Twin Dome hotel alone. A query on "hotels in Florida" would return seven results for just the Twin Dome hotel, pushing other relevant hotels deep into the search results.
To get the expected experience of one document per hotel, you should provide a rowset at the right granularity, but with complete information. This article explains how.
To deliver the expected search experience, your data set should consist of one r
The solution is to capture the room detail as nested JSON, and then insert the JSON structure into a field in a view, as shown in the second step.
-1. Assume you have two joined tables, Hotels$ and Rooms$, that contain details for 50 hotels and 750 rooms, and are joined on the HotelID field. Individually, these tables contain 50 hotels and 750 related rooms.
+1. Assume you have two joined tables, Hotels$ and Rooms$, that contain details for 50 hotels and 750 rooms and are joined on the HotelID field. Individually, these tables contain 50 hotels and 750 related rooms.
```sql CREATE TABLE [dbo].[Hotels$](
This rowset is now ready for import into Azure Cognitive Search.
On the Azure Cognitive Search side, create an index schema that models the one-to-many relationship using nested JSON. The result set you created in the previous section generally corresponds to the index schema provided below (we cut some fields for brevity).
-The following example is similar to the example in [How to model complex data types](search-howto-complex-data-types.md#create-complex-fields). The *Rooms* structure, which has been the focus of this article, is in the fields collection of an index named *hotels*. This example also shows a complex type for *Address*, which differs from *Rooms* in that it is composed of a fixed set of items, as opposed to the multiple, arbitrary number of items allowed in a collection.
+The following example is similar to the example in [How to model complex data types](search-howto-complex-data-types.md#create-complex-fields). The *Rooms* structure, which has been the focus of this article, is in the fields collection of an index named *hotels*. This example also shows a complex type for *Address*, which differs from *Rooms* in that it's composed of a fixed set of items, as opposed to the multiple, arbitrary number of items allowed in a collection.
```json {
The following example is similar to the example in [How to model complex data ty
{ "name": "HotelName", "type": "Edm.String", "searchable": true, "filterable": false }, { "name": "Description", "type": "Edm.String", "searchable": true, "analyzer": "en.lucene" }, { "name": "Description_fr", "type": "Edm.String", "searchable": true, "analyzer": "fr.lucene" },
- { "name": "Category", "type": "Edm.String", "searchable": true, "filterable": false },
+ { "name": "Category", "type": "Edm.String", "searchable": true, "filterable": true, "facetable": true },
{ "name": "ParkingIncluded", "type": "Edm.Boolean", "filterable": true, "facetable": true },
+ { "name": "Tags", "type": "Collection(Edm.String)", "searchable": true, "filterable": true, "facetable": true },
{ "name": "Address", "type": "Edm.ComplexType", "fields": [ { "name": "StreetAddress", "type": "Edm.String", "filterable": false, "sortable": false, "facetable": false, "searchable": true },
The following example is similar to the example in [How to model complex data ty
{ "name": "Description_fr", "type": "Edm.String", "searchable": true, "analyzer": "fr.lucene" }, { "name": "Type", "type": "Edm.String", "searchable": true }, { "name": "BaseRate", "type": "Edm.Double", "filterable": true, "facetable": true },
- { "name": "BedOptions", "type": "Edm.String", "searchable": true, "filterable": true, "facetable": true },
+ { "name": "BedOptions", "type": "Edm.String", "searchable": true, "filterable": true, "facetable": false },
{ "name": "SleepsCount", "type": "Edm.Int32", "filterable": true, "facetable": true },
- { "name": "SmokingAllowed", "type": "Edm.Boolean", "filterable": true, "facetable": true },
+ { "name": "SmokingAllowed", "type": "Edm.Boolean", "filterable": true, "facetable": false},
{ "name": "Tags", "type": "Edm.Collection", "searchable": true } ] }
The following example is similar to the example in [How to model complex data ty
} ```
-Given the previous result set and the above index schema, you have all the required components for a successful indexing operation. The flattened data set meets indexing requirements yet preserves detail information. In the Azure Cognitive Search index, search results will fall easily into hotel-based entities, while preserving the context of individual rooms and their attributes.
+Given the previous result set and the above index schema, you have all the required components for a successful indexing operation. The flattened data set meets indexing requirements yet preserves detail information. In the Azure Cognitive Search index, search results will fall easily into hotel-based entities, while preserving the context of individual rooms and their attributes.
+
+## Facet behavior on complex type subfields
+
+Fields that have a parent, such as the fields under Address and Rooms, are called *subfields*. Although you can assign a "facetable" attribute to a subfield, the count of the facet will always be for the main document.
+
+For complex types like Address, where there's just one "Address/City" or "Address/stateProvince" in the document, the facet behavior works as expected. However, in the case of Rooms, where there are multiple subdocuments for each main document, the facet counts can be misleading.
+
+As noted in [Model complex types](search-howto-complex-data-types.md): "the document counts returned in the facet results are calculated for the parent document (a hotel), not the subdocuments in a complex collection (rooms). For example, suppose a hotel has 20 rooms of type "suite". Given this facet parameter facet=Rooms/Type, the facet count is one for the hotel, not 20 for the rooms."
## Next steps
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
Last updated 06/10/2022
In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Database for MySQL and makes it searchable in Azure Cognitive Search.
-This article supplements [Creating indexers in Azure Cognitive Search](search-howto-create-indexers.md) with information that's specific to indexing files in Azure DB for MySQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers:
+This article supplements [Creating indexers in Azure Cognitive Search](search-howto-create-indexers.md) with information that's specific to indexing files in Azure Database for MySQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers:
- Create a data source - Create an index
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md
Previously updated : 09/08/2022 Last updated : 02/23/2023 # Index data from SharePoint document libraries
Last updated 09/08/2022
This article explains how to configure a [search indexer](search-indexer-overview.md) to index documents stored in SharePoint document libraries for full text search in Azure Cognitive Search. Configuration steps are followed by a deeper exploration of behaviors and scenarios you're likely to encounter.
-> [!NOTE]
-> SharePoint supports a granular authorization model that determines per-user access at the document level. The SharePoint indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should investigate [security filters to trim results](search-security-trimming-for-azure-search-with-aad.md) of unauthorized content.
## Functionality An indexer in Azure Cognitive Search is a crawler that extracts searchable data and metadata from a data source. The SharePoint indexer will connect to your SharePoint site and index documents from one or more document libraries. The indexer provides the following functionality: + Index content and metadata from one or more document libraries.
-+ Incremental indexing, where the indexer identifies which files have changed and indexes only the updated content. For example, if five PDFs are originally indexed and one is updated, only the updated PDF is indexed.
++ Incremental indexing, where the indexer identifies which file content or metadata has changed and indexes only the updated data. For example, if five PDFs are originally indexed and one is updated, only the updated PDF is indexed. + Deletion detection is built in. If a document is deleted from a document library, the indexer will detect the delete on the next indexer run and remove the document from the index. + Text and normalized images will be extracted by default from the documents that are indexed. Optionally a [skillset](cognitive-search-working-with-skillsets.md) can be added to the pipeline for [AI enrichment](cognitive-search-concept-intro.md).
You can also continue indexing if errors happen at any point of processing, eith
} ```
+## Limitations and considerations
+
+These are the limitations of this feature:
+
++ Indexing [SharePoint Lists](https://support.microsoft.com/office/introduction-to-lists-0a1c3ace-def0-44af-b225-cfa8d92c52d7) is not supported.
+
++ If a SharePoint file's content or metadata has been indexed, renaming a SharePoint folder in its parent hierarchy is not a condition that will re-index the document.
+
++ Indexing SharePoint .ASPX site content is not supported.
+
++ [Private endpoint](search-indexer-howto-access-private.md) is not supported.
+
++ SharePoint supports a granular authorization model that determines per-user access at the document level. The SharePoint indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should investigate [security filters to trim results](search-security-trimming-for-azure-search-with-aad.md) of unauthorized content.
+
+These are the considerations when using this feature:
+
++ If you need to implement a SharePoint content indexing solution with Cognitive Search in a production environment, consider creating a custom connector using [Microsoft Graph Data Connect](/graph/data-connect-concept-overview) with the [Blob indexer](search-howto-indexing-azure-blob-storage.md) and the [Microsoft Graph API](/graph/use-the-api) for incremental indexing.
+
++ There could be Microsoft 365 processes that update SharePoint file system metadata (based on different configurations in SharePoint) and cause the SharePoint indexer to trigger. Make sure that you test your setup and understand the document processing count prior to using any AI enrichment. Because this is a third-party connector to Azure (SharePoint is located in Microsoft 365), SharePoint configuration is not checked by the indexer.
+
## See also
+ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Previously updated : 02/14/2023 Last updated : 02/22/2023 # Make outbound connections through a private endpoint
When evaluating shared private links for your scenario, remember these constrain
+ An Azure PaaS resource from the following list of supported resource types, configured to run in a virtual network, with a private endpoint created through Azure Private Link. ++ You should have a minimum of Contributor permissions on both Azure Cognitive Search and the Azure PaaS resource for which you're creating the shared private link.+ <a name="group-ids"></a> ### Supported resource types
These Private Link tutorials provide steps for creating a private endpoint for A
## 1 - Create a shared private link
-Use the Azure portal, Management REST API, the Azure CLI, or Azure PowerShell to create a shared private link. Remember to use the preview API version, either `2020-08-01-preview` or `2021-04-01-preview`, if you're using a group ID that's in preview. The following resource types are in preview and require a preview API: `managedInstance`, `mySqlServer`, `sites`.
+Use the Azure portal, Management REST API, the Azure CLI, or Azure PowerShell to create a shared private link.
+
+Here are a few tips:
+++ Give the private link a meaningful name. In the Azure PaaS resource, a shared private link appears alongside other private endpoints. A name like "shared-private-link-for-search" can remind you how it's used.
-It's possible to create a shared private link for an Azure PaaS resource that doesn't have a private endpoint, but it won't work unless the [resource has a private endpoint](#private-endpoint-verification).
++ Don't skip the [private link verification](#private-endpoint-verification) step. It's possible to create a shared private link for an Azure PaaS resource that doesn't have a private endpoint. The link won't work if the resource isn't registered.
-Recall that you can't use the portal or the Azure CLI `az search` command to create a shared private link to an Azure SQL Managed Instance. See [Create a shared private link for SQL Managed Instance](#create-a-shared-private-link-for-a-sql-managed-instance) for that resource type.
++ SQL managed instance has extra requirements for creating a private link. Currently, you can't use the portal or the Azure CLI `az search` command because neither one formulates a valid URI. Instead, follow the instructions in [Create a shared private link for SQL Managed Instance](#create-a-shared-private-link-for-a-sql-managed-instance) in this article for a workaround.
-When you complete these steps, you have a shared private link that's provisioned in a pending state. The resource owner needs to approve the request before it's operational.
+When you complete these steps, you have a shared private link that's provisioned in a pending state. **It takes several minutes to create the link**. Once it's created, the resource owner needs to approve the request before it's operational.
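If you only need to fire off the request quickly, a minimal Azure CLI sketch for a blob shared private link is shown here; the service, resource group, and storage account values are placeholders, and the group ID depends on the target resource type. The portal, REST, and PowerShell walkthroughs follow.

```azurecli
# Request a shared private link from the search service to a storage account's blob sub-resource.
az search shared-private-link-resource create \
  --resource-group "rg-search" \
  --service-name "my-search-service" \
  --name "shared-private-link-for-search" \
  --group-id "blob" \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>" \
  --request-message "Please approve this request."
```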
### [**Azure portal**](#tab/portal-create)
When you complete these steps, you have a shared private link that's provisioned
### [**REST API**](#tab/rest-create)
-See [Manage with REST](search-manage-rest.md) for instructions on setting up a REST client for issuing Management REST API requests.
+> [!NOTE]
+> Preview API versions, either `2020-08-01-preview` or `2021-04-01-preview`, are required for group IDs that are in preview. The following resource types are in preview: `managedInstance`, `mySqlServer`, `sites`.
+> For `managedInstance`, see [create a shared private link for SQL Managed Instance](#create-a-shared-private-link-for-a-sql-managed-instance) for help formulating a fully qualified domain name.
-First, use [Get](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/get) to review any existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and sub-resource combination.
+While tools like Azure portal, Azure PowerShell, or the Azure CLI have built-in mechanisms for account sign-in, a REST client like Postman needs to provide a bearer token that allows your request to go through.
-```http
-GET https://https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Search/searchServices/{{service-name}}/sharedPrivateLinkResources?api-version={{api-version}}
-```
+Because it's easy and quick, this section uses Azure CLI steps for getting a bearer token. For more durable approaches, see [Manage with REST](search-manage-rest.md).
-Use [Create or Update](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/create-or-update) for the next step, providing the name of the link name on the URI, and the target Azure resource in the body of the request. The following example is for blob storage.
+1. Open a command line and run `az login` for Azure sign-in.
-```http
-PUT https://https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Search/searchServices/{{service-name}}/sharedPrivateLinkResources/{{shared-private-link-name}}?api-version={{api-version}}
-{
- "properties":
- {
- "groupID": "blob",
- "privateLinkResourceId": "/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Storage/storageAccounts/{{storage-account-name}}",
- "provisioningState": "",
- "requestMessage": "Please approve this request.",
- "resourceRegion": "",
- "status": ""
- }
-}
+1. Show the active account and subscription. Verify that this subscription is the same one that has the Azure PaaS resource for which you're creating the shared private link.
-```
+ ```azurecli
+ az account show
+ ```
-Rerun the first request to monitor the provisioning state as it transitions from updating to succeeded.
+ Change the subscription if it's not the right one:
+
+ ```azurecli
+ az account set --subscription {{Azure PaaS subscription ID}}
+ ```
+
+1. Create a bearer token, and then copy the entire token (everything between the quotation marks).
+
+ ```azurecli
+ az account get-access-token
+ ```
+
+1. Switch to a REST client and set up a [GET Shared Private Link Resource](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/get). This step allows you to review existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and sub-resource combination.
+
+ ```http
+    GET https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Search/searchServices/{{service-name}}/sharedPrivateLinkResources?api-version={{api-version}}
+ ```
+
+1. On the **Authorization** tab, select **Bearer Token** and then paste in the token.
+
+1. Set the content type to JSON.
+
+1. Send the request. You should get a list of all shared private link resources that exist for your search service. Make sure there's no existing shared private link for the resource and sub-resource combination.
+
+1. Formulate a PUT request to [Create or Update Shared Private Link](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/create-or-update) for the Azure PaaS resource. Provide a URI and request body similar to the following example:
+
+ ```http
+    PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Search/searchServices/{{service-name}}/sharedPrivateLinkResources/{{shared-private-link-name}}?api-version={{api-version}}
+ {
+ "properties":
+ {
+ "groupID": "blob",
+ "privateLinkResourceId": "/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Storage/storageAccounts/{{storage-account-name}}",
+ "provisioningState": "",
+ "requestMessage": "Please approve this request.",
+ "resourceRegion": "",
+ "status": ""
+ }
+ }
+ ```
+
+1. As before, provide the bearer token and make sure the content type is JSON.
+
+ If the Azure PaaS resource is in a different subscription, use the Azure CLI to change the subscription, and then get a bearer token that is valid for that subscription:
+
+ ```azurecli
+ az account set --subscription {{Azure PaaS subscription ID}}
+
+ az account get-access-token
+ ```
+
+1. Send the request. To check the status, rerun the first GET Shared Private Link request to monitor the provisioning state as it transitions from updating to succeeded.
### [**PowerShell**](#tab/ps-create)
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/troubleshoot-shared-private-link-resources.md
Previously updated : 01/18/2023 Last updated : 02/22/2023 # Troubleshoot issues with Shared Private Links in Azure Cognitive Search
A search service initiates the request to create a shared private link, but Azur
Shared private link resources that have failed Azure Resource Manager deployment will show up in [List](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/list-by-service) and [Get](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/get) API calls, but will have a "Provisioning State" of `Failed`. Once the reason of the Azure Resource Manager deployment failure has been ascertained, delete the `Failed` resource and re-create it after applying the appropriate resolution from the following table. | Deployment failure reason | Description | Resolution |
-| | | |
-| Network resource provider not registered on target resource's subscription | A private endpoint (and associated DNS mappings) is created for the target resource (Storage Account, Azure Cosmos DB, Azure SQL) via the `Microsoft.Network` resource provider (RP). If the subscription that hosts the target resource ("target subscription") isn't registered with `Microsoft.Network` RP, then the Azure Resource Manager deployment can fail. | You need to register this RP in their target subscription. You can [register the resource provider](../azure-resource-manager/management/resource-providers-and-types.md) using the Azure portal, PowerShell, or CLI.|
+| - | -- | - |
+| "LinkedAuthorizationFailed" | The error message states that the client has permission to create the shared private link on the search service, but doesn't have permission to perform action 'privateEndpointConnectionApproval/action' on the linked scope. | Re-check the private link ID in the request to make sure there are no errors or omissions in the URI. If Azure Cognitive Search and the Azure PaaS resource are in different subscriptions, and if you're using REST or a command line interface, make sure that the [active Azure account is for the Azure PaaS resource](search-indexer-howto-access-private.md?tabs=rest-create#1create-a-shared-private-link). For REST clients, make sure you're not using an expired bearer token, and that the token is valid for the active subscription. |
+| Network resource provider not registered on target resource's subscription | A private endpoint (and associated DNS mappings) is created for the target resource (Storage Account, Azure Cosmos DB, Azure SQL) via the `Microsoft.Network` resource provider (RP). If the subscription that hosts the target resource ("target subscription") isn't registered with `Microsoft.Network` RP, then the Azure Resource Manager deployment can fail. | You need to register this RP in the target subscription. You can [register the resource provider](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider) using the Azure portal, PowerShell, or CLI.|
| Invalid `groupId` for the target resource | When Azure Cosmos DB accounts are created, you can specify the API type for the database account. While Azure Cosmos DB offers several different API types, Azure Cognitive Search only supports "Sql" as the `groupId` for shared private link resources. When a shared private link of type "Sql" is created for a `privateLinkResourceId` pointing to a non-Sql database account, the Azure Resource Manager deployment will fail because of the `groupId` mismatch. The Azure resource ID of an Azure Cosmos DB account isn't sufficient to determine the API type that is being used. Azure Cognitive Search tries to create the private endpoint, which is then denied by Azure Cosmos DB. | You should ensure that the `privateLinkResourceId` of the specified Azure Cosmos DB resource is for a database account of "Sql" API type | | Target resource not found | Existence of the target resource specified in `privateLinkResourceId` is checked only during the commencement of the Azure Resource Manager deployment. If the target resource is no longer available, then the deployment will fail. | You should ensure that the target resource is present in the specified subscription and resource group and isn't moved or deleted. |
-| Transient/other errors | The Azure Resource Manager deployment can fail if there is an infrastructure outage or because of other unexpected reasons. This should be rare and usually indicates a transient state. | Retry creating this resource at a later time. If the problem persists, reach out to Azure Support. |
+| Transient/other errors | The Azure Resource Manager deployment can fail if there's an infrastructure outage or because of other unexpected reasons. This should be rare and usually indicates a transient state. | Retry creating this resource at a later time. If the problem persists, reach out to Azure Support. |
## Issues approving the backing private endpoint
Shared private links and private endpoints are used when search service **Public
If you observe that the connectivity change operation is taking a significant amount of time, wait for a few hours. Connectivity change operations involve operations such as updating DNS records which may take longer than expected.
-If **Public Network Access** is changed, existing shared private links and private endpoints may not work correctly. If existing shared private links and private endpoints stop working during a connectivity change operation, wait a few hours for the operation to complete. If they are still not working, try deleting and recreating them.
+If **Public Network Access** is changed, existing shared private links and private endpoints may not work correctly. If existing shared private links and private endpoints stop working during a connectivity change operation, wait a few hours for the operation to complete. If they're still not working, try deleting and recreating them.
## Shared private link resource stalled in an "Updating" or "Incomplete" state
Typically, a shared private link resource should go to a terminal state (`Succeeded
In rare circumstances, Azure Cognitive Search can fail to correctly mark the state of the shared private link resource to a terminal state (`Succeeded` or `Failed`). This usually occurs due to an unexpected failure. Shared private link resources are automatically transitioned to a `Failed` state if they have been "stuck" in a non-terminal state for more than a few hours.
-If you observe that the shared private link resource has not transitioned to a terminal state, wait for a few hours to ensure that it becomes `Failed` before you can delete it and re-create it. Alternatively, instead of waiting you can try to create another shared private link resource with a different name (keeping all other parameters the same).
+If you observe that the shared private link resource hasn't transitioned to a terminal state, wait for a few hours to ensure that it becomes `Failed` before you can delete it and re-create it. Alternatively, instead of waiting you can try to create another shared private link resource with a different name (keeping all other parameters the same).
## Updating a shared private link resource
Some common errors that occur during the deletion phase are listed below.
| Failure Type | Description | Resolution | | | | |
-| Resource is in non-terminal state | A shared private link resource that's not in a terminal state (`Succeeded` or `Failed`) can't be deleted. It is possible (rare) for a shared private link resource to be stuck in a non-terminal state for up to 8 hours. | Wait until the resource has reached a terminal state and retry the delete request. |
+| Resource is in non-terminal state | A shared private link resource that's not in a terminal state (`Succeeded` or `Failed`) can't be deleted. It's possible (rare) for a shared private link resource to be stuck in a non-terminal state for up to 8 hours. | Wait until the resource has reached a terminal state and retry the delete request. |
| Delete operation failed with error "Conflict" | The Azure Resource Manager operation to delete a shared private link resource reaches out to the resource provider of the target resource specified in `privateLinkResourceId` ("target RP") before it can remove the private endpoint and DNS mappings. Customers can utilize [Azure resource locks](../azure-resource-manager/management/lock-resources.md) to prevent any changes to their resources. When Azure Resource Manager reaches out to the target RP, it requires the target RP to modify the state of the target resource (to remove details about the private endpoint from its metadata). When the target resource has a lock configured on it (or its resource group/subscription), the Azure Resource Manager operation fails with a "Conflict" (and appropriate details). The shared private link resource won't be deleted. | Customers should remove the lock on the target resource before retrying the deletion operation. **Note**: This problem can also occur when customers try to delete a search service with shared private link resources that point to "locked" target resources | | Delete operation failed | The asynchronous Azure Resource Manager delete operation can fail in rare cases. When this operation fails, querying the state of the asynchronous operation will present customers with an error message and appropriate details. | Retry the operation at a later time, or reach out to Azure Support if the problem persists. | Resource stuck in "Deleting" state | In rare cases, a shared private link resource might be stuck in "Deleting" state for up to 8 hours, likely due to some catastrophic failure on the search RP. | Wait for 8 hours, after which the resource would transition to `Failed` state and then reissue the request.|
security Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-overview.md
Azure offers many mechanisms for keeping data private as it moves from one locat
### Data-link Layer encryption in Azure
-Whenever Azure Customer traffic moves between datacenters-- outside physical boundaries not controlled by Microsoft (or on behalf of Microsoft)-- a data-link layer encryption method using the [IEEE 802.1AE MAC Security Standards](https://1.ieee802.org/security/802-1ae/) (also known as MACsec) is applied from point-to-point across the underlying network hardware. The packets are encrypted and decrypted on the devices before being sent, preventing physical "man-in-the-middle" or snooping/wiretapping attacks. Because this technology is integrated on the network hardware itself, it provides line rate encryption on the network hardware with no measurable link latency increase. This MACsec encryption is on by default for all Azure traffic traveling within a region or between regions, and no action is required on customers' part to enable.
+Whenever Azure Customer traffic moves between datacenters-- outside physical boundaries not controlled by Microsoft (or on behalf of Microsoft)-- a data-link layer encryption method using the [IEEE 802.1AE MAC Security Standards](https://1.ieee802.org/security/802-1ae/) (also known as MACsec) is applied from point-to-point across the underlying network hardware. The packets are encrypted on the devices before being sent, preventing physical "man-in-the-middle" or snooping/wiretapping attacks. Because this technology is integrated on the network hardware itself, it provides line rate encryption on the network hardware with no measurable link latency increase. This MACsec encryption is on by default for all Azure traffic traveling within a region or between regions, and no action is required on customers' part to enable.
### TLS encryption in Azure
sentinel Kusto Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/kusto-overview.md
SigninLogs
### Sorting data: *sort* / *order*
-The [*sort*](/azure/data-explorer/kusto/query/sortoperator) operator (and the identical [order](/azure/data-explorer/kusto/query/orderoperator) operator) is used to sort your data by a specified column. In the following example, we ordered the results by *TimeGenerated* and set the order direction to descending with the *desc* parameter, placing the highest values first; for ascending order we would use *asc*.
+The [*sort*](/azure/data-explorer/kusto/query/sort-operator) operator (and the identical [order](/azure/data-explorer/kusto/query/orderoperator) operator) is used to sort your data by a specified column. In the following example, we ordered the results by *TimeGenerated* and set the order direction to descending with the *desc* parameter, placing the highest values first; for ascending order we would use *asc*.
> [!NOTE] > The default direction for sorts is descending, so technically you only have to specify if you want to sort in ascending order. However, specifying the sort direction in any case will make your query more readable.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
# What's new in Site Recovery
-The [Azure Site Recovery](site-recovery-overview.md) service is updated and improved on an ongoing basis. To help you stay up-to-date, this article provides you with information about the latest releases, new features, and new content. This page is updated on a regular basis.
+The [Azure Site Recovery](site-recovery-overview.md) service is updated and improved on an ongoing basis. To help you stay up-to-date, this article provides you with information about the latest releases, new features, and new content. This page is updated regularly.
You can follow and subscribe to Site Recovery update notifications in the [Azure updates](https://azure.microsoft.com/updates/?product=site-recovery) channel.
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
-[Rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 9.53.6594.1 | 5.1.8095.0 | 9.53.6594.1 | 5.1.8103.0 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9260.0
+[Rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 9.53.6615.1 | 5.1.8095.0 | 9.53.6615.1 | 5.1.8103.0 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9260.0
[Rollup 65](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 9.52.6522.1 | 5.1.7870.0 | 9.52.6522.1 | 5.1.7870.0 (VMware) & 5.1.7882.0 (Hyper-V) | 2.0.9259.0 [Rollup 64](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 9.51.6477.1 | 5.1.7802.0 | 9.51.6477.1 | 5.1.7802.0 | 2.0.9257.0 [Rollup 63](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 9.50.6419.1 | 5.1.7626.0 | 9.50.6419.1 | 5.1.7626.0 | 2.0.9249.0
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Details** | **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
-**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article.
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
**Azure VM disaster recovery** | Added support for Ubuntu 22.04, RHEL 8.7 and Cent OS 8.7 Linux distro. **VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 22.04, RHEL 8.7 and Cent OS 8.7 Linux distro.
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Details** | **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
-**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article.
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
**Azure VM disaster recovery** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro. **VMware VM/physical disaster recovery to Azure** | Added support for Debian 11 and SUSE Linux Enterprise Server 15 SP 4 Linux distro.<br/><br/> Added Modernized VMware to Azure DR support for government clouds. [Learn more](deploy-vmware-azure-replication-appliance-modernized.md#allow-urls-for-government-clouds).
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Details** | **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
-**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article.
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
**Azure VM disaster recovery** | Added support for Ubuntu 20.04 Linux distro.
-**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 20.04 Linux distro.<br/><br/> Modernized experience to enable disaster recovery of VMware vritual machines is now generally available.[Learn more](https://azure.microsoft.com/updates/vmware-dr-ga-with-asr).<br/><br/> Protecting physical machines modernized experience is now supported.<br/><br/> Portecting machines with private endpoint and managed identity enabled is now supported with modernized experience.
+**VMware VM/physical disaster recovery to Azure** | Added support for Ubuntu 20.04 Linux distro.<br/><br/> The modernized experience to enable disaster recovery of VMware virtual machines is now generally available. [Learn more](https://azure.microsoft.com/updates/vmware-dr-ga-with-asr).<br/><br/> Protecting physical machines is now supported in the modernized experience.<br/><br/> Protecting machines with private endpoint and managed identity enabled is now supported with the modernized experience.
## Updates (August 2022)
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Details** | **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
-**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article.
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
**Azure VM disaster recovery** | Added support for Oracle Linux 8.6 Linux distro. **VMware VM/physical disaster recovery to Azure** | Added support for Oracle Linux 8.6 Linux distro.<br/><br/> Introduced the migration capability to move existing replications from classic to modernized experience for disaster recovery of VMware virtual machines, enabled using Azure Site Recovery. [Learn more](move-from-classic-to-modernized-vmware-disaster-recovery.md).
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Details** | **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
-**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article.
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
**Azure VM disaster recovery** | Added support for RHEL 8.6 and CentOS 8.6 Linux distros. **VMware VM/physical disaster recovery to Azure** | Added support for RHEL 8.6 and CentOS 8.6 Linux distros.<br/><br/> Added support for configuring proxy bypass rules for VMware and Hyper-V replications, using private endpoints.<br/><br/> Added fixes related to various security issues present in the classic experience. **Hyper-V disaster recovery to Azure** | Added support for configuring proxy bypass rules for VMware and Hyper-V replications, using private endpoints.
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Details** | **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
-**Issue fixes/improvements** | A number of fixes and improvement as detailed in the rollup KB article.
-**Azure VM disaster recovery** | Added support for additional kernels for Debian 10 and Ubuntu 20.04 Linux distros. <br/><br/> Added public preview support for on-Demand Capacity Reservation integration.
+**Issue fixes/improvements** | Many fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for more kernels for Debian 10 and Ubuntu 20.04 Linux distros. <br/><br/> Added public preview support for on-Demand Capacity Reservation integration.
**VMware VM/physical disaster recovery to Azure** | Added support for thin provisioned LVM volumes.<br/><br/> ## Updates (January 2022)
For Site Recovery components, we support N-4 versions, where N is the latest rel
| **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article.
-**Azure VM disaster recovery** | Support added for retention points to be available for up to 15 days.<br/><br/>Added support for replication to be enabled on Azure virtual machines via Azure Policy. <br/><br/> Added support for ZRS managed disks when replicating Azure virtual machines. <br/><br/> Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux Linux 8.4 and Red Hat Enterprise Linux Linux 8.5 <br/><br/>
-**VMware VM/physical disaster recovery to Azure** | Support added for retention points to be available for up to 15 days.<br/><br/>Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux Linux 8.4 and Red Hat Enterprise Linux Linux 8.5 <br/><br/>
+**Azure VM disaster recovery** | Support added for retention points to be available for up to 15 days.<br/><br/>Added support for replication to be enabled on Azure virtual machines via Azure Policy. <br/><br/> Added support for ZRS managed disks when replicating Azure virtual machines. <br/><br/> Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/>
+**VMware VM/physical disaster recovery to Azure** | Support added for retention points to be available for up to 15 days.<br/><br/>Support added for SUSE Linux Enterprise Server 15 SP3, Red Hat Enterprise Linux 8.4 and Red Hat Enterprise Linux 8.5 <br/><br/>
## Updates (November 2021)
For Site Recovery components, we support N-4 versions, where N is the latest rel
> [!NOTE] > Update rollup only provides updates for the public preview of VMware to Azure protections. No other fixes or improvements have been covered in this release.
-> To setup the preview experience, you will have to perform a fresh setup and use a new Recovery Services vault. Updating from existing architecture to new architecture is unsupported.
+> To set up the preview experience, you'll have to perform a fresh setup and use a new Recovery Services vault. Updating from the existing architecture to the new architecture is unsupported.
-This public preview covers a complete overhaul of the current architecture for pretecting VMware machines.
+This public preview covers a complete overhaul of the current architecture for protecting VMware machines.
- [Learn](/azure/site-recovery/vmware-azure-architecture-preview) about the new architecture and the changes introduced.-- Check the pre-requisites and setup the ASR replication appliance by following [these steps](/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview).
+- Check the prerequisites and set up the Azure Site Recovery replication appliance by following [these steps](/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview).
- [Enable replication](/azure/site-recovery/vmware-azure-set-up-replication-tutorial-preview) for your VMware machines.-- Check out the [automatic upgrade](/azure/site-recovery/upgrade-mobility-service-preview) and [switch](/azure/site-recovery/switch-replication-appliance-preview) capability for ASR replication appliance.
+- Check out the [automatic upgrade](/azure/site-recovery/upgrade-mobility-service-preview) and [switch](/azure/site-recovery/switch-replication-appliance-preview) capability for Azure Site Recovery replication appliance.
## Updates (July 2021)
This public preview covers a complete overhaul of the current architecture for p
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article. **Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article.
-**Azure Site Recovery Service** | Made improvements so that enabling replication and re-protect operations are faster by 46%.
+**Azure Site Recovery Service** | Made improvements so that enabling replication and reprotect operations are faster by 46%.
**Azure Site Recovery Portal** | Replication can now be enabled between any two Azure regions around the world. You are no longer limited to enabling replication within your continent.
This public preview covers a complete overhaul of the current architecture for p
| **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup. **Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup.
-**Azure VM disaster recovery** | Support added for cross-continental disaster recovery of Azure VMs.<br/><br/> REST API support for protection of VMSS Flex.<br/><br/> Now supported for VMs running Oracle Linux 8.2 and 8.3.
+**Azure VM disaster recovery** | Support added for cross-continental disaster recovery of Azure VMs.<br/><br/> REST API support for protection of Virtual Machine Scale Sets Flex.<br/><br/> Now supported for VMs running Oracle Linux 8.2 and 8.3.
**VMware VM/physical disaster recovery to Azure** | Added support for using Ubuntu-20.04 while setting up master target server.<br/><br/> Now supported for VMs running Oracle Linux 8.2 and 8.3.
This public preview covers a complete overhaul of the current architecture for p
| **Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup. **Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup.
-**Azure VM disaster recovery** | Zone to Zone Disaster Recovery using Azure Site Recovery is now GA in 4 more regions – North Europe, East US, Central US, and West US 2.<br/>
+**Azure VM disaster recovery** | Zone to Zone Disaster Recovery using Azure Site Recovery is now GA in four more regions – North Europe, East US, Central US, and West US 2.<br/>
**VMware VM/physical disaster recovery to Azure** | The update includes portal support for selecting Proximity Placements Groups for VMware/Physical machines after enabling replication.<br/><br/> Protecting VMware machines with data disk size up to 32 TB is now supported. **Hyper-V disaster recovery to Azure** | The update includes portal support for selecting Proximity Placements Groups for Hyper-V machines after enabling replication.
Features added this month are summarized in the table.
**Feature** | **Details** |
-**Linux BRTFS file system** | Site Recovery now supports replication of VMware VMs with the BRTFS file system. Replication isn't supported if:<br/><br/>- The BTRFS file system sub-volume is changed after enabling replication.<br/><br/>- The file system is spread over multiple disks.<br/><br/>- The BTRFS file system supports RAID.
+**Linux BTRFS file system** | Site Recovery now supports replication of VMware VMs with the BTRFS file system. Replication isn't supported if:<br/><br/>- The BTRFS file system subvolume is changed after enabling replication.<br/><br/>- The file system is spread over multiple disks.<br/><br/>- The BTRFS file system supports RAID.
**Windows Server 2019** | Support added for machines running Windows Server 2019.
Features added this month are summarized in the table.
| **Linux support** | Support was added for Oracle Linux 6.8, Oracle Linux 6.9 and Oracle Linux 7.0 with the Red Hat Compatible Kernel, and for the Unbreakable Enterprise Kernel (UEK) Release 5. **Linux BTRFS file system** | Supported for Azure VMs.
-**Azure VMs in availability zones** | You can enable replication to another region for Azure VMs deployed in availability zones. You can now enable replication for an Azure VM, and set the target for failover to a single VM instance, a VM in an availability set, or a VM in an availability zone. The setting doesn't impact replication. [Read](https://azure.microsoft.com/blog/disaster-recovery-of-zone-pinned-azure-virtual-machines-to-another-region/) the announcement.
+**Azure VMs in availability zones** | You can enable replication to another region for Azure VMs deployed in availability zones. You can now enable replication for an Azure VM, and set the target for failover to a single VM instance, a VM in an availability set, or a VM in an availability zone. The setting doesn't affect replication. [Read](https://azure.microsoft.com/blog/disaster-recovery-of-zone-pinned-azure-virtual-machines-to-another-region/) the announcement.
**Firewall-enabled storage (portal/PowerShell)** | Support added for [firewall-enabled storage accounts](../storage/common/storage-network-security.md).<br/><br/> You can replicate Azure VMs with unmanaged disks on firewall-enabled storage accounts to another Azure region for disaster recovery.<br/><br/> You can use firewall-enabled storage accounts as target storage accounts for unmanaged disks.<br/><br/> Supported in portal and using PowerShell. ## Updates (December 2018)
spring-apps How To Configure Health Probes Graceful Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-health-probes-graceful-termination.md
This section provides answers to frequently asked questions about using health p
"startupProbe": null, "livenessProbe": { "disableProbe": false,
- "failureThreshold": 24,
- "initialDelaySeconds": 60,
+ "failureThreshold": 3,
+ "initialDelaySeconds": 300,
"periodSeconds": 10, "probeAction": { "type": "TCPSocketAction" }, "successThreshold": 1,
- "timeoutSeconds": 1
+ "timeoutSeconds": 3
}, "readinessProbe": { "disableProbe": false, "failureThreshold": 3, "initialDelaySeconds": 0,
- "periodSeconds": 10,
+ "periodSeconds": 5,
"probeAction": { "type": "TCPSocketAction" }, "successThreshold": 1,
- "timeoutSeconds": 1
+ "timeoutSeconds": 3
} ```
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
For more information on pricing differences between standard-priority and high-p
## Copy an archived blob to an online tier
-The first option for moving a blob from the Archive tier to an online tier is to copy the archived blob to a new destination blob that is in either the Hot or Cool tier. You can use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy the blob. When you copy an archived blob to a new blob an online tier, the source blob remains unmodified in the Archive tier.
+The first option for moving a blob from the Archive tier to an online tier is to copy the archived blob to a new destination blob that is in either the Hot or Cool tier. You can use the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy the blob. When you copy an archived blob to a new blob in an online tier, the source blob remains unmodified in the Archive tier.
You must copy the archived blob to a new blob with a different name or to a different container. You can't overwrite the source blob by copying to the same blob.
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
+
+ Title: Use blob access tiers with JavaScript
+
+description: Learn how to add or change a blob's access tier in your Azure Storage account using the JavaScript client library.
++++++ Last updated : 02/22/2023+
+ms.devlang: javascript
+++
+# Using access tiers
+
+This article shows how to use [access tiers](access-tiers-overview.md) for block blobs with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob).
+
+## Understand block blob access tiers
+
+Data stored in the cloud grows at an exponential pace. To manage costs for your expanding storage needs, it can be helpful to organize your data based on how frequently it will be accessed and how long it will be retained. Azure storage offers different access tiers so that you can store your blob data in the most cost-effective manner based on how it's being used. Azure Storage access tiers include:
+
+- [**Online tiers**](access-tiers-overview.md#online-access-tiers)
+ - **Hot tier** - An online tier optimized for storing data that is accessed or modified frequently. The hot tier has the highest storage costs, but the lowest access costs.
+ - **Cool tier** - An online tier optimized for storing data that is infrequently accessed or modified. Data in the cool tier should be stored for a minimum of 30 days. The cool tier has lower storage costs and higher access costs compared to the hot tier.
+- [**Archive tier**](access-tiers-overview.md#archive-access-tier) - An offline tier optimized for storing data that is rarely accessed, and that has flexible latency requirements, on the order of hours. Data in the archive tier should be stored for a minimum of 180 days.
+
+## Restrictions
+
+Setting the access tier is only allowed on block blobs. To learn more about restrictions on setting a block blob's access tier, see [Set Blob Tier (REST API)](/rest/api/storageservices/set-blob-tier#remarks).
+
+## Set a blob's access tier during upload
+
+To [upload](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-upload) a blob into a specific access tier, set the `tier` property in [BlockBlobUploadOptions](/javascript/api/@azure/storage-blob/blockblobuploadoptions). The `tier` property choices are `Hot`, `Cool`, or `Archive`.
++
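+The following minimal sketch shows one way to set the tier at upload time. The account URL, container name, blob name, content, and the use of `DefaultAzureCredential` from `@azure/identity` are placeholder assumptions for illustration; substitute your own values and authentication.
+
+```javascript
+const { BlobServiceClient } = require("@azure/storage-blob");
+const { DefaultAzureCredential } = require("@azure/identity");
+
+async function uploadToCoolTier() {
+  // Placeholder account URL, container, and blob names.
+  const serviceClient = new BlobServiceClient(
+    "https://<storage-account-name>.blob.core.windows.net",
+    new DefaultAzureCredential()
+  );
+  const containerClient = serviceClient.getContainerClient("sample-container");
+  const blockBlobClient = containerClient.getBlockBlobClient("sample-blob.txt");
+
+  const content = "Hello, access tiers";
+  // BlockBlobUploadOptions.tier sets the access tier as part of the upload.
+  await blockBlobClient.upload(content, Buffer.byteLength(content), {
+    tier: "Cool",
+  });
+}
+```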
+## Change a blob's access tier after upload
+
+To change the access tier of a blob after it's uploaded to storage, use [setAccessTier](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-setaccesstier). Along with the tier, you can set the [rehydration priority](archive-rehydrate-overview.md) property of [BlobSetTierOptions](/javascript/api/@azure/storage-blob/blobsettieroptions) to bring the block blob out of an archived state. Possible values are `High` or `Standard`.
++
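+As a minimal sketch, assuming `blockBlobClient` is an existing `BlockBlobClient` (as in the previous example), the call looks like this:
+
+```javascript
+async function rehydrateToHot(blockBlobClient) {
+  // Move the blob to the Hot tier. For a blob in the Archive tier, this starts
+  // rehydration; rehydratePriority can be "High" or "Standard".
+  await blockBlobClient.setAccessTier("Hot", { rehydratePriority: "Standard" });
+}
+```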
+## Copy a blob into a different access tier
+
+Use the BlobClient.[beginCopyFromURL](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-begincopyfromurl) method to copy a blob. To change the access tier during the copy operation, use the [BlobBeginCopyFromURLOptions](/javascript/api/@azure/storage-blob/blobbegincopyfromurloptions) `tier` property and specify a different access [tier](storage-blob-storage-tiers.md) than the source blob.
++
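+A minimal sketch, assuming `sourceBlobClient` and `destinationBlobClient` are existing `BlobClient` instances and the caller is authorized to read the source blob:
+
+```javascript
+async function copyToHotTier(sourceBlobClient, destinationBlobClient) {
+  // BlobBeginCopyFromURLOptions.tier sets the access tier of the destination blob.
+  const poller = await destinationBlobClient.beginCopyFromURL(sourceBlobClient.url, {
+    tier: "Hot",
+  });
+  // Wait for the copy operation to finish before using the destination blob.
+  await poller.pollUntilDone();
+}
+```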
+## Use a batch to change access tier for many blobs
+
+The batch represents an aggregated set of operations on blobs, such as [delete](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-deleteblobs-1) or [set access tier](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-setblobsaccesstier-1). You need to pass in the correct credential to successfully perform each operation. In this example, the same credential is used for a set of blobs in the same container.
+
+Create a [BlobBatchClient](/javascript/api/@azure/storage-blob/blobbatchclient). Use the client to create a batch with the [createBatch()](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-createbatch) method. When the batch is ready, [submit](/javascript/api/@azure/storage-blob/blobbatchclient#@azure-storage-blob-blobbatchclient-submitbatch) the batch for processing. Use the returned structure to validate that each blob's operation was successful.
+
+
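+The following sketch outlines the batch flow, assuming the blobs already exist in one container and the credential has permission to set tiers (the account URL, container, and blob names are placeholders):
+
+```javascript
+const { BlobServiceClient } = require("@azure/storage-blob");
+const { DefaultAzureCredential } = require("@azure/identity");
+
+async function setTierForManyBlobs() {
+  const serviceClient = new BlobServiceClient(
+    "https://<storage-account-name>.blob.core.windows.net",
+    new DefaultAzureCredential()
+  );
+  const containerClient = serviceClient.getContainerClient("sample-container");
+
+  // Build a batch that sets the Cool tier on several blobs in the same container.
+  const batchClient = serviceClient.getBlobBatchClient();
+  const batch = batchClient.createBatch();
+  for (const blobName of ["log-1.txt", "log-2.txt", "log-3.txt"]) {
+    await batch.setBlobAccessTier(containerClient.getBlobClient(blobName), "Cool");
+  }
+
+  // Submit the batch and check how many sub-requests succeeded.
+  const response = await batchClient.submitBatch(batch);
+  console.log(
+    `Succeeded: ${response.subResponsesSucceededCount}, failed: ${response.subResponsesFailedCount}`
+  );
+}
+```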
+## Code samples
+
+* [Set blob's access tier during upload](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-string-with-access-tier.js)
+* [Change blob's access tier after upload](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/change-blob-access-tier.js)
+* [Copy blob into different access tier](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/copy-blob-to-different-access-tier.js)
+* [Use a batch to change access tier for many blobs](https://github.com/Azure-Samples/AzureStorageSnippets/tree/master/blobs/howto/JavaScript/NodeJS-v12/dev-guide/batch-set-access-tier.js)
+
+## Next steps
+
+- [Access tiers best practices](access-tiers-best-practices.md)
+- [Blob rehydration from the Archive tier](archive-rehydrate-overview.md)
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
AzCopy uses [server-to-server](/rest/api/storageservices/put-block-from-url) [AP
See the [Get started with AzCopy](storage-use-azcopy-v10.md) article to download AzCopy and learn about the ways that you can provide authorization credentials to the storage service. > [!NOTE]
-> The examples in this article assume that you've provided authorization credentials by using Azure Active Directory (Azure AD) and that your Azure AD identity has the proper role assignments for the destination account. The source account, if different from the destination, must use a SAS token with the proper read permissions or allow public access. For example: azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>'.
+> The examples in this article assume that you've provided authorization credentials by using Azure Active Directory (Azure AD) and that your Azure AD identity has the proper role assignments for both source and destination accounts.
>
-> Alternatively you can also append a SAS token to the destination URL in each AzCopy command. For example: azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>'.
+> Alternatively you can append a SAS token to either the source or destination URL in each AzCopy command. For example: `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>'`.
## Guidelines
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
description: Learn how to connect to an Azure Elastic SAN (preview) volume from
Previously updated : 02/17/2023 Last updated : 02/22/2023
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
description: Learn how to connect to an Azure Elastic SAN (preview) volume from
Previously updated : 02/17/2023 Last updated : 02/22/2023
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
description: Learn how to deploy an Azure Elastic SAN (preview) with the Azure p
Previously updated : 11/07/2022 Last updated : 02/22/2023
storage Elastic San Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md
description: Learn how to delete an Azure Elastic SAN (preview) with the Azure p
Previously updated : 10/12/2022 Last updated : 02/22/2023
storage Elastic San Expand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-expand.md
description: Learn how to increase the size of an Azure Elastic SAN (preview) an
Previously updated : 10/12/2022 Last updated : 02/22/2023
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
description: An overview of Azure Elastic SAN (preview), a service that enables
Previously updated : 11/17/2022 Last updated : 02/22/2023
The status of items in this table may change over time.
## Next steps
+For a video introduction to Azure Elastic SAN, see [Accelerate your SAN migration to the cloud](/shows/inside-azure-for-it/accelerate-your-san-migration-to-the-cloud).
+ [Plan for deploying an Elastic SAN (preview)](elastic-san-planning.md)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
description: An overview of Azure Elastic SAN (preview), a service that enables
Previously updated : 10/27/2022 Last updated : 02/22/2023
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
description: Understand planning for an Azure Elastic SAN deployment. Learn abou
Previously updated : 11/17/2022 Last updated : 02/22/2023
The following iSCSI features aren't currently supported:
## Next steps
+For a video that goes over the general planning and deployment with a few example scenarios, see [Getting started with Azure Elastic SAN](/shows/inside-azure-for-it/getting-started-with-azure-elastic-san).
+ [Deploy an Elastic SAN (preview)](elastic-san-create.md)
stream-analytics No Code Power Bi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-power-bi-tutorial.md
Title: Build real-time dashboard with Azure Synapse Analytics and Power BI
+ Title: Build real-time dashboard with Azure Stream Analytics no-code editor, Synapse Analytics and Power BI
description: Use no code editor to compute aggregations and write to Azure Synapse Analytics and build real-time dashboards using Power BI Previously updated : 02/17/2023 Last updated : 02/23/2023
synapse-analytics Apache Spark 33 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-33-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.3
-description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.3.
+ Title: Azure Synapse Runtime for Apache Spark 3.3
+description: New runtime is GA and ready for production workloads. Spark 3.3.1, Python 3.10, Delta Lake 2.2.
Last updated 11/17/2022 -+
-# Azure Synapse Runtime for Apache Spark 3.3 (Preview)
+# Azure Synapse Runtime for Apache Spark 3.3 (GA)
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.3. > [!IMPORTANT]
-> * Azure Synapse Runtime for Apache Spark 3.3 is currently in Public Preview.
-> * We are actively rolling out the final changes to all production regions with the goal of ensuring a seamless implementation. As we monitor the stability of these updates, we tentatively anticipate a general availability date of February 23rd. Please note that this is subject to change and we will provide updates as they become available.
+> * Azure Synapse Runtime for Apache Spark 3.3 has been in public preview since Nov 2022. As of Feb 23, 2023, after notable improvements in performance and stability, Azure Synapse Runtime for Apache Spark 3.3 is generally available and ready for production workloads.
## Component versions
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
| R (Preview) | 4.2.2 | >[!NOTE]
-> * The [.NET for Apache Spark](https://github.com/dotnet/spark) is an open-source project under the .NET Foundation that currently requires the .NET 3.1 library, which has reached the out-of-support status. We would like to inform users of Azure Synapse Spark of the removal of the .NET for Apache Spark library in the Azure Synapse Runtime for Apache Spark version 3.3. Users may refer to the [.NET Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) for more details on this matter. As a result, it will no longer be possible for users to utilize Apache Spark APIs via C# and F#, or execute C# code in notebooks within Synapse or through Apache Spark Job definitions in Synapse. It is important to note that this change affects only Azure Synapse Runtime for Apache Spark 3.3 and above. We will continue to support .NET for Apache Spark in all previous versions of the Azure Synapse Runtime according to [their lifecycle stages](/runtime-for-apache-spark-lifecycle-and-supportability.md). However, we do not have plans to support .NET for Apache Spark in Azure Synapse Runtime for Apache Spark 3.3 and future versions. We recommend that users with existing workloads written in C# or F# migrate to Python or Scala. Users are advised to take note of this information and plan accordingly.
+> * The [.NET for Apache Spark](https://github.com/dotnet/spark) is an open-source project under the .NET Foundation that currently requires the .NET 3.1 library, which has reached the out-of-support status. We would like to inform users of Azure Synapse Spark of the removal of the .NET for Apache Spark library in the Azure Synapse Runtime for Apache Spark version 3.3. Users may refer to the [.NET Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) for more details on this matter. As a result, it will no longer be possible for users to utilize Apache Spark APIs via C# and F#, or execute C# code in notebooks within Synapse or through Apache Spark Job definitions in Synapse. It is important to note that this change affects only Azure Synapse Runtime for Apache Spark 3.3 and above. We will continue to support .NET for Apache Spark in all previous versions of the Azure Synapse Runtime according to [their lifecycle stages](runtime-for-apache-spark-lifecycle-and-supportability.md). However, we do not have plans to support .NET for Apache Spark in Azure Synapse Runtime for Apache Spark 3.3 and future versions. We recommend that users with existing workloads written in C# or F# migrate to Python or Scala. Users are advised to take note of this information and plan accordingly.
## Libraries
synapse-analytics Apache Spark Azure Portal Add Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
description: Learn how to add and manage libraries used by Apache Spark in Azure
Previously updated : 11/03/2022 Last updated : 02/20/2023
To make third-party or locally built code available to your applications, instal
## Overview of package levels
-There are three levels of packages installed on Azure Synapse Analytics:
+There are three levels of packages installed on Azure Synapse Analytics:
-- **Default**: Default packages include a full Anaconda installation, plus extra commonly used libraries. For a full list of libraries, see [Apache Spark version support](apache-spark-version-support.md).
+- **Default**: Default packages include a full Anaconda installation, plus extra commonly used libraries. For a full list of libraries, see [Apache Spark version support](apache-spark-version-support.md).
- When a Spark instance starts, these libraries are included automatically. You can add more packages at the other levels.
-- **Spark pool**: All running artifacts can use packages at the Spark pool level. For example, you can attach notebook and Spark job definitions to corresponding Spark pools.
+ When a Spark instance starts, these libraries are included automatically. You can add more packages at the other levels.
+- **Spark pool**: All running artifacts can use packages at the Spark pool level. For example, you can attach notebook and Spark job definitions to corresponding Spark pools.
You can upload custom libraries and a specific version of an open-source library that you want to use in your Azure Synapse Analytics workspace. The workspace packages can be installed in your Spark pools. - **Session**: A session-level installation creates an environment for a specific notebook session. The change of session-level libraries isn't persisted between sessions. > [!NOTE]
-> Pool-level library management can take time, depending on the size of the packages and the complexity of required dependencies. We recommend the session-level installation for experimental and quick iterative scenarios.
-
+>
+> - Pool-level library management can take time, depending on the size of the packages and the complexity of required dependencies. We recommend the session-level installation for experimental and quick iterative scenarios.
+> - Pool-level library management produces stable dependencies for running your notebooks and Spark job definitions. Installing libraries to your Spark pool is highly recommended for pipeline runs.
+> - Session-level library management helps with fast iteration and frequent library changes. However, the stability of session-level installations isn't guaranteed. Also, in-line commands like %pip and %conda are disabled in pipeline runs. Managing libraries in the notebook session is recommended during the development phase.
+ ## Manage workspace packages When your team develops custom applications or models, you might develop various code artifacts like *.whl*, *.jar*, or *tar.gz* files to package your code.
In some cases, you might want to standardize the packages that are used on an Ap
By using the pool management capabilities of Azure Synapse Analytics, you can configure the default set of libraries to install on a serverless Apache Spark pool. These libraries are installed on top of the [base runtime](./apache-spark-version-support.md).
-Currently, pool management is supported only for Python. For Python, Azure Synapse Spark pools use Conda to install and manage Python package dependencies.
-
-When you're specifying pool-level libraries, you can now provide a *requirements.txt* or *environment.yml* file. This environment configuration file is used every time a Spark instance is created from that Spark pool.
+For Python libraries, Azure Synapse Spark pools use Conda to install and manage Python package dependencies. You can specify the pool-level Python libraries by providing a *requirements.txt* or *environment.yml* file. This environment configuration file is used every time a Spark instance is created from that Spark pool. You can also attach the workspace packages to your pools.
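+As a minimal illustration, a *requirements.txt* lists pinned packages in standard pip format; the package names and versions below are arbitrary examples, not recommendations:
+
+```text
+seaborn==0.11.2
+beautifulsoup4==4.11.1
+```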
To learn more about these capabilities, see [Manage Spark pool packages](./apache-spark-manage-pool-packages.md). > [!IMPORTANT]
+>
> - If the package that you're installing is large or takes a long time to install, it might affect the Spark instance's startup time. > - Altering the PySpark, Python, Scala/Java, .NET, or Spark version is not supported.
-## Manage dependencies for DEP-enabled Azure Synapse Spark pools
+### Manage dependencies for DEP-enabled Azure Synapse Spark pools
> [!NOTE] > Installing packages from a public repo is not supported within [DEP-enabled workspaces](../security/workspace-data-exfiltration-protection.md). Instead, upload all your dependencies as workspace libraries and install them to your Spark pool.
If you're having trouble identifying required dependencies, follow these steps:
source activate synapse-env ```
-1. Run the following script to identify the required dependencies.
+2. Run the following script to identify the required dependencies.
The script can be used to pass your *requirements.txt* file, which has all the packages and versions that you intend to install in the Spark 3.1 or Spark 3.2 pool. It will print the names of the *new* wheel files/dependencies for your input library requirements. ```python
The script can be used to pass your *requirements.txt* file, which has all the p
pip install -r <input-user-req.txt> > pip_output.txt cat pip_output.txt | grep "Using cached *" ```+ > [!NOTE] > This script will list only the dependencies that are not already present in the Spark pool by default.
Session-scoped packages allow users to define package dependencies at the start
To learn more about how to manage session-scoped packages, see the following articles: -- [Python session packages](./apache-spark-manage-session-packages.md#session-scoped-python-packages): At the start of a session, provide a Conda *environment.yml* file to install more Python packages from popular repositories.
+- [Python session packages](./apache-spark-manage-session-packages.md#session-scoped-python-packages): At the start of a session, provide a Conda *environment.yml* file to install more Python packages from popular repositories. Or you can use %pip and %conda commands to manage libraries in notebook code cells.
- [Scala/Java session packages](./apache-spark-manage-session-packages.md#session-scoped-java-or-scala-packages): At the start of your session, provide a list of *.jar* files to install by using `%%configure`.
-## Manage your packages outside the Azure Synapse Analytics UI
+
+## Automate the library management process through Azure PowerShell cmdlets and REST APIs
If your team wants to manage libraries without visiting the package management UIs, you have the option to manage the workspace packages and pool-level package updates through Azure PowerShell cmdlets or REST APIs for Azure Synapse Analytics.
synapse-analytics Apache Spark Manage Packages Outside UI https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-manage-packages-outside-UI.md
description: Learn how to manage packages using Azure PowerShell cmdlets or REST
Previously updated : 07/07/2022 Last updated : 02/23/2023
-# Manage packages outside Synapse Analytics Studio UIs
+# Automate the library management process through Azure PowerShell cmdlets and REST APIs
You may want to manage your libraries for your serverless Apache Spark pools without going into the Synapse Analytics UI pages. For example, you may find that:
In this article, we'll provide a general guide to help you managing libraries th
## Manage packages through Azure PowerShell cmdlets ### Add new libraries+ 1. [New-AzSynapseWorkspacePackage](/powershell/module/az.synapse/new-azsynapseworkspacepackage) command can be used to **upload new libraries to workspace**. ```powershell
In this article, we'll provide a general guide to help you managing libraries th
``` ### Remove libraries+ 1. In order to **remove an installed package** from your Spark pool, please refer to the command combination of [Get-AzSynapseWorkspacePackage](/powershell/module/az.synapse/get-azsynapseworkspacepackage) and [Update-AzSynapseSparkPool](/powershell/module/az.synapse/update-azsynapsesparkpool). ```powershell
In this article, we'll provide a general guide to help you managing libraries th
Update-AzSynapseSparkPool -WorkspaceName ContosoWorkspace -Name ContosoSparkPool -PackageAction Remove -Package $package ```
-2. You can also retrieve a Spark pool and **remove all attached workspace libraries** from the pool by calling [Get-AzSynapseSparkPool](/powershell/module/az.synapse/get-azsynapsesparkpool) and [Update-AzSynapseSparkPool](/powershell/module/az.synapse/update-azsynapsesparkpool) commands.
+2. You can also retrieve a Spark pool and **remove all attached workspace libraries** from the pool by calling [Get-AzSynapseSparkPool](/powershell/module/az.synapse/get-azsynapsesparkpool) and [Update-AzSynapseSparkPool](/powershell/module/az.synapse/update-azsynapsesparkpool) commands.
+ ```powershell $pool = Get-AzSynapseSparkPool -ResourceGroupName ContosoResourceGroup -WorkspaceName ContosoWorkspace -Name ContosoSparkPool $pool | Update-AzSynapseSparkPool -PackageAction Remove -Package $pool.WorkspacePackages
In this article, we'll provide a general guide to help you managing libraries th
For more Azure PowerShell cmdlets capabilities, please refer to [Azure PowerShell cmdlets for Azure Synapse Analytics](/powershell/module/az.synapse). - ## Manage packages through REST APIs ### Manage the workspace packages
-With the ability of REST APIs, you can add/delete packages or list all uploaded files of your workspace. See the full supported APIs, please refer to [Overview of workspace library APIs](/rest/api/synapse/data-plane/library).
+With REST APIs, you can add or delete packages, or list all uploaded files in your workspace. For the full list of supported APIs, see [Overview of workspace library APIs](/rest/api/synapse/data-plane/library).
### Manage the Spark pool packages+ You can leverage the [Spark pool REST API](/rest/api/synapse/big-data-pools/create-or-update) to attach or remove your custom or open source libraries to your Spark pools. 1. For custom libraries, please specify the list of custom files as the **customLibraries** property in request body.
You can leverage the [Spark pool REST API](/rest/api/synapse/big-data-pools/crea
``` ## Next steps+ - View the default libraries: [Apache Spark version support](apache-spark-version-support.md) - Manage Spark pool level packages through Synapse Studio portal: [Python package management on Notebook Session](./apache-spark-manage-session-packages.md#session-scoped-python-packages)
synapse-analytics Apache Spark Manage Session Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-manage-session-packages.md
description: Learn how to add and manage libraries on Spark Notebook sessions fo
Previously updated : 07/07/2020 Last updated : 02/20/2023
# Manage session-scoped packages
-In addition to pool level packages, you can also specify session-scoped libraries at the beginning of a notebook session. Session-scoped libraries let you specify and use custom Python environments, jar, and R packages within a notebook session.
+In addition to pool level packages, you can also specify session-scoped libraries at the beginning of a notebook session. Session-scoped libraries let you specify and use Python, jar, and R packages within a notebook session.
-When using session-scoped libraries, it is important to keep the following points in mind:
+When using session-scoped libraries, it's important to keep the following points in mind:
- - When you install session-scoped libraries, only the current notebook has access to the specified libraries.
- - These libraries will not impact other sessions or jobs using the same Spark pool.
- - These libraries are installed on top of the base runtime and pool level libraries.
- - Notebook libraries will take the highest precedence.
+- When you install session-scoped libraries, only the current notebook has access to the specified libraries.
+- These libraries have no impact on other sessions or jobs using the same Spark pool.
+- These libraries install on top of the base runtime and pool level libraries, and take the highest precedence.
+- Session-scoped libraries don't persist across sessions.
## Session-scoped Python packages
+### Manage session-scoped Python packages through *environment.yml* file
+ To specify session-scoped Python packages:
-1. Navigate to the selected Spark pool and ensure that you have enabled session-level libraries. You can enable this setting by navigating to the **Manage** > **Apache Spark pool** > **Packages** tab.
+1. Navigate to the selected Spark pool and ensure that you have enabled session-level libraries. You can enable this setting by navigating to the **Manage** > **Apache Spark pool** > **Packages** tab.
:::image type="content" source="./media/apache-spark-azure-portal-add-libraries/enable-session-packages.png" alt-text="Screenshot of enabling session packages." lightbox="./media/apache-spark-azure-portal-add-libraries/enable-session-packages.png":::
-2. Once the setting has been applied, you can open a notebook and select **Configure Session**> **Packages**.
+2. After the setting is applied, you can open a notebook and select **Configure Session** > **Packages**.
![Screenshot of specifying session packages.](./media/apache-spark-azure-portal-add-libraries/update-session-notebook.png "Update session configuration") ![Screenshot of uploading Yml file.](./media/apache-spark-azure-portal-add-libraries/upload-session-notebook-yml.png)
-3. Here, you can upload a Conda *environment.yml* file to install or upgrade packages within a session. Once you start your session, the specified libraries will be installed. Once your session ends, these libraries will no longer be available as they are specific to your session.
+3. Here, you can upload a Conda *environment.yml* file to install or upgrade packages within a session. The specified libraries are available once the session starts, and are no longer available after the session ends.
+
+### Manage session-scoped Python packages through *%pip* and *%conda* commands
+
+You can use the popular *%pip* and *%conda* commands to install additional third party libraries or your custom libraries during your Apache Spark notebook session. In this section, we use *%pip* commands to demonstrate several common scenarios.
+
+### Install a third party package
+
+You can easily install a Python library from [PyPI](https://pypi.org/).
+
+```python
+# Install vega_datasets
+%pip install altair vega_datasets
+```
+
+To verify the installation result, you can run the following code to visualize vega_datasets:
+
+```python
+# Create a scatter plot
+# Plot Miles per gallon against the horsepower across different region
+
+import altair as alt
+from vega_datasets import data
+
+cars = data.cars()
+alt.Chart(cars).mark_point().encode(
+ x='Horsepower',
+ y='Miles_per_Gallon',
+ color='Origin',
+).interactive()
+```
+
+### Install a wheel package from storage account
+
+To install a library from a storage account, you need to mount the storage account by running the following commands.
+
+```python
+from notebookutils import mssparkutils
+
+mssparkutils.fs.mount(
+ "abfss://<<file system>>@<<storage account>.dfs.core.windows.net",
+ "/<<path to wheel file>>",
+ {"linkedService":"<<storage name>>"}
+)
+```
+
+Then, you can use the *%pip install* command to install the required wheel package:
+
+```python
+%pip install /<<path to wheel file>>/<<wheel package name>>.whl
+```
+
+### Install another version of a built-in library
-### Verify installed libraries
+You can use the following command to see the built-in version of a certain package. We use *pandas* as an example:
-To verify if the correct versions of the correct libraries are installed from PyPI, run the following code:
+```python
+%pip show pandas
+```
+
+The result is the following log:
+
+```markdown
+Name: pandas
+Version: **1.2.3**
+Summary: Powerful data structures for data analysis, time series, and statistics
+Home-page: https://pandas.pydata.org
+... ...
+```
+
+You can use the following command to switch *pandas* to another version, for example *1.2.4*:
+
+```python
+%pip install pandas==1.2.4
+```
+
+### Uninstall a session-scoped library
+
+If you want to uninstall a package that was installed in this notebook session, you can refer to the following commands. However, you can't uninstall the built-in packages.
```python
-import pkg_resources
-for d in pkg_resources.working_set:
- print(d)
+%pip uninstall altair vega_datasets --yes
```
-In some cases, to view the package versions from Conda, you may need to inspect the package version individually.
+### Using the *%pip* command to install libraries from a *requirements.txt* file
+
+```python
+%pip install -r /<<path to requirement file>>/requirements.txt
+```
+
+> [!NOTE]
+>
+> - We recommend that you put the *%pip* and *%conda* commands at the beginning of your notebook if you want to install new libraries. The Python interpreter restarts after the session-level libraries are managed, so that the changes take effect.
+> - You can refer to the [%pip commands](https://pip.pypa.io/en/stable/cli/) and [%conda commands](https://docs.conda.io/projects/conda/en/latest/commands.html) documentation for the full list of available commands.
## Session-scoped Java or Scala packages
To specify session-scoped Java or Scala packages, you can use the ```%%configure
} ```
-We recommend you to run the %%configure at the beginning of your notebook. You can refer to this [document](https://github.com/cloudera/livy#request-body) for the full list of valid parameters.
+> [!NOTE]
+>
+> - We recommend that you run %%configure at the beginning of your notebook. You can refer to this [document](https://github.com/cloudera/livy#request-body) for the full list of valid parameters.
## Session-scoped R packages (Preview)
-Azure Synapse Analytics pools include many popular R libraries out-of-the-box. You can also install additional 3rd party libraries during your Apache Spark notebook session.
+Azure Synapse Analytics pools include many popular R libraries out-of-the-box. You can also install extra third party libraries during your Apache Spark notebook session.
-Session-scoped R libraries allow you to customize the R environment for a specific notebook session. When you install an R session-scoped library, only the notebook associated with that notebook session will have access to the newly installed libraries. Other notebooks or sessions using the same Spark pool definition will not be impacted. In addition, session-scoped R libraries do not persist across sessions. These libraries will be installed at the start of each session when the installation commands are executed. Last, session-scoped R libraries are automatically installed across both the driver and worker nodes.
+> [!NOTE]
+>
+> - These commands for managing R libraries are disabled when running pipeline jobs. If you want to install a package within a pipeline, you must use the library management capabilities at the pool level.
+> - Session-scoped R libraries are automatically installed across both the driver and worker nodes.
### Install a package
You can easily install an R library from [CRAN](https://cran.r-project.org/).
install.packages(c("nycflights13", "Lahman")) ```
-You can also leverage CRAN snapshots as the repository to ensure that the same package version is downloaded each time.
+You can also use CRAN snapshots as the repository to ensure that the same package version is downloaded each time.
```r install.packages("highcharter", repos = "https://cran.microsoft.com/snapshot/2021-07-16/") ```
-> [!NOTE]
-> These functions will be disabled when running pipeline jobs. If you want to install a package within a pipeline, you must leverage the library management capabilities at the pool level.
- ### Using devtools to install packages The ```devtools``` library simplifies package development to expedite common tasks. This library is installed within the default Azure Synapse Analytics runtime.
Currently, the following ```devtools``` functions are supported within Azure Syn
| install_local() | Installs from a local file on disk | | install_version() | Installs from a specific version on CRAN |
-> [!NOTE]
-> These functions will be disabled when running pipeline jobs. If you want to install a package within a pipeline, you must leverage the library management capabilities at the pool level.
- ### View installed libraries You can query all the libraries installed within your session using the ```library``` command.
packageVersion("caesar")
### Remove an R package from a session
-You can use the ```detach``` function to remove a library from the namespace. These libraries will stay on disk until they are loaded again.
+You can use the ```detach``` function to remove a library from the namespace. These libraries stay on disk until they're loaded again.
```r # detach a library
You can use the ```detach``` function to remove a library from the namespace. Th
detach("package: caesar") ```
-To remove a session-scoped package from a notebook, use the ```remove.packages()``` command. This will not impact other sessions on the same cluster. Users cannot uninstall or remove libraries that are installed as part of the default Azure Synapse Analytics runtime.
+To remove a session-scoped package from a notebook, use the ```remove.packages()``` command. This library change has no impact on other sessions on the same cluster. Users can't uninstall or remove built-in libraries of the default Azure Synapse Analytics runtime.
```r remove.packages("caesar") ``` > [!NOTE]
-> You cannot remove core packages like SparkR, SparklyR, or R.
+> You can't remove core packages like SparkR, SparklyR, or R.
### Session-scoped R libraries and SparkR
spark.lapply(docs, str_length_function)
### Session-scoped R libraries and SparklyR
-With spark_apply() in SparklyR, you can use any R package inside Spark. By default, in sparklyr::spark_apply(), the packages argument is set to FALSE. This copies libraries in the current libPaths to the workers, allowing you to import and use them on workers. For example, you can run the following to generate a caesar-encrypted message with sparklyr::spark_apply():
+With spark_apply() in SparklyR, you can use any R package inside Spark. By default, in sparklyr::spark_apply(), the packages argument is set to FALSE. This setting copies libraries in the current libPaths to the workers, allowing you to import and use them on workers. For example, you can run the following to generate a caesar-encrypted message with sparklyr::spark_apply():
```r install.packages("caesar", repos = "https://cran.microsoft.com/snapshot/2021-07-16/")
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
The following table lists the runtime name, Apache Spark version, and release da
| Runtime name | Release date | Release stage | End of life announcement date | End of life effective date | |-|-|-|-|-|
-| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | Public Preview | - | - |
+| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | - | - |
| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | GA | July 8, 2023 | July 8, 2024 | | [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 | | [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life Announced (EOLA)__ | __July 29, 2022__ | __September 29, 2023__ |
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 01/09/2023 Last updated : 02/23/2023
New versions of the Azure Virtual Desktop Agent are installed automatically. Whe
| Release | Latest version | |||
-| Generally available | 1.0.5739.9000/1.0.5739.9800 |
+| Generally available | 1.0.6028.2200 |
| In-flight | N/A |
+## Version 1.0.6028.2200
+
+This update was released in February 2023 and includes the following changes:
+
+- Domain Trust health check is now enabled. When virtual machines (VMs) fail the Domain Trust health check, they're now given the "Unavailable" status.
+- General improvements and bug fixes.
+ ## Version 1.0.5739.9000/1.0.5739.9800 >[!NOTE]
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 managed disk
description: Learn how to deploy a Premium SSD v2. Previously updated : 02/08/2023 Last updated : 02/22/2023
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
description: Learn about incremental snapshots for managed disks, including how
Previously updated : 01/25/2023 Last updated : 02/22/2023
New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName
``` > [!IMPORTANT]
-> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
+> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
You can identify incremental snapshots from the same disk with the `SourceResourceId` and the `SourceUniqueId` properties of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. `SourceUniqueId` is the value inherited from the `UniqueId` property of the disk. If you delete a disk and then create a new disk with the same name, the value of the `UniqueId` property changes.
$incrementalSnapshots
[!INCLUDE [virtual-machines-disks-incremental-snapshots-portal](../../includes/virtual-machines-disks-incremental-snapshots-portal.md)] > [!IMPORTANT]
-> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
+> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
# [Resource Manager Template](#tab/azure-resource-manager)
You can also use Azure Resource Manager templates to create an incremental snaps
} ``` > [!IMPORTANT]
-> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
+> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details.
## Check status of snapshots or disks
-Incremental snapshots of Ultra Disks (preview) can't be used to create new disks until the background process copying the data into the snapshot has completed. Similarly, Ultra Disks created from incremental snapshots can't be attached to a VM until the background process copying the data into the disk has completed.
+Incremental snapshots of Premium SSD v2 or Ultra Disks (preview) can't be used to create new disks until the background process copying the data into the snapshot has completed. Similarly, Premium SSD v2 or Ultra Disks created from incremental snapshots can't be attached to a VM until the background process copying the data into the disk has completed.
You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to check the status of the background copy from a disk to a snapshot and you can use the [Check disk creation status](#check-disk-creation-status) section to check the status of a background copy from a snapshot to a disk.
$targetSnapshot.CompletionPercent
### Check disk creation status
-When creating a disk from an Ultra Disk snapshot, you must wait for the background copy process to complete before you can attach it. Currently, you must use the Azure CLI to check the progress of the copy process.
+When creating a disk from either a Premium SSD v2 or an Ultra Disk snapshot, you must wait for the background copy process to complete before you can attach it. Currently, you must use the Azure CLI to check the progress of the copy process.
The following script gives you the status of an individual disk's copy process. The value of `completionPercent` must be 100 before the disk can be attached.
az disk show -n $diskName -g $resourceGroupName --query [completionPercent] -o t
## Check sector size
-Snapshots with a 4096 logical sector size can only be used to create Ultra Disks. They can't be used to create other disk types. Snapshots of disks with 4096 logical sector size are stored as VHDX, whereas snapshots of disks with 512 logical sector size are stored as VHD. Snapshots inherit the logical sector size from the parent disk.
+Snapshots with a 4096 logical sector size can only be used to create Premium SSD v2 or Ultra Disks. They can't be used to create other disk types. Snapshots of disks with 4096 logical sector size are stored as VHDX, whereas snapshots of disks with 512 logical sector size are stored as VHD. Snapshots inherit the logical sector size from the parent disk.
-To determine whether or your Ultra Disk snapshot is a VHDX or a VHD, get the `LogicalSectorSize` property of the snapshot.
+To determine whether your Premium SSD v2 or Ultra Disk snapshot is a VHDX or a VHD, get the `LogicalSectorSize` property of the snapshot.
The following command displays the logical sector size of a snapshot:
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/instance-metadata-service.md
+
+ Title: Azure Instance Metadata Service for virtual machines
+description: Learn about the Azure Instance Metadata Service and how it provides information about currently running virtual machine instances in Linux.
+++++ Last updated : 02/22/2023++++
+# Azure Instance Metadata Service
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+
+The Azure Instance Metadata Service (IMDS) provides information about currently running virtual machine instances. You can use it to manage and configure your virtual machines.
+This information includes the SKU, storage, network configurations, and upcoming maintenance events. For a complete list of the data available, see the [Endpoint Categories Summary](#endpoint-categories).
+
+IMDS is available for running instances of virtual machines (VMs) and scale set instances. All endpoints support VMs created and managed by using [Azure Resource Manager](/rest/api/resources/). Only the Attested category and Network portion of the Instance category support VMs created by using the classic deployment model. The Attested endpoint does so only to a limited extent.
+
+IMDS is a REST API that's available at a well-known, non-routable IP address (`169.254.169.254`). You can only access it from within the VM. Communication between the VM and IMDS never leaves the host.
+Have your HTTP clients bypass web proxies within the VM when querying IMDS, and treat `169.254.169.254` the same as [`168.63.129.16`](../virtual-network/what-is-ip-address-168-63-129-16.md).
+
+## Usage
+
+### Access Azure Instance Metadata Service
+
+To access IMDS, create a VM from [Azure Resource Manager](/rest/api/resources/) or the [Azure portal](https://portal.azure.com), and use the following samples.
+For more examples, see [Azure Instance Metadata Samples](https://github.com/microsoft/azureimds).
+
+Here's sample code to retrieve all metadata for an instance. To access a specific data source, see [Endpoint Categories](#endpoint-categories) for an overview of all available features.
+
+**Request**
+
+> [!IMPORTANT]
+> This example bypasses proxies. You **must** bypass proxies when querying IMDS. See [Proxies](#proxies) for additional information.
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | ConvertTo-Json -Depth 64
+```
+
+`-NoProxy` requires PowerShell V6 or greater. See our [samples repository](https://github.com/microsoft/azureimds) for examples with older PowerShell versions.
+
+#### [Linux](#tab/linux/)
++
+```bash
+curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2021-02-01" | jq
+```
+
+The `jq` utility is available in many cases, but not all. If the `jq` utility is missing, use `| python -m json.tool` instead.
+++
+**Response**
+
+> [!NOTE]
+> The response is a JSON string. The following example response is pretty-printed for readability.
+++
+## Security and authentication
+
+The Instance Metadata Service is only accessible from within a running virtual machine instance on a non-routable IP address. VMs can only interact with their own metadata/functionality. The API is HTTP only and never leaves the host.
+
+In order to ensure that requests are directly intended for IMDS and prevent unintended or unwanted redirection of requests, requests:
+- **Must** contain the header `Metadata: true`
+- Must **not** contain an `X-Forwarded-For` header
+
+Any request that doesn't meet **both** of these requirements is rejected by the service.
+
+> [!IMPORTANT]
+> IMDS is **not** a channel for sensitive data. The API is unauthenticated and open to all processes on the VM. Information exposed through this service should be considered as shared information to all applications running inside the VM.
+
+If it isn't necessary for every process on the VM to access the IMDS endpoint, you can set local firewall rules to limit the access.
+For example, if only a known system service needs to access the instance metadata service, you can set a firewall rule on the IMDS endpoint that allows only the specific process(es) to access it, or denies access for the rest of the processes.
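+
+The following is a minimal sketch of a request that satisfies both requirements above: the `Metadata: true` header is set, no `X-Forwarded-For` header is added, and any system proxy configuration is bypassed. The endpoint and api-version come from this article; the use of Python's standard-library `urllib` client is an illustrative assumption.
+
+```python
+# A compliant IMDS request: Metadata header set, no X-Forwarded-For header,
+# and system proxy settings bypassed. The endpoint and api-version come from
+# this article; the helper itself is an illustrative sketch.
+import json
+import urllib.request
+
+IMDS_INSTANCE_URL = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
+
+# An empty ProxyHandler makes urllib ignore any configured proxies.
+opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
+
+request = urllib.request.Request(IMDS_INSTANCE_URL, headers={"Metadata": "true"})
+with opener.open(request, timeout=5) as response:
+    metadata = json.loads(response.read().decode("utf-8"))
+
+print(metadata["compute"]["name"])
+```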
++
+## Proxies
+
+IMDS is **not** intended to be used behind a proxy and doing so is unsupported. Most HTTP clients provide an option for you to disable proxies on your requests, and this functionality must be utilized when communicating with IMDS. Consult your client's documentation for details.
+
+> [!IMPORTANT]
+> Even if you don't know of any proxy configuration in your environment, **you still must override any default client proxy settings**. Proxy configurations can be automatically discovered, and failing to bypass such configurations exposes you to outage risks should the machine's configuration be changed in the future.
+
+## Rate limiting
+
+In general, requests to IMDS are limited to 5 requests per second (on a per VM basis). Requests exceeding this threshold will be rejected with 429 responses. Requests to the [Managed Identity](#managed-identity) category are limited to 20 requests per second and 5 concurrent requests (on a per VM basis).
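+
+As a rough illustration of how a caller might handle the 429 responses described above, the sketch below retries a throttled request with exponential backoff. The retry policy and helper name are illustrative assumptions, not service guidance.
+
+```python
+# Retry a throttled IMDS call with exponential backoff. The 5 requests/second
+# limit and the 429 response are described above; the retry policy itself is
+# an illustrative assumption.
+import time
+import urllib.error
+import urllib.request
+
+def query_imds(url, retries=3, backoff_seconds=1.0):
+    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
+    request = urllib.request.Request(url, headers={"Metadata": "true"})
+    for attempt in range(retries + 1):
+        try:
+            with opener.open(request, timeout=5) as response:
+                return response.read().decode("utf-8")
+        except urllib.error.HTTPError as error:
+            if error.code == 429 and attempt < retries:
+                time.sleep(backoff_seconds * (2 ** attempt))  # back off before retrying
+                continue
+            raise
+
+print(query_imds("http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-08-01&format=text"))
+```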
+
+## HTTP verbs
+
+The following HTTP verbs are currently supported:
+
+| Verb | Description |
+||-|
+| `GET` | Retrieve the requested resource
+
+## Parameters
+
+Endpoints may support required and/or optional parameters. See [Schema](#schema) and the documentation for the specific endpoint in question for details.
+
+### Query parameters
+
+IMDS endpoints support HTTP query string parameters. For example:
+
+```
+http://169.254.169.254/metadata/instance/compute?api-version=2021-01-01&format=json
+```
+
+Specifies the parameters:
+
+| Name | Value |
+||-|
+| `api-version` | `2021-01-01`
+| `format` | `json`
+
+Requests with duplicate query parameter names will be rejected.
+
+### Route parameters
+
+For some endpoints that return larger json blobs, we support appending route parameters to the request endpoint to filter down to a subset of the response:
+
+```
+http://169.254.169.254/metadata/<endpoint>/[<filter parameter>/...]?<query parameters>
+```
+The parameters correspond to the indexes/keys that you would use to walk down the JSON object if you were interacting with a parsed representation.
+
+For example, `/metadata/instance` returns the JSON object:
+```json
+{
+ "compute": { ... },
+ "network": {
+ "interface": [
+ {
+ "ipv4": {
+ "ipAddress": [{
+ "privateIpAddress": "10.144.133.132",
+ "publicIpAddress": ""
+ }],
+ "subnet": [{
+ "address": "10.144.133.128",
+ "prefix": "26"
+ }]
+ },
+ "ipv6": {
+ "ipAddress": [
+ ]
+ },
+ "macAddress": "0011AAFFBB22"
+ },
+ ...
+ ]
+ }
+}
+```
+
+If we want to filter the response down to just the compute property, we would send the request:
+```
+http://169.254.169.254/metadata/instance/compute?api-version=<version>
+```
+
+Similarly, if we want to filter to a nested property or specific array element we keep appending keys:
+```
+http://169.254.169.254/metadata/instance/network/interface/0?api-version=<version>
+```
+would filter to the first element from the `Network.interface` property and return:
+
+```json
+{
+ "ipv4": {
+ "ipAddress": [{
+ "privateIpAddress": "10.144.133.132",
+ "publicIpAddress": ""
+ }],
+ "subnet": [{
+ "address": "10.144.133.128",
+ "prefix": "26"
+ }]
+ },
+ "ipv6": {
+ "ipAddress": [
+ ]
+ },
+ "macAddress": "0011AAFFBB22"
+}
+```
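+
+The sketch below illustrates the same equivalence in Python: filtering with route parameters on the service side returns the same object you would reach by indexing the parsed full response with the same keys and indexes. The endpoints come from this article; the `urllib`-based helper is an illustrative assumption.
+
+```python
+# Route-parameter filtering on the service side returns the same object you
+# would reach by indexing the parsed JSON of the full response.
+import json
+import urllib.request
+
+def imds_get(path):
+    opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
+    request = urllib.request.Request(
+        "http://169.254.169.254/metadata/" + path,
+        headers={"Metadata": "true"},
+    )
+    with opener.open(request, timeout=5) as response:
+        return response.read().decode("utf-8")
+
+# Filtered by IMDS with route parameters.
+filtered = json.loads(imds_get("instance/network/interface/0?api-version=2021-02-01"))
+
+# Filtered locally by walking the parsed full response with the same keys.
+full = json.loads(imds_get("instance?api-version=2021-02-01"))
+walked = full["network"]["interface"][0]
+
+print(filtered["macAddress"] == walked["macAddress"])
+```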
+
+> [!NOTE]
+> When filtering to a leaf node, `format=json` doesn't work. For these queries `format=text` needs to be explicitly specified since the default format is json.
+
+## Schema
+
+### Data format
+
+By default, IMDS returns data in JSON format (`Content-Type: application/json`). However, endpoints that support response filtering (see [Route Parameters](#route-parameters)) also support the format `text`.
+
+To access a non-default response format, specify the requested format as a query string parameter in the request. For example:
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance?api-version=2017-08-01&format=text"
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -s -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance?api-version=2017-08-01&format=text"
+```
+++
+In JSON responses, all primitives are of type `string`, and missing or inapplicable values are always included but are set to an empty string.
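+
+As a small illustration of the point above, numeric fields therefore need explicit conversion, and an inapplicable value arrives as an empty string. The field path comes from the schema tables in this article; the helper name is hypothetical.
+
+```python
+# Because primitives are strings, convert numeric fields explicitly and treat
+# empty strings as "not applicable".
+def parse_os_disk_size_gb(instance_metadata):
+    raw = instance_metadata["compute"]["storageProfile"]["osDisk"]["diskSizeGB"]
+    return int(raw) if raw else None  # empty string means no value applies
+
+sample = {"compute": {"storageProfile": {"osDisk": {"diskSizeGB": "30"}}}}
+print(parse_os_disk_size_gb(sample))  # 30
+```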
+
+### Versioning
+
+IMDS is versioned and specifying the API version in the HTTP request is mandatory. The only exception to this requirement is the [versions](#versions) endpoint, which can be used to dynamically retrieve the available API versions.
+
+As newer versions are added, older versions can still be accessed for compatibility if your scripts have dependencies on specific data formats.
+
+When you don't specify a version, you get an error with a list of the newest supported versions:
+
+```json
+{
+ "error": "Bad request. api-version was not specified in the request. For more information refer to aka.ms/azureimds",
+ "newest-versions": [
+ "2020-10-01",
+ "2020-09-01",
+ "2020-07-15"
+ ]
+}
+```
+
+#### Supported API versions
+- 2021-12-13
+- 2021-11-15
+- 2021-11-01
+- 2021-10-01
+- 2021-08-01
+- 2021-05-01
+- 2021-03-01
+- 2021-02-01
+- 2021-01-01
+- 2020-12-01
+- 2020-10-01
+- 2020-09-01
+- 2020-07-15
+- 2020-06-01
+- 2019-11-01
+- 2019-08-15
+- 2019-08-01
+- 2019-06-04
+- 2019-06-01
+- 2019-04-30
+- 2019-03-11
+- 2019-02-01
+- 2018-10-01
+- 2018-04-02
+- 2018-02-01
+- 2017-12-01
+- 2017-10-01
+- 2017-08-01
+- 2017-04-02
+- 2017-03-01
+
+### Swagger
+
+A full Swagger definition for IMDS is available at: https://github.com/Azure/azure-rest-api-specs/blob/main/specification/imds/data-plane/readme.md
+
+## Regional availability
+
+The service is **generally available** in all Azure clouds.
+
+## Root endpoint
+
+The root endpoint is `http://169.254.169.254/metadata`.
+
+## Endpoint categories
+
+The IMDS API contains multiple endpoint categories representing different data sources, each of which contains one or more endpoints. See each category for details.
+
+| Category root | Description | Version introduced |
+||-|--|
+| `/metadata/attested` | See [Attested Data](#attested-data) | 2018-10-01
+| `/metadata/identity` | See [Managed Identity](#managed-identity) | 2018-02-01
+| `/metadata/instance` | See [Instance Metadata](#instance-metadata) | 2017-04-02
+| `/metadata/loadbalancer` | See [Load Balancer Metadata](#load-balancer-metadata) | 2020-10-01
+| `/metadata/scheduledevents` | See [Scheduled Events](#scheduled-events) | 2017-08-01
+| `/metadata/versions` | See [Versions](#versions) | N/A
+
+## Versions
+
+> [!NOTE]
+> This feature was released alongside version 2020-10-01, which is currently being rolled out and may not yet be available in every region.
+
+### List API versions
+
+Returns the set of supported API versions.
+
+```
+GET /metadata/versions
+```
+
+#### Parameters
+
+None (this endpoint is unversioned).
+
+#### Response
+
+```json
+{
+ "apiVersions": [
+ "2017-03-01",
+ "2017-04-02",
+ ...
+ ]
+}
+```
+
+## Instance metadata
+
+### Get VM metadata
+
+Exposes the important metadata for the VM instance, including compute, network, and storage.
+
+```
+GET /metadata/instance
+```
+
+#### Parameters
+
+| Name | Required/Optional | Description |
+||-|-|
+| `api-version` | Required | The version used to service the request.
+| `format` | Optional* | The format (`json` or `text`) of the response. *Note: May be required when using request parameters
+
+This endpoint supports response filtering via [route parameters](#route-parameters).
+
+#### Response
+++
+Schema breakdown:
+
+**Compute**
+
+| Data | Description | Version introduced |
+||-|--|
+| `azEnvironment` | Azure Environment where the VM is running | 2018-10-01
+| `additionalCapabilities.hibernationEnabled` | Identifies if hibernation is enabled on the VM | 2021-11-01
+| `customData` | This feature is deprecated and disabled [in IMDS](#frequently-asked-questions). It has been superseded by `userData` | 2019-02-01
+| `evictionPolicy` | Sets how a [Spot VM](spot-vms.md) will be evicted. | 2020-12-01
+| `extendedLocation.type` | Type of the extended location of the VM. | 2021-03-01
+| `extendedLocation.name` | Name of the extended location of the VM | 2021-03-01
+| `host.id` | Name of the host of the VM. Note that a VM will either have a host or a hostGroup but not both. | 2021-11-15
+| `hostGroup.id` | Name of the hostGroup of the VM. Note that a VM will either have a host or a hostGroup but not both. | 2021-11-15
+| `isHostCompatibilityLayerVm` | Identifies if the VM runs on the Host Compatibility Layer | 2020-06-01
+| `licenseType` | Type of license for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit). This is only present for AHB-enabled VMs | 2020-09-01
+| `location` | Azure Region the VM is running in | 2017-04-02
+| `name` | Name of the VM | 2017-04-02
+| `offer` | Offer information for the VM image and is only present for images deployed from Azure image gallery | 2017-04-02
+| `osProfile.adminUsername` | Specifies the name of the admin account | 2020-07-15
+| `osProfile.computerName` | Specifies the name of the computer | 2020-07-15
+| `osProfile.disablePasswordAuthentication` | Specifies if password authentication is disabled. This is only present for Linux VMs | 2020-10-01
+| `osType` | Linux or Windows | 2017-04-02
+| `placementGroupId` | [Placement Group](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md) of your scale set | 2017-08-01
+| `plan` | [Plan](/rest/api/compute/virtualmachines/createorupdate#plan) containing name, product, and publisher for a VM if it's an Azure Marketplace Image | 2018-04-02
+| `platformUpdateDomain` | [Update domain](availability.md) the VM is running in | 2017-04-02
+| `platformFaultDomain` | [Fault domain](availability.md) the VM is running in | 2017-04-02
+| `platformSubFaultDomain` | Sub fault domain the VM is running in, if applicable. | 2021-10-01
+| `priority` | Priority of the VM. Refer to [Spot VMs](spot-vms.md) for more information | 2020-12-01
+| `provider` | Provider of the VM | 2018-10-01
+| `publicKeys` | [Collection of Public Keys](/rest/api/compute/virtualmachines/createorupdate#sshpublickey) assigned to the VM and paths | 2018-04-02
+| `publisher` | Publisher of the VM image | 2017-04-02
+| `resourceGroupName` | [Resource group](../azure-resource-manager/management/overview.md) for your Virtual Machine | 2017-08-01
+| `resourceId` | The [fully qualified](/rest/api/resources/resources/getbyid) ID of the resource | 2019-03-11
+| `sku` | Specific SKU for the VM image | 2017-04-02
+| `securityProfile.secureBootEnabled` | Identifies if UEFI secure boot is enabled on the VM | 2020-06-01
+| `securityProfile.virtualTpmEnabled` | Identifies if the virtual Trusted Platform Module (TPM) is enabled on the VM | 2020-06-01
+| `securityProfile.encryptionAtHost` | Identifies if [Encryption at Host](disks-enable-host-based-encryption-portal.md) is enabled on the VM | 2021-11-01
+| `securityProfile.securityType` | Identifies if the VM is a [Trusted VM](trusted-launch.md) or a [Confidential VM](../confidential-computing/confidential-vm-overview.md) | 2021-12-13
+| `storageProfile` | See Storage Profile below | 2019-06-01
+| `subscriptionId` | Azure subscription for the Virtual Machine | 2017-08-01
+| `tags` | [Tags](../azure-resource-manager/management/tag-resources.md) for your Virtual Machine | 2017-08-01
+| `tagsList` | Tags formatted as a JSON array for easier programmatic parsing | 2019-06-04
+| `userData` | The set of data specified when the VM was created for use during or after provisioning (Base64 encoded) | 2021-01-01
+| `version` | Version of the VM image | 2017-04-02
+| `virtualMachineScaleSet.id` | ID of the [Virtual Machine Scale Set created with flexible orchestration](flexible-virtual-machine-scale-sets.md) the Virtual Machine is part of. This field isn't available for Virtual Machine Scale Sets created with uniform orchestration. | 2021-03-01
+| `vmId` | [Unique identifier](https://azure.microsoft.com/blog/accessing-and-using-azure-vm-unique-id/) for the VM. The referenced blog only applies to VMs that have SMBIOS < 2.6. For VMs that have SMBIOS >= 2.6, the UUID from DMI is displayed in little-endian format, so there's no requirement to switch bytes. | 2017-04-02
+| `vmScaleSetName` | [Virtual Machine Scale Set Name](../virtual-machine-scale-sets/overview.md) of your scale set | 2017-12-01
+| `vmSize` | [VM size](sizes.md) | 2017-04-02
+| `zone` | [Availability Zone](../availability-zones/az-overview.md) of your virtual machine | 2017-12-01
+
+† This version isn't fully available yet and may not be supported in all regions.
+
+**Storage profile**
+
+The storage profile of a VM is divided into three categories: image reference, OS disk, and data disks, plus an additional object for the local temporary disk.
+
+The image reference object contains the following information about the OS image:
+
+| Data | Description |
+||-|
+| `id` | Resource ID
+| `offer` | Offer of the platform or marketplace image
+| `publisher` | Image publisher
+| `sku` | Image sku
+| `version` | Version of the platform or marketplace image
+
+The OS disk object contains the following information about the OS disk used by the VM:
+
+| Data | Description |
+||-|
+| `caching` | Caching requirements
+| `createOption` | Information about how the VM was created
+| `diffDiskSettings` | Ephemeral disk settings
+| `diskSizeGB` | Size of the disk in GB
+| `image` | Source user image virtual hard disk
+| `managedDisk` | Managed disk parameters
+| `name` | Disk name
+| `vhd` | Virtual hard disk
+| `writeAcceleratorEnabled` | Whether or not writeAccelerator is enabled on the disk
+
+The data disks array contains a list of data disks attached to the VM. Each data disk object contains the following information:
+
+Data | Description | Version introduced |
+||--|--|
+| `bytesPerSecondThrottle`* | Disk read/write quota in bytes | 2021-05-01
+| `caching` | Caching requirements | 2019-06-01
+| `createOption` | Information about how the VM was created | 2019-06-01
+| `diffDiskSettings` | Ephemeral disk settings | 2019-06-01
+| `diskCapacityBytes`* | Size of disk in bytes | 2021-05-01
+| `diskSizeGB` | Size of the disk in GB | 2019-06-01
+| `encryptionSettings` | Encryption settings for the disk | 2019-06-01
+| `image` | Source user image virtual hard disk | 2019-06-01
+| `isSharedDisk`†† | Identifies if the disk is shared between resources | 2021-05-01
+| `isUltraDisk` | Identifies if the data disk is an Ultra Disk | 2021-05-01
+| `lun` | Logical unit number of the disk | 2019-06-01
+| `managedDisk` | Managed disk parameters | 2019-06-01
+| `name` | Disk name | 2019-06-01
+| `opsPerSecondThrottle`* | Disk read/write quota in IOPS | 2021-05-01
+| `osType` | Type of OS included in the disk | 2019-06-01
+| `vhd` | Virtual hard disk | 2019-06-01
+| `writeAcceleratorEnabled` | Whether or not writeAccelerator is enabled on the disk | 2019-06-01
+
+†† These fields are only populated for Ultra Disks; they are empty strings for non-Ultra Disks.
+
+The encryption settings blob contains data about how the disk is encrypted (if it's encrypted):
+
+Data | Description | Version introduced |
+||--|--|
+| `diskEncryptionKey.sourceVault.id` | The location of the disk encryption key | 2021-11-01
+| `diskEncryptionKey.secretUrl` | The location of the secret | 2021-11-01
+| `keyEncryptionKey.sourceVault.id` | The location of the key encryption key | 2021-11-01
+| `keyEncryptionKey.keyUrl` | The location of the key | 2021-11-01
++
+The resource disk object contains the size of the [Local Temp Disk](managed-disks-overview.md#temporary-disk) attached to the VM, if it has one, in kilobytes.
+If there's [no local temp disk for the VM](azure-vms-no-temp-disk.yml), this value is 0.
+
+| Data | Description | Version introduced |
+||-|--|
+| `resourceDisk.size` | Size of the local temp disk for the VM (in kB) | 2021-02-01
+
+**Network**
+
+| Data | Description | Version introduced |
+||-|--|
+| `ipv4.privateIpAddress` | Local IPv4 address of the VM | 2017-04-02
+| `ipv4.publicIpAddress` | Public IPv4 address of the VM | 2017-04-02
+| `subnet.address` | Subnet address of the VM | 2017-04-02
+| `subnet.prefix` | Subnet prefix, example 24 | 2017-04-02
+| `ipv6.ipAddress` | Local IPv6 address of the VM | 2017-04-02
+| `macAddress` | VM mac address | 2017-04-02
+
+> [!NOTE]
+> The NICs returned by the network call aren't guaranteed to be in order.
+
+### Get user data
+
+When creating a new VM, you can specify a set of data to be used during or after the VM provision, and retrieve it through IMDS. Check the end to end user data experience [here](user-data.md).
+
+To set up user data, utilize the quickstart template [here](https://aka.ms/ImdsUserDataArmTemplate). The sample below shows how to retrieve this data through IMDS. This feature is released with version `2021-01-01` and above.
+
+> [!NOTE]
+> Security notice: IMDS is open to all applications on the VM; sensitive data shouldn't be placed in the user data.
++
+#### [Windows](#tab/windows/)
+
+```powershell
+$userData = Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text"
+[System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($userData))
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" | base64 --decode
+```
++++
+#### Sample 1: Tracking VM running on Azure
+
+As a service provider, you may need to track the number of VMs running your software, or have agents that need to track the uniqueness of the VM. To get a unique ID for a VM, use the `vmId` field from Instance Metadata Service.
+
+**Request**
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-08-01&format=text"
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/vmId?api-version=2017-08-01&format=text"
+```
+++
+**Response**
+
+```
+5c08b38e-4d57-4c23-ac45-aca61037f084
+```
+
+#### Sample 2: Placement of different data replicas
+
+For certain scenarios, placement of different data replicas is of prime importance. For example, [HDFS replica placement](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html#Replica_Placement:_The_First_Baby_Steps)
+or container placement via an [orchestrator](https://kubernetes.io/docs/user-guide/node-selection/) might require you to know the `platformFaultDomain` and `platformUpdateDomain` the VM is running on.
+You can also use [Availability Zones](../availability-zones/az-overview.md) for the instances to make these decisions.
+You can query this data directly via IMDS.
+
+**Request**
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/platformFaultDomain?api-version=2017-08-01&format=text"
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/platformFaultDomain?api-version=2017-08-01&format=text"
+```
+++
+**Response**
+
+```
+0
+```
+
+#### Sample 3: Get VM tags
+
+VM tags are included in the instance API under the instance/compute/tags endpoint.
+Tags may have been applied to your Azure VMs to logically organize them into a taxonomy. The tags assigned to a VM can be retrieved by using the request below.
+
+**Request**
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/tags?api-version=2017-08-01&format=text"
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/tags?api-version=2017-08-01&format=text"
+```
+++
+**Response**
+
+```
+Department:IT;ReferenceNumber:123456;TestStatus:Pending
+```
+
+The `tags` field is a string with the tags delimited by semicolons. This output can be a problem if semicolons are used in the tags themselves. If a parser is written to programmatically extract the tags, you should rely on the `tagsList` field instead. The `tagsList` field is a JSON array with no delimiters, and is consequently easier to parse. The tagsList assigned to a VM can be retrieved by using the request below.
+
+**Request**
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/tagsList?api-version=2019-06-04" | ConvertTo-Json -Depth 64
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/tagsList?api-version=2019-06-04" | jq
+```
+
+The `jq` utility is available in many cases, but not all. If the `jq` utility is missing, use `| python -m json.tool` instead.
+++
+**Response**
+
+#### [Windows](#tab/windows/)
+
+```json
+{
+ "value": [
+ {
+ "name": "Department",
+ "value": "IT"
+ },
+ {
+ "name": "ReferenceNumber",
+ "value": "123456"
+ },
+ {
+ "name": "TestStatus",
+ "value": "Pending"
+ }
+ ],
+ "Count": 3
+}
+```
+
+#### [Linux](#tab/linux/)
+
+```json
+[
+ {
+ "name": "Department",
+ "value": "IT"
+ },
+ {
+ "name": "ReferenceNumber",
+ "value": "123456"
+ },
+ {
+ "name": "TestStatus",
+ "value": "Pending"
+ }
+]
+```
++++
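+As a rough sketch, the difference between the two fields looks like this in Python. The example values come from the responses above; the helper name and the naive split are illustrative assumptions.
+
+```python
+# Contrast the semicolon-delimited `tags` string with the structured `tagsList`
+# array shown in the responses above.
+def tags_string_to_dict(tags):
+    # Naive split; it breaks if a tag name or value itself contains ';' or ':'.
+    return dict(pair.split(":", 1) for pair in tags.split(";") if pair)
+
+tags_string = "Department:IT;ReferenceNumber:123456;TestStatus:Pending"
+tags_list = [
+    {"name": "Department", "value": "IT"},
+    {"name": "ReferenceNumber", "value": "123456"},
+    {"name": "TestStatus", "value": "Pending"},
+]
+
+print(tags_string_to_dict(tags_string))
+print({item["name"]: item["value"] for item in tags_list})  # no delimiter parsing needed
+```
+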
+#### Sample 4: Get more information about the VM during support case
+
+As a service provider, you may get a support call where you would like to know more information about the VM. Asking the customer to share the compute metadata gives the support professional basic information about the kind of VM running on Azure.
+
+**Request**
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute?api-version=2020-09-01" | ConvertTo-Json -Depth 64
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute?api-version=2020-09-01"
+```
+++
+**Response**
+
+> [!NOTE]
+> The response is a JSON string. The following example response is pretty-printed for readability.
+
+#### [Windows](#tab/windows/)
+```json
+{
+ "azEnvironment": "AZUREPUBLICCLOUD",
+ "extendedLocation": {
+ "type": "edgeZone",
+ "name": "microsoftlosangeles"
+ },
+ "evictionPolicy": "",
+ "additionalCapabilities": {
+ "hibernationEnabled": "false"
+ },
+ "hostGroup": {
+ "id": "testHostGroupId"
+ },
+ "isHostCompatibilityLayerVm": "true",
+ "licenseType": "Windows_Client",
+ "location": "westus",
+ "name": "examplevmname",
+ "offer": "WindowsServer",
+ "osProfile": {
+ "adminUsername": "admin",
+ "computerName": "examplevmname",
+ "disablePasswordAuthentication": "true"
+ },
+ "osType": "Windows",
+ "placementGroupId": "f67c14ab-e92c-408c-ae2d-da15866ec79a",
+ "plan": {
+ "name": "planName",
+ "product": "planProduct",
+ "publisher": "planPublisher"
+ },
+ "platformFaultDomain": "36",
+ "platformUpdateDomain": "42",
+ "priority": "Regular",
+ "publicKeys": [{
+ "keyData": "ssh-rsa 0",
+ "path": "/home/user/.ssh/authorized_keys0"
+ },
+ {
+ "keyData": "ssh-rsa 1",
+ "path": "/home/user/.ssh/authorized_keys1"
+ }
+ ],
+ "publisher": "RDFE-Test-Microsoft-Windows-Server-Group",
+ "resourceGroupName": "macikgo-test-may-23",
+ "resourceId": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/virtualMachines/examplevmname",
+ "securityProfile": {
+ "secureBootEnabled": "true",
+ "virtualTpmEnabled": "false",
+ "encryptionAtHost": "true",
+ "securityType": "TrustedLaunch"
+ },
+ "sku": "2019-Datacenter",
+ "storageProfile": {
+ "dataDisks": [{
+ "bytesPerSecondThrottle": "979202048",
+ "caching": "None",
+ "createOption": "Empty",
+ "diskCapacityBytes": "274877906944",
+ "diskSizeGB": "1024",
+ "image": {
+ "uri": ""
+ },
+ "isSharedDisk": "false",
+ "isUltraDisk": "true",
+ "lun": "0",
+ "managedDisk": {
+ "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/MicrosoftCompute/disks/exampledatadiskname",
+ "storageAccountType": "StandardSSD_LRS"
+ },
+ "name": "exampledatadiskname",
+ "opsPerSecondThrottle": "65280",
+ "vhd": {
+ "uri": ""
+ },
+ "writeAcceleratorEnabled": "false"
+ }],
+ "imageReference": {
+ "id": "",
+ "offer": "WindowsServer",
+ "publisher": "MicrosoftWindowsServer",
+ "sku": "2019-Datacenter",
+ "version": "latest"
+ },
+ "osDisk": {
+ "caching": "ReadWrite",
+ "createOption": "FromImage",
+ "diskSizeGB": "30",
+ "diffDiskSettings": {
+ "option": "Local"
+ },
+ "encryptionSettings": {
+ "enabled": "false",
+ "diskEncryptionKey": {
+ "sourceVault": {
+ "id": "/subscriptions/test-source-guid/resourceGroups/testrg/providers/Microsoft.KeyVault/vaults/test-kv"
+ },
+ "secretUrl": "https://test-disk.vault.azure.net/secrets/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
+ },
+ "keyEncryptionKey": {
+ "sourceVault": {
+ "id": "/subscriptions/test-key-guid/resourceGroups/testrg/providers/Microsoft.KeyVault/vaults/test-kv"
+ },
+ "keyUrl": "https://test-key.vault.azure.net/secrets/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
+ }
+ },
+ "image": {
+ "uri": ""
+ },
+ "managedDisk": {
+ "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/disks/exampleosdiskname",
+ "storageAccountType": "StandardSSD_LRS"
+ },
+ "name": "exampleosdiskname",
+ "osType": "Windows",
+ "vhd": {
+ "uri": ""
+ },
+ "writeAcceleratorEnabled": "false"
+ },
+ "resourceDisk": {
+ "size": "4096"
+ }
+ },
+ "subscriptionId": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
+ "tags": "baz:bash;foo:bar",
+ "version": "15.05.22",
+ "virtualMachineScaleSet": {
+ "id": "/subscriptions/xxxxxxxx-xxxxx-xxx-xxx-xxxx/resourceGroups/resource-group-name/providers/Microsoft.Compute/virtualMachineScaleSets/virtual-machine-scale-set-name"
+ },
+ "vmId": "02aab8a4-74ef-476e-8182-f6d2ba4166a6",
+ "vmScaleSetName": "crpteste9vflji9",
+ "vmSize": "Standard_A3",
+ "zone": ""
+}
+```
+
+#### [Linux](#tab/linux/)
+```json
+{
+ "azEnvironment": "AZUREPUBLICCLOUD",
+ "extendedLocation": {
+ "type": "edgeZone",
+ "name": "microsoftlosangeles"
+ },
+ "evictionPolicy": "",
+ "additionalCapabilities": {
+ "hibernationEnabled": "false"
+ },
+ "hostGroup": {
+ "id": "testHostGroupId"
+ },
+ "isHostCompatibilityLayerVm": "true",
+ "licenseType": "Windows_Client",
+ "location": "westus",
+ "name": "examplevmname",
+ "offer": "UbuntuServer",
+ "osProfile": {
+ "adminUsername": "admin",
+ "computerName": "examplevmname",
+ "disablePasswordAuthentication": "true"
+ },
+ "osType": "Linux",
+ "placementGroupId": "f67c14ab-e92c-408c-ae2d-da15866ec79a",
+ "plan": {
+ "name": "planName",
+ "product": "planProduct",
+ "publisher": "planPublisher"
+ },
+ "platformFaultDomain": "36",
+ "platformUpdateDomain": "42",
+ "Priority": "Regular",
+ "publicKeys": [{
+ "keyData": "ssh-rsa 0",
+ "path": "/home/user/.ssh/authorized_keys0"
+ },
+ {
+ "keyData": "ssh-rsa 1",
+ "path": "/home/user/.ssh/authorized_keys1"
+ }
+ ],
+ "publisher": "Canonical",
+ "resourceGroupName": "macikgo-test-may-23",
+ "resourceId": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/virtualMachines/examplevmname",
+ "securityProfile": {
+ "secureBootEnabled": "true",
+ "virtualTpmEnabled": "false",
+ "encryptionAtHost": "true",
+ "securityType": "TrustedLaunch"
+ },
+ "sku": "18.04-LTS",
+ "storageProfile": {
+ "dataDisks": [{
+ "bytesPerSecondThrottle": "979202048",
+ "caching": "None",
+ "createOption": "Empty",
+ "diskCapacityBytes": "274877906944",
+ "diskSizeGB": "1024",
+ "image": {
+ "uri": ""
+ },
+ "isSharedDisk": "false",
+ "isUltraDisk": "true",
+ "lun": "0",
+ "managedDisk": {
+ "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/disks/exampledatadiskname",
+ "storageAccountType": "StandardSSD_LRS"
+ },
+ "name": "exampledatadiskname",
+ "opsPerSecondThrottle": "65280",
+ "vhd": {
+ "uri": ""
+ },
+ "writeAcceleratorEnabled": "false"
+ }],
+ "imageReference": {
+ "id": "",
+ "offer": "UbuntuServer",
+ "publisher": "Canonical",
+ "sku": "16.04.0-LTS",
+ "version": "latest"
+ },
+ "osDisk": {
+ "caching": "ReadWrite",
+ "createOption": "FromImage",
+ "diskSizeGB": "30",
+ "diffDiskSettings": {
+ "option": "Local"
+ },
+ "encryptionSettings": {
+ "enabled": "false",
+ "diskEncryptionKey": {
+ "sourceVault": {
+ "id": "/subscriptions/test-source-guid/resourceGroups/testrg/providers/Microsoft.KeyVault/vaults/test-kv"
+ },
+ "secretUrl": "https://test-disk.vault.azure.net/secrets/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
+ },
+ "keyEncryptionKey": {
+ "sourceVault": {
+ "id": "/subscriptions/test-key-guid/resourceGroups/testrg/providers/Microsoft.KeyVault/vaults/test-kv"
+ },
+ "keyUrl": "https://test-key.vault.azure.net/secrets/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
+ }
+ },
+ "image": {
+ "uri": ""
+ },
+ "managedDisk": {
+ "id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-23/providers/Microsoft.Compute/disks/exampleosdiskname",
+ "storageAccountType": "StandardSSD_LRS"
+ },
+ "name": "exampleosdiskname",
+ "osType": "linux",
+ "vhd": {
+ "uri": ""
+ },
+ "writeAcceleratorEnabled": "false"
+ },
+ "resourceDisk": {
+ "size": "4096"
+ }
+ },
+ "subscriptionId": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
+ "tags": "baz:bash;foo:bar",
+ "version": "15.05.22",
+ "virtualMachineScaleSet": {
+ "id": "/subscriptions/xxxxxxxx-xxxxx-xxx-xxx-xxxx/resourceGroups/resource-group-name/providers/Microsoft.Compute/virtualMachineScaleSets/virtual-machine-scale-set-name"
+ },
+ "vmId": "02aab8a4-74ef-476e-8182-f6d2ba4166a6",
+ "vmScaleSetName": "crpteste9vflji9",
+ "vmSize": "Standard_A3",
+ "zone": ""
+}
+```
+++
+#### Sample 5: Get the Azure Environment where the VM is running
+
+Azure has various sovereign clouds like [Azure Government](https://azure.microsoft.com/overview/clouds/government/). Sometimes you need the Azure Environment to make some runtime decisions. The following sample shows you how you can achieve this behavior.
+
+**Request**
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/compute/azEnvironment?api-version=2018-10-01&format=text"
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/compute/azEnvironment?api-version=2018-10-01&format=text"
+```
+++
+**Response**
+
+```
+AzurePublicCloud
+```
+
+The clouds and their corresponding Azure environment values are listed here.
+
+| Cloud | Azure environment |
+|-|-|
+| [All generally available global Azure regions](https://azure.microsoft.com/regions/) | AzurePublicCloud
+| [Azure Government](https://azure.microsoft.com/overview/clouds/government/) | AzureUSGovernmentCloud
+| [Azure China 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | AzureChinaCloud
+| [Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | AzureGermanCloud
++
+#### Sample 6: Retrieve network information
+
+**Request**
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/network?api-version=2017-08-01" | ConvertTo-Json -Depth 64
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/network?api-version=2017-08-01"
+```
+++
+**Response**
+
+```json
+{
+ "interface": [
+ {
+ "ipv4": {
+ "ipAddress": [
+ {
+ "privateIpAddress": "10.1.0.4",
+ "publicIpAddress": "X.X.X.X"
+ }
+ ],
+ "subnet": [
+ {
+ "address": "10.1.0.0",
+ "prefix": "24"
+ }
+ ]
+ },
+ "ipv6": {
+ "ipAddress": []
+ },
+ "macAddress": "000D3AF806EC"
+ }
+ ]
+}
+```
+
+#### Sample 7: Retrieve public IP address
+
+#### [Windows](#tab/windows/)
+
+```powershell
+Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text"
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text"
+```
+++
+>[!NOTE]
+> * If you're looking to retrieve IMDS information for a **Standard** SKU public IP address, review the [Load Balancer Metadata API](../load-balancer/howto-load-balancer-imds.md?tabs=windows) for more information.
+
+## Attested data
+
+### Get Attested data
+
+IMDS helps provide guarantees that the data provided comes from Azure. Microsoft signs part of this information, so you can confirm that an image in Azure Marketplace is the one you're running on Azure.
+
+```
+GET /metadata/attested/document
+```
+
+#### Parameters
+
+| Name | Required/Optional | Description |
+||-|-|
+| `api-version` | Required | The version used to service the request.
+| `nonce` | Optional | A 10-digit string that serves as a cryptographic nonce. If no value is provided, IMDS uses the current UTC timestamp.
+
+#### Response
+
+```json
+{
+ "encoding":"pkcs7",
+ "signature":"MIIEEgYJKoZIhvcNAQcCoIIEAzCCAgMBAAGjYDBeMFwGA1UdAQRVMFOAENnYkHLa04Ut4Mpt7TkJFfyhLTArMSkwJwYDVQQDEyB0ZXN0c3ViZG9tYWluLm1ldGFkYXRhLmF6dXJlLmNvbYIQZ8VuSofHbJRAQNBNpiASdDANBgkqhkiG9w0BAQQFAAOBgQCLSM6aX5Bs1KHCJp4VQtxZPzXF71rVKCocHy3N9PTJQ9Fpnd+bYw2vSpQHg/AiG82WuDFpPReJvr7Pa938mZqW9HUOGjQKK2FYDTg6fXD8pkPdyghlX5boGWAMMrf7bFkup+lsT+n2tRw2wbNknO1tQ0wICtqy2VqzWwLi45RBwTGB6DCB5QIBATA/MCsxKTAnBgNVBAMTIHRlc3RzdWJkb21haW4ubWV0YWRhdGEuYXp1cmUuY29tAhBnxW5Kh8dslEBA0E2mIBJ0MA0GCSqGSIb3DQEBCwUAMA0GCSqGSIb3DQEBAQUABIGAld1BM/yYIqqv8SDE4kjQo3Ul/IKAVR8ETKcve5BAdGSNkTUooUGVniTXeuvDj5NkmazOaKZp9fEtByqqPOyw/nlXaZgOO44HDGiPUJ90xVYmfeK6p9RpJBu6kiKhnnYTelUk5u75phe5ZbMZfBhuPhXmYAdjc7Nmw97nx8NnprQ="
+}
+```
+
+The signature blob is a [pkcs7](https://aka.ms/pkcs7)-signed version of the document. It contains the certificate used for signing along with certain VM-specific details.
+
+For VMs created by using Azure Resource Manager, the document includes `vmId`, `sku`, `nonce`, `subscriptionId`, `timeStamp` for creation and expiry of the document, and the plan information about the image. The plan information is only populated for Azure Marketplace images.
+
+For VMs created by using the classic deployment model, only the `vmId` and `subscriptionId` are guaranteed to be populated. You can extract the certificate from the response, and use it to confirm that the response is valid and is coming from Azure.
+
+The decoded document contains the following fields:
+
+| Data | Description | Version introduced |
+||-|--|
+| `licenseType` | Type of license for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit). This is only present for AHB-enabled VMs. | 2020-09-01
+| `nonce` | A string that can be optionally provided with the request. If no `nonce` was supplied, the current Coordinated Universal Time timestamp is used. | 2018-10-01
+| `plan` | The [Azure Marketplace Image plan](/rest/api/compute/virtualmachines/createorupdate#plan). Contains the plan ID (name), product image or offer (product), and publisher ID (publisher). | 2018-10-01
+| `timestamp.createdOn` | The UTC timestamp for when the signed document was created | 2018-10-01
+| `timestamp.expiresOn` | The UTC timestamp for when the signed document expires | 2018-10-01
+| `vmId` | [Unique identifier](https://azure.microsoft.com/blog/accessing-and-using-azure-vm-unique-id/) for the VM | 2018-10-01
+| `subscriptionId` | Azure subscription for the Virtual Machine | 2019-04-30
+| `sku` | Specific SKU for the VM image (correlates to `compute/sku` property from the Instance Metadata endpoint \[`/metadata/instance`\]) | 2019-11-01
+
+> [!NOTE]
+> For Classic (non-Azure Resource Manager) VMs, only the `vmId` is guaranteed to be populated.
+
+Example document:
+```json
+{
+ "nonce":"20201130-211924",
+ "plan":{
+ "name":"planName",
+ "product":"planProduct",
+ "publisher":"planPublisher"
+ },
+ "sku":"Windows-Server-2012-R2-Datacenter",
+ "subscriptionId":"8d10da13-8125-4ba9-a717-bf7490507b3d",
+ "timeStamp":{
+ "createdOn":"11/30/20 21:19:19 -0000",
+ "expiresOn":"11/30/20 21:19:24 -0000"
+ },
+ "vmId":"02aab8a4-74ef-476e-8182-f6d2ba4166a6"
+}
+```
+
+#### Sample 1: Validate that the VM is running in Azure
+
+Vendors in Azure Marketplace want to ensure that their software is licensed to run only in Azure. If someone copies the VHD to an on-premises environment, the vendor needs to be able to detect that. Through IMDS, these vendors can get signed data that guarantees the response comes only from Azure.
+
+> [!NOTE]
+> This sample requires the `jq` utility to be installed.
+
+**Validation**
+
+#### [Windows](#tab/windows/)
+
+```powershell
+# Get the signature
+$attestedDoc = Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri http://169.254.169.254/metadata/attested/document?api-version=2020-09-01
+# Decode the signature
+$signature = [System.Convert]::FromBase64String($attestedDoc.signature)
+```
+
+Verify that the signature is from Microsoft Azure, and check the certificate chain for errors.
+
+```powershell
+# Get certificate chain
+$cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]($signature)
+$chain = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Chain
+$chain.Build($cert)
+# Print the Subject of each certificate in the chain
+foreach($element in $chain.ChainElements)
+{
+ Write-Host $element.Certificate.Subject
+}
+
+# Get the content of the signed document
+Add-Type -AssemblyName System.Security
+$signedCms = New-Object -TypeName System.Security.Cryptography.Pkcs.SignedCms
+$signedCms.Decode($signature);
+$content = [System.Text.Encoding]::UTF8.GetString($signedCms.ContentInfo.Content)
+Write-Host "Attested data: " $content
+$json = $content | ConvertFrom-Json
+# Do additional validation here
+```
+
+#### [Linux](#tab/linux/)
+
+```bash
+# Get the signature
+curl --silent -H Metadata:True --noproxy "*" "http://169.254.169.254/metadata/attested/document?api-version=2020-09-01" | jq -r '.["signature"]' > signature
+# Decode the signature
+base64 -d signature > decodedsignature
+# Get PKCS7 format
+openssl pkcs7 -in decodedsignature -inform DER -out sign.pk7
+# Extract the signer certificate from the pkcs7 blob
+openssl pkcs7 -in decodedsignature -inform DER -print_certs -out signer.pem
+# Get the intermediate certificate
+curl -s -o intermediate.cer "$(openssl x509 -in signer.pem -text -noout | grep " CA Issuers -" | awk -FURI: '{print $2}')"
+openssl x509 -inform der -in intermediate.cer -out intermediate.pem
+# Verify the contents
+openssl smime -verify -in sign.pk7 -inform pem -noverify
+```
+
+**Results**
+
+```json
+Verification successful
+{
+ "nonce": "20181128-001617",
+ "plan":
+ {
+ "name": "",
+ "product": "",
+ "publisher": ""
+ },
+ "timeStamp":
+ {
+ "createdOn": "11/28/18 00:16:17 -0000",
+ "expiresOn": "11/28/18 06:16:17 -0000"
+ },
+ "vmId": "d3e0e374-fda6-4649-bbc9-7f20dc379f34",
+ "licenseType": "Windows_Client",
+ "subscriptionId": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
+ "sku": "RS3-Pro"
+}
+```
+
+Verify that the signature is from Microsoft Azure, and check the certificate chain for errors.
+
+```bash
+# Verify the subject name for the main certificate
+openssl x509 -noout -subject -in signer.pem
+# Verify the issuer for the main certificate
+openssl x509 -noout -issuer -in signer.pem
+#Validate the subject name for intermediate certificate
+openssl x509 -noout -subject -in intermediate.pem
+# Verify the issuer for the intermediate certificate
+openssl x509 -noout -issuer -in intermediate.pem
+# Verify the certificate chain, for Azure China 21Vianet the intermediate certificate will be from DigiCert Global Root CA
+openssl verify -verbose -CAfile /etc/ssl/certs/DigiCert_Global_Root.pem -untrusted intermediate.pem signer.pem
+```
+++
+If you provided a `nonce` parameter in the initial request, you can compare it with the `nonce` echoed in the signed document.
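+
+For example, here's a minimal sketch that builds on the Linux validation sample above (the `NONCE` value shown is a hypothetical 10-digit string you choose yourself; `jq` is assumed to be installed):
+
+```bash
+# Choose your own 10-digit nonce and include it in the request.
+NONCE="1234567890"
+curl --silent -H Metadata:True --noproxy "*" \
+  "http://169.254.169.254/metadata/attested/document?api-version=2020-09-01&nonce=${NONCE}" \
+  | jq -r '.signature' | base64 -d > decodedsignature
+# Verify the signature and compare the nonce echoed in the document with the one you sent.
+doc_nonce=$(openssl smime -verify -inform DER -in decodedsignature -noverify 2>/dev/null | jq -r '.nonce')
+[ "$doc_nonce" = "$NONCE" ] && echo "nonce matches" || echo "nonce mismatch"
+```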
+
+> [!NOTE]
+> The certificate for the public cloud and each sovereign cloud will be different.
+
+| Cloud | Certificate |
+|-|-|
+| [All generally available global Azure regions](https://azure.microsoft.com/regions/) | *.metadata.azure.com
+| [Azure Government](https://azure.microsoft.com/overview/clouds/government/) | *.metadata.azure.us
+| [Azure China 21Vianet](https://azure.microsoft.com/global-infrastructure/china/) | *.metadata.azure.cn
+| [Azure Germany](https://azure.microsoft.com/overview/clouds/germany/) | *.metadata.microsoftazure.de
+
+> [!NOTE]
+> The certificates might not have an exact match of `metadata.azure.com` for the public cloud. For this reason, the certificate validation should allow a common name from any `.metadata.azure.com` subdomain.
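+
+For example, a minimal sketch of that check for the public cloud, reusing the `signer.pem` file extracted in the Linux validation sample above:
+
+```bash
+# Extract the signer's common name and accept any *.metadata.azure.com subdomain.
+cn=$(openssl x509 -noout -subject -in signer.pem -nameopt multiline | awk '/commonName/ {print $3}')
+case "$cn" in
+  metadata.azure.com | *.metadata.azure.com) echo "Common name OK: $cn" ;;
+  *) echo "Unexpected common name: $cn" ;;
+esac
+```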
+
+In cases where the intermediate certificate can't be downloaded due to network constraints during validation, you can pin the intermediate certificate. Azure rolls over the certificates, which is standard PKI practice. You must update the pinned certificates when rollover happens. Whenever a change to update the intermediate certificate is planned, the Azure blog is updated, and Azure customers are notified.
+
+You can find the intermediate certificates on [this page](../security/fundamentals/azure-CA-details.md). The intermediate certificates for each of the regions can be different.
+
+> [!NOTE]
+> The intermediate certificate for Azure China 21Vianet will be from DigiCert Global Root CA, instead of Baltimore.
+> If you pinned the intermediate certificates for Azure China as part of a root chain authority change, the intermediate certificates must be updated.
+
+> [!NOTE]
+> Starting February 2022, our Attested Data certificates will be impacted by a TLS change. Due to this, the root CA will change from Baltimore CyberTrust to DigiCert Global G2 only for Public and US Government clouds. If you have the Baltimore CyberTrust cert or other intermediate certificates listed in **[this post](https://techcommunity.microsoft.com/t5/azure-governance-and-management/azure-instance-metadata-service-attested-data-tls-critical/ba-p/2888953)** pinned, please follow the instructions listed there **immediately** to prevent any disruptions from using the Attested Data endpoint.
+
+## Managed identity
+
+You can enable a system-assigned managed identity on the VM. You can also assign one or more user-assigned managed identities to the VM.
+You can then request tokens for managed identities from IMDS. Use these tokens to authenticate with other Azure services, such as Azure Key Vault.
+
+For detailed steps to enable this feature, see [Acquire an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md).
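+
+For example, here's a minimal sketch of requesting a token from inside the VM (the Key Vault resource URI is just an illustrative choice; substitute the resource you need a token for, and `jq` is assumed to be installed):
+
+```bash
+# Request an access token for Azure Key Vault from the VM's managed identity endpoint.
+curl --silent -H Metadata:true --noproxy "*" \
+  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net" \
+  | jq -r '.access_token'
+```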
+
+## Load Balancer Metadata
+When you place virtual machine or virtual machine scale set instances behind an Azure Standard Load Balancer, you can use IMDS to retrieve metadata related to the load balancer and the instances. For more information, see [Retrieve load balancer information](../load-balancer/instance-metadata-service-load-balancer.md).
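+
+For example, a minimal sketch of querying the endpoint from inside an instance behind a Standard Load Balancer (the api-version shown is an assumption; use the version documented for this endpoint in your environment):
+
+```bash
+# Retrieve load balancer metadata for this instance.
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/loadbalancer?api-version=2020-10-01"
+```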
+
+## Scheduled events
+You can obtain the status of the scheduled events by using IMDS. You can then specify a set of actions to run when these events occur. For more information, see [Scheduled events for Linux](./linux/scheduled-events.md) or [Scheduled events for Windows](./windows/scheduled-events.md).
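+
+For example, a minimal sketch of polling the endpoint (the first call enables Scheduled Events for the VM and can take a couple of minutes to return; the api-version shown is an assumption):
+
+```bash
+# Poll for pending scheduled events such as Reboot, Redeploy, Freeze, or Preempt.
+curl -H Metadata:true --noproxy "*" "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
+```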
++
+## Sample code in different languages
+
+The following table lists samples of calling IMDS by using different languages inside the VM:
+
+| Language | Example |
+|-||
+| Bash | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.sh
+| C# | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.cs
+| Go | https://github.com/Microsoft/azureimds/blob/master/imdssample.go
+| Java | https://github.com/Microsoft/azureimds/blob/master/imdssample.java
+| NodeJS | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.js
+| Perl | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.pl
+| PowerShell | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.ps1
+| Puppet | https://github.com/keirans/azuremetadata
+| Python | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.py
+| Ruby | https://github.com/Microsoft/azureimds/blob/master/IMDSSample.rb
+
+## Errors and debugging
+
+If a data element isn't found or a request is malformed, the Instance Metadata Service returns standard HTTP errors. For example:
+
+| HTTP status code | Reason |
+||--|
+| `200 OK` | The request was successful.
+| `400 Bad Request` | Missing `Metadata: true` header or missing parameter `format=json` when querying a leaf node
+| `404 Not Found` | The requested element doesn't exist
+| `405 Method Not Allowed` | The HTTP method (verb) isn't supported on the endpoint.
+| `410 Gone` | Retry after some time, for a maximum of 70 seconds
+| `429 Too Many Requests` | API [Rate Limits](#rate-limiting) have been exceeded
+| `500 Internal Server Error` | Retry after some time
+
+## Frequently asked questions
+
+- I'm getting the error `400 Bad Request, Required metadata header not specified`. What does this mean?
+ - IMDS requires the header `Metadata: true` to be passed in the request. Passing this header in the REST call allows access to IMDS.
+
+- Why am I not getting compute information for my VM?
+ - Currently, IMDS only supports instances created with Azure Resource Manager.
+
+- I created my VM through Azure Resource Manager some time ago. Why am I not seeing compute metadata information?
+ - If you created your VM after September 2016, add a [tag](../azure-resource-manager/management/tag-resources.md) to start seeing compute metadata. If you created your VM before September 2016, add or remove extensions or data disks to the VM instance to refresh metadata.
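+
+    For example, a minimal sketch of adding a tag with the Azure CLI (the tag name and value are arbitrary placeholders; any tag change works):
+
+    ```bash
+    # Adding or changing a tag prompts IMDS to start returning compute metadata for older VMs.
+    az vm update --resource-group <Resource_Group> --name <VM_Name> --set tags.imdsRefresh=true
+    ```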
+
+- Is user data the same as custom data?
+  - User data offers similar functionality to custom data, allowing you to pass your own metadata to the VM instance. The difference is that user data is retrieved through IMDS and persists throughout the lifetime of the VM instance. The existing custom data feature continues to work as described in [this article](custom-data.md). However, you can only get custom data through the local system folder, not through IMDS.
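+
+    For example, a minimal sketch of reading user data from inside the VM (this assumes an api-version that exposes the `userData` field, such as `2021-01-01` or later):
+
+    ```bash
+    # User data is returned base64-encoded; decode it after retrieval.
+    curl --silent -H Metadata:true --noproxy "*" \
+      "http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text" \
+      | base64 -d
+    ```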
+
+- Why am I not seeing all data populated for a new version?
+ - If you created your VM after September 2016, add a [tag](../azure-resource-manager/management/tag-resources.md) to start seeing compute metadata. If you created your VM before September 2016, add or remove extensions or data disks to the VM instance to refresh metadata.
+
+- Why am I getting the error `500 Internal Server Error` or `410 Resource Gone`?
+ - Retry your request. For more information, see [Transient fault handling](/azure/architecture/best-practices/transient-faults). If the problem persists, create a support issue in the Azure portal for the VM.
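+
+    For example, a minimal retry sketch with a simple linear backoff (the attempt count, sleep values, and api-version are illustrative; tune them for your workload):
+
+    ```bash
+    # Retry the IMDS call a few times before giving up; --fail makes curl return non-zero on HTTP errors.
+    for attempt in 1 2 3 4 5; do
+      response=$(curl --silent --fail -H Metadata:true --noproxy "*" \
+        "http://169.254.169.254/metadata/instance?api-version=2021-02-01") && break
+      sleep $(( attempt * 5 ))
+    done
+    echo "$response"
+    ```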
+
+- Would this work for scale set instances?
+ - Yes, IMDS is available for scale set instances.
+
+- I updated my tags in my scale sets, but they don't appear in the instances (unlike single instance VMs). Am I doing something wrong?
+  - Currently, tags for scale sets are only shown to the VM after a reboot, reimage, or disk change to the instance.
+
+- Why am I not seeing the SKU information for my VM in `instance/compute` details?
+  - For custom images created from Azure Marketplace images, the Azure platform doesn't retain the SKU information for the custom image or for any VMs created from it. This is by design, so the SKU isn't surfaced in the VM's `instance/compute` details.
+
+- Why does my call to the service time out?
+ - Metadata calls must be made from the primary IP address assigned to the primary network card of the VM. Additionally, if you've changed your routes, there must be a route for the 169.254.169.254/32 address in your VM's local routing table.
+
+ ### [Windows](#tab/windows/)
+
+ 1. Dump your local routing table and look for the IMDS entry. For example:
+ ```console
+ > route print
+ IPv4 Route Table
+ ===========================================================================
+ Active Routes:
+ Network Destination Netmask Gateway Interface Metric
+ 0.0.0.0 0.0.0.0 172.16.69.1 172.16.69.7 10
+ 127.0.0.0 255.0.0.0 On-link 127.0.0.1 331
+ 127.0.0.1 255.255.255.255 On-link 127.0.0.1 331
+ 127.255.255.255 255.255.255.255 On-link 127.0.0.1 331
+ 168.63.129.16 255.255.255.255 172.16.69.1 172.16.69.7 11
+ 169.254.169.254 255.255.255.255 172.16.69.1 172.16.69.7 11
+ ... (continues) ...
+ ```
+ 1. Verify that a route exists for `169.254.169.254`, and note the corresponding network interface (for example, `172.16.69.7`).
+ 1. Dump the interface configuration and find the interface that corresponds to the one referenced in the routing table, noting the MAC (physical) address.
+ ```console
+ > ipconfig /all
+ ... (continues) ...
+ Ethernet adapter Ethernet:
+
+ Connection-specific DNS Suffix . : xic3mnxjiefupcwr1mcs1rjiqa.cx.internal.cloudapp.net
+ Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter
+ Physical Address. . . . . . . . . : 00-0D-3A-E5-1C-C0
+ DHCP Enabled. . . . . . . . . . . : Yes
+ Autoconfiguration Enabled . . . . : Yes
+ Link-local IPv6 Address . . . . . : fe80::3166:ce5a:2bd5:a6d1%3(Preferred)
+ IPv4 Address. . . . . . . . . . . : 172.16.69.7(Preferred)
+ Subnet Mask . . . . . . . . . . . : 255.255.255.0
+ ... (continues) ...
+ ```
+ 1. Confirm that the interface corresponds to the VM's primary NIC and primary IP. You can find the primary NIC and IP by looking at the network configuration in the Azure portal, or by looking it up with the Azure CLI. Note the private IPs (and the MAC address if you're using the CLI). Here's a PowerShell CLI example:
+ ```powershell
+ $ResourceGroup = '<Resource_Group>'
+ $VmName = '<VM_Name>'
+ $NicNames = az vm nic list --resource-group $ResourceGroup --vm-name $VmName | ConvertFrom-Json | Foreach-Object { $_.id.Split('/')[-1] }
+ foreach($NicName in $NicNames)
+ {
+ $Nic = az vm nic show --resource-group $ResourceGroup --vm-name $VmName --nic $NicName | ConvertFrom-Json
+ Write-Host $NicName, $Nic.primary, $Nic.macAddress
+ }
+ # Output: wintest767 True 00-0D-3A-E5-1C-C0
+ ```
+ 1. If they don't match, update the routing table so that the primary NIC and IP are targeted.
+
+ ### [Linux](#tab/linux/)
+
+ 1. Dump your local routing table with a command such as `netstat -r` and look for the IMDS entry (e.g.):
+ ```console
+ ~$ netstat -r
+ Kernel IP routing table
+ Destination Gateway Genmask Flags MSS Window irtt Iface
+ default _gateway 0.0.0.0 UG 0 0 0 eth0
+ 168.63.129.16 _gateway 255.255.255.255 UGH 0 0 0 eth0
+ 169.254.169.254 _gateway 255.255.255.255 UGH 0 0 0 eth0
+ 172.16.69.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
+ ```
+ 1. Verify that a route exists for `169.254.169.254`, and note the corresponding network interface (e.g. `eth0`).
+ 1. Dump the interface configuration for the corresponding interface in the routing table (the exact name of the configuration file might vary):
+ ```console
+ ~$ cat /etc/netplan/50-cloud-init.yaml
+ network:
+ ethernets:
+ eth0:
+ dhcp4: true
+ dhcp4-overrides:
+ route-metric: 100
+ dhcp6: false
+ match:
+ macaddress: 00:0d:3a:e4:c7:2e
+ set-name: eth0
+ version: 2
+ ```
+ 1. If you're using a dynamic IP, note the MAC address. If you're using a static IP, you may note the listed IP(s) and/or the MAC address.
+ 1. Confirm that the interface corresponds to the VM's primary NIC and primary IP. You can find the primary NIC and IP by looking at the network configuration in the Azure portal, or by looking it up with the Azure CLI. Note the private IPs (and the MAC address if you're using the CLI). Here's a PowerShell CLI example:
+ ```powershell
+ $ResourceGroup = '<Resource_Group>'
+ $VmName = '<VM_Name>'
+ $NicNames = az vm nic list --resource-group $ResourceGroup --vm-name $VmName | ConvertFrom-Json | Foreach-Object { $_.id.Split('/')[-1] }
+ foreach($NicName in $NicNames)
+ {
+ $Nic = az vm nic show --resource-group $ResourceGroup --vm-name $VmName --nic $NicName | ConvertFrom-Json
+ Write-Host $NicName, $Nic.primary, $Nic.macAddress
+ }
+ # Output: ipexample606 True 00-0D-3A-E4-C7-2E
+ ```
+ 1. If they don't match, update the routing table such that the primary NIC/IP are targeted.
+
+
+
+- Failover clustering in Windows Server
+ - When you're querying IMDS with failover clustering, it's sometimes necessary to add a route to the routing table. Here's how:
+
+ 1. Open a command prompt with administrator privileges.
+
+ 1. Run the following command, and note the address of the Interface for Network Destination (`0.0.0.0`) in the IPv4 Route Table.
+
+ ```bat
+ route print
+ ```
+
+ > [!NOTE]
+ > The following example output is from a Windows Server VM with failover cluster enabled. For simplicity, the output contains only the IPv4 Route Table.
+
+ ```
+ IPv4 Route Table
+ ===========================================================================
+ Active Routes:
+ Network Destination Netmask Gateway Interface Metric
+ 0.0.0.0 0.0.0.0 10.0.1.1 10.0.1.10 266
+ 10.0.1.0 255.255.255.192 On-link 10.0.1.10 266
+ 10.0.1.10 255.255.255.255 On-link 10.0.1.10 266
+ 10.0.1.15 255.255.255.255 On-link 10.0.1.10 266
+ 10.0.1.63 255.255.255.255 On-link 10.0.1.10 266
+ 127.0.0.0 255.0.0.0 On-link 127.0.0.1 331
+ 127.0.0.1 255.255.255.255 On-link 127.0.0.1 331
+ 127.255.255.255 255.255.255.255 On-link 127.0.0.1 331
+ 169.254.0.0 255.255.0.0 On-link 169.254.1.156 271
+ 169.254.1.156 255.255.255.255 On-link 169.254.1.156 271
+ 169.254.255.255 255.255.255.255 On-link 169.254.1.156 271
+ 224.0.0.0 240.0.0.0 On-link 127.0.0.1 331
+ 224.0.0.0 240.0.0.0 On-link 169.254.1.156 271
+ 255.255.255.255 255.255.255.255 On-link 127.0.0.1 331
+ 255.255.255.255 255.255.255.255 On-link 169.254.1.156 271
+ 255.255.255.255 255.255.255.255 On-link 10.0.1.10 266
+ ```
+
+ Run the following command and use the address of the Interface for Network Destination (`0.0.0.0`), which is `10.0.1.10` in this example.
+
+ ```bat
+ route add 169.254.169.254/32 10.0.1.10 metric 1 -p
+ ```
+
+## Support
+
+If you aren't able to get a metadata response after multiple attempts, you can create a support issue in the Azure portal.
+
+## Product feedback
+
+You can provide product feedback and ideas in our user feedback channel under **Virtual Machines > Instance Metadata Service** [here](https://feedback.azure.com/d365community/forum/ec2f1827-be25-ec11-b6e6-000d3a4f0f1c?c=a60ebac8-c125-ec11-b6e6-000d3a4f0f1c).
+
+## Next steps
+
+- [Acquire an access token for the VM](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md)
+
+- [Scheduled events for Linux](./linux/scheduled-events.md)
+
+- [Scheduled events for Windows](./windows/scheduled-events.md)
+
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/instance-metadata-service.md
- Title: Azure Instance Metadata Service for Linux
-description: Learn about the Azure Instance Metadata Service and how it provides information about currently running virtual machine instances in Linux.
------ Previously updated : 04/16/2021----
-# Azure Instance Metadata Service (Linux)
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
virtual-machines Reserved Vm Instance Size Flexibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/reserved-vm-instance-size-flexibility.md
Last updated 04/06/2021
When you buy a Reserved VM Instance, you can choose to optimize for instance size flexibility or capacity priority. For more information about setting or changing the optimize setting for reserved VM instances, see [Change the optimize setting for reserved VM instances](../cost-management-billing/reservations/manage-reserved-vm-instance.md#change-optimize-setting-for-reserved-vm-instances).
-With a reserved virtual machine instance that's optimized for instance size flexibility, the reservation you buy can apply to the virtual machines (VMs) sizes in the same instance size flexibility group. For example, if you buy a reservation for a VM size that's listed in the DSv2 Series, like Standard_DS3_v2, the reservation discount can apply to the other sizes that are listed in that same instance size flexibility group:
+With a reserved virtual machine instance that's optimized for instance size flexibility, the reservation you buy can apply to the virtual machine (VM) sizes in the same instance size flexibility group. In other words, when you buy a reserved VM instance for any size within an instance size flexibility group, the reservation applies to all sizes within the group. For example, if you buy a reservation for a VM size that's listed in the DSv2 Series, like Standard_DS3_v2, the reservation discount can apply to the other sizes that are listed in that same instance size flexibility group:
- Standard_DS1_v2 - Standard_DS2_v2
virtual-machines Image Builder Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder-powershell.md
If you don't have an Azure subscription, [create a free account](https://azure.m
If you choose to use PowerShell locally, this article requires that you install the Azure PowerShell module and connect to your Azure account by using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet. For more information, see [Install Azure PowerShell](/powershell/azure/install-az-ps).
+Some of the steps require cmdlets from the [Az.ImageBuilder](https://www.powershellgallery.com/packages/Az.ImageBuilder) module. Install it separately by using the following command.
+
+```azurepowershell-interactive
+Install-Module -Name Az.ImageBuilder
+```
+ [!INCLUDE [cloud-shell-try-it](../../../includes/cloud-shell-try-it.md)] If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources
virtual-machines Instance Metadata Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/instance-metadata-service.md
- Title: Azure Instance Metadata Service for Windows
-description: Learn about the Azure Instance Metadata Service and how it provides information about currently running virtual machine instances in Windows.
------- Previously updated : 04/16/2021----
-# Azure Instance Metadata Service (Windows)
-
-**Applies to:** :heavy_check_mark: Windows VMs
-
virtual-machines Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-cli.md
- Previously updated : 08/09/2021 Last updated : 02/23/2023
To open the Cloud Shell, just select **Try it** from the upper right corner of a
## Create a resource group
-Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroup* in the *eastus* location:
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *myResourceGroupCLI* in the *West US 3* location. Replace the values of the variables as needed.
```azurecli-interactive
-az group create --name myResourceGroup --location eastus
+resourcegroup="myResourceGroupCLI"
+location="westus3"
+az group create --name $resourcegroup --location $location
``` ## Create virtual machine
-Create a VM with [az vm create](/cli/azure/vm). The following example creates a VM named *myVM*. This example uses *azureuser* for an administrative user name.
+Create a VM with [az vm create](/cli/azure/vm). The following example creates a VM named *myVM*. This example uses *azureuser* for an administrative user name. Replace the values of the variables as needed.
-You will need to supply a password that meets the [password requirements for Azure VMs](./faq.yml#what-are-the-password-requirements-when-creating-a-vm-
-).
+You'll be prompted to supply a password that meets the [password requirements for Azure VMs](./faq.yml#what-are-the-password-requirements-when-creating-a-vm-
+).
-Using the example below, you will be prompted to enter a password at the command line. You could also add the the `--admin-password` parameter with a value for your password. The user name and password will be used later, when you connect to the VM.
+Using the example below, you'll be prompted to enter a password at the command line. You could also add the `--admin-password` parameter with a value for your password. The user name and password will be used when you connect to the VM.
```azurecli-interactive
+vmname="myVM"
+username="azureuser"
az vm create \
- --resource-group myResourceGroup \
- --name myVM \
+ --resource-group $resourcegroup \
+ --name $vmname \
--image Win2022AzureEditionCore \ --public-ip-sku Standard \
- --admin-username azureuser
+ --admin-username $username
``` It takes a few minutes to create the VM and supporting resources. The following example output shows the VM create operation was successful.
It takes a few minutes to create the VM and supporting resources. The following
{ "fqdns": "", "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
- "location": "eastus",
+ "location": "westus3",
"macAddress": "00-0D-3A-23-9A-49", "powerState": "VM running", "privateIpAddress": "10.0.0.4", "publicIpAddress": "52.174.34.95",
- "resourceGroup": "myResourceGroup"
+ "resourceGroup": "myResourceGroupCLI"
+ "zones": ""
} ```
-Note your own `publicIpAddress` in the output from your VM. This address is used to access the VM in the next steps.
+Take note of your own `publicIpAddress` in the output when you create the VM. This IP address is used to access the VM later in this article.
## Install web server To see your VM in action, install the IIS web server. ```azurecli-interactive
-az vm run-command invoke -g MyResourceGroup -n MyVm --command-id RunPowerShellScript --scripts "Install-WindowsFeature -name Web-Server -IncludeManagementTools"
+az vm run-command invoke -g $resourcegroup \
+ -n $vmname \
+ --command-id RunPowerShellScript \
+ --scripts "Install-WindowsFeature -name Web-Server -IncludeManagementTools"
``` ## Open port 80 for web traffic
az vm run-command invoke -g MyResourceGroup -n MyVm --command-id RunPowerShellSc
By default, only RDP connections are opened when you create a Windows VM in Azure. Use [az vm open-port](/cli/azure/vm) to open TCP port 80 for use with the IIS web server: ```azurecli-interactive
-az vm open-port --port 80 --resource-group myResourceGroup --name myVM
+az vm open-port --port 80 --resource-group $resourcegroup --name $vmname
``` ## View the web server in action
With IIS installed and port 80 now open on your VM from the Internet, use a web
When no longer needed, you can use the [az group delete](/cli/azure/group) command to remove the resource group, VM, and all related resources: ```azurecli-interactive
-az group delete --name myResourceGroup
+az group delete --name $resourcegroup
``` ## Next steps
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureSignalR** | Azure SignalR. | Outbound | No | No | | **AzureSiteRecovery** | Azure Site Recovery.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory**, **AzureKeyVault**, **EventHub**,**GuestAndHybridManagement** and **Storage** tags. | Outbound | No | No | | **AzureSphere** | This tag or the IP addresses covered by this tag can be used to restrict access to Azure Sphere Security Services. | Both | No | Yes |
+| **AzureSpringCloud** | Allow traffic to applications hosted in Azure Spring Apps. | Outbound | No | Yes |
| **AzureStack** | Azure Stack Bridge services. <br/> This tag represents the Azure Stack Bridge service endpoint per region. | Outbound | No | Yes | | **AzureTrafficManager** | Azure Traffic Manager probe IP addresses.<br/><br/>For more information on Traffic Manager probe IP addresses, see [Azure Traffic Manager FAQ](../traffic-manager/traffic-manager-faqs.md). | Inbound | No | Yes | | **AzureUpdateDelivery** | For accessing Windows Updates. <br/><br/>**Note**: This tag provides access to Windows Update metadata services. To successfully download updates, you must also enable the **AzureFrontDoor.FirstParty** service tag and configure outbound security rules with the protocol and port defined as follows: <ul><li>AzureUpdateDelivery: TCP, port 443</li><li>AzureFrontDoor.FirstParty: TCP, port 80</li></ul> | Outbound | No | No | | **AzureWebPubSub** | AzureWebPubSub | Both | Yes | No | | **BatchNodeManagement** | Management traffic for deployments dedicated to Azure Batch. | Both | Yes | Yes | | **ChaosStudio** | Azure Chaos Studio. <br/><br/>**Note**: If you have enabled Application Insights integration on the Chaos Agent, the AzureMonitor tag is also required. | Both | Yes | No |
+| **CognitiveServicesFrontend** | The address ranges for traffic for Cognitive Services frontend portals. | Both | No | Yes |
| **CognitiveServicesManagement** | The address ranges for traffic for Azure Cognitive Services. | Both | No | No | | **DataFactory** | Azure Data Factory | Both | No | No | | **DataFactoryManagement** | Management traffic for Azure Data Factory. | Outbound | No | No |
web-application-firewall Waf Front Door Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-monitor.md
Front Door provides two types of logs: access logs and WAF logs.
::: zone pivot="front-door-standard-premium"
-The **FrontDoorAccessLog** includes all requests that go through Front Door. For more information on the Front Door access log, including the log schema, see [Azure Front Door logs](../../frontdoor/standard-premium/how-to-logs.md#access-log).
+The **FrontDoorAccessLog** includes all requests that go through Front Door. For more information on the Front Door access log, including the log schema, see [Monitor metrics and logs in Azure Front Door](../../frontdoor/front-door-diagnostics.md?pivot=front-door-standard-premium#access-log).
::: zone-end ::: zone pivot="front-door-classic"
-The **FrontdoorAccessLog** includes all requests that go through Front Door. For more information on the Front Door access log, including the log schema, see [Monitoring metrics and logs in Azure Front Door (classic)](../../frontdoor/front-door-diagnostics.md).
+The **FrontdoorAccessLog** includes all requests that go through Front Door. For more information on the Front Door access log, including the log schema, see [Monitoring metrics and logs in Azure Front Door (classic)](../../frontdoor/front-door-diagnostics.md?pivot=front-door-classic#diagnostic-logging).
::: zone-end