Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Use Scim To Provision Users And Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md | Use the general guidelines when implementing a SCIM endpoint to ensure compatibi ### /Schemas (Schema discovery): * [Sample request/response](#schema-discovery)-* Schema discovery isn't currently supported on the custom non-gallery SCIM application, but it's being used on certain gallery applications. Going forward, schema discovery will be used as the sole method to add more attributes to the schema of an existing gallery SCIM application. +* Schema discovery is being used on certain gallery applications. Schema discovery is the sole method to add more attributes to the schema of an existing gallery SCIM application. Schema discovery isn't currently supported on custom non-gallery SCIM application. * If a value isn't present, don't send null values. * Property values should be camel cased (for example, readWrite). * Must return a list response. The SCIM spec doesn't define a SCIM-specific scheme for authentication and autho |--|--|--|--| |Username and password (not recommended or supported by Azure AD)|Easy to implement|Insecure - [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984)|Not supported for new gallery or non-gallery apps.| |Long-lived bearer token|Long-lived tokens don't require a user to be present. They're easy for admins to use when setting up provisioning.|Long-lived tokens can be hard to share with an admin without using insecure methods such as email. |Supported for gallery and non-gallery apps. |-|OAuth authorization code grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid, and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.| -|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be automated, and new tokens can be silently requested without user interaction. ||Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth client credentials grant on non-gallery is in our backlog.| +|OAuth authorization code grant|Access tokens have a shorter life than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid, and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. 
However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.| +|OAuth client credentials grant|Access tokens have a shorter life than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be automated, and new tokens can be silently requested without user interaction. ||Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth client credentials grant on non-gallery is in our backlog.| > [!NOTE] > It's not recommended to leave the token field blank in the Azure AD provisioning configuration custom app UI. The token generated is primarily available for testing purposes. |
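The SCIM requirements summarized in this entry (return a list response, use camelCased property values, authenticate with a bearer token) can be spot-checked from the command line before connecting the Azure AD provisioning service. The following PowerShell sketch is illustrative only: the endpoint URL and token are placeholders, not values taken from the article.

```powershell
# Placeholder SCIM endpoint and long-lived bearer token - replace with your own values.
$scimBaseUrl = "https://scim.example.com/scim"
$secretToken = "<secret-token>"

$headers = @{ Authorization = "Bearer $secretToken" }

# Query /Users the same way the Azure AD provisioning service does when testing the connection.
$response = Invoke-RestMethod -Method Get -Uri "$scimBaseUrl/Users?startIndex=1&count=10" -Headers $headers

# A compliant endpoint must return a SCIM ListResponse, even when no resources match.
if ($response.schemas -contains "urn:ietf:params:scim:api:messages:2.0:ListResponse") {
    Write-Output "ListResponse received with $($response.totalResults) result(s)."
} else {
    Write-Warning "The response is not a SCIM ListResponse - review the endpoint implementation."
}
```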
active-directory | Concept Condition Filters For Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md | There are multiple scenarios that organizations can now enable using filter for Filter for devices is an option when creating a Conditional Access policy in the Azure portal or using the Microsoft Graph API. -> [!IMPORTANT] -> Device state and filter for devices cannot be used together in Conditional Access policy. - The following steps will help create two Conditional Access policies to support the first scenario under [Common scenarios](#common-scenarios). Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant. |
active-directory | Concept Conditional Access Grant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-grant.md | When user risk is detected, administrators can employ the user risk policy condi When a user is prompted to change a password, they'll first be required to complete multifactor authentication. Make sure all users have registered for multifactor authentication, so they're prepared in case risk is detected for their account. > [!WARNING]-> Users must have previously registered for self-service password reset before triggering the user risk policy. +> Users must have previously registered for multifactor authentication before triggering the user risk policy. The following restrictions apply when you configure a policy by using the password change control: |
active-directory | Concept Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policies.md | The software the user is employing to access the cloud app. For example, 'Browse The behavior of the client apps condition was updated in August 2020. If you have existing Conditional Access policies, they'll remain unchanged. However, if you select on an existing policy, the configure toggle has been removed and the client apps the policy applies to are selected. -#### Device state --This control is used to exclude devices that are hybrid Azure AD joined, or marked a compliant in Intune. This exclusion can be done to block unmanaged devices. - #### Filter for devices This control allows targeting specific devices based on their attributes in a policy. |
active-directory | Howto Conditional Access Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md | Sign-in frequency previously applied only to the first factor authentication ### User sign-in frequency and device identities -On Azure AD joined and hybrid Azure AD joined devices, unlocking the device, or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for PRT compared with the current timestamp must be within the time allotted in SIF policy for PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](/active-directory/devices/concept-azure-ad-register), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](../develop/scenario-desktop-acquire-token-wam.md) plugin can refresh a PRT during native application authentication using WAM. +On Azure AD joined and hybrid Azure AD joined devices, unlocking the device, or signing in interactively will only refresh the Primary Refresh Token (PRT) every 4 hours. The last refresh timestamp recorded for PRT compared with the current timestamp must be within the time allotted in SIF policy for PRT to satisfy SIF and grant access to a PRT that has an existing MFA claim. On [Azure AD registered devices](/azure/active-directory/devices/concept-azure-ad-register), unlock/sign-in would not satisfy the SIF policy because the user is not accessing an Azure AD registered device via an Azure AD account. However, the [Azure AD WAM](../develop/scenario-desktop-acquire-token-wam.md) plugin can refresh a PRT during native application authentication using WAM. Note: The timestamp captured from user log-in is not necessarily the same as the last recorded timestamp of PRT refresh because of the 4-hour refresh cycle. The case when it is the same is when a PRT has expired and a user log-in refreshes it for 4 hours. In the following examples, assume SIF policy is set to 1 hour and PRT is refreshed at 00:00. We factor for five minutes of clock skew, so that we don't prompt users more o ## Next steps -* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md). +* If you're ready to configure Conditional Access policies for your environment, see the article [Plan a Conditional Access deployment](plan-conditional-access.md). |
active-directory | Resilience Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md | When resilience defaults are disabled, the Backup Authentication Service won't u ## Testing resilience defaults -It isn't possible to conduct a dry run using the Backup Authentication Service or simulate the result of a policy with resilience defaults enabled or disabled at this time. Azure AD will conduct monthly exercises using the Backup Authentication Service. The sign-in logs will display if the Backup Authentication Service was used to issue the access token. +It isn't possible to conduct a dry run using the Backup Authentication Service or simulate the result of a policy with resilience defaults enabled or disabled at this time. Azure AD will conduct monthly exercises using the Backup Authentication Service. The sign-in logs will display if the Backup Authentication Service was used to issue the access token. In **Azure portal** > **Monitoring** > **Sign-in Logs** blade, you can add the filter "Token issuer type == Azure AD Backup Auth" to display the logs processed by Azure AD Backup Authentication service. ## Configuring resilience defaults |
active-directory | Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md | These differences make workload identities harder to manage and put them at high > [!IMPORTANT] > Workload Identities Premium licenses are required to create or modify Conditional Access policies scoped to service principals. -> In directories without appropriate licenses, Conditional Access policies created prior to the release of Workload Identities Premium will be available for deletion only. +> In directories without appropriate licenses, existing Conditional Access policies for workload identities will continue to function, but can't be modified. For more information see [Microsoft Entra Workload Identities](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-identities#office-StandaloneSKU-k3hubfz).   > [!NOTE] > Policy can be applied to single tenant service principals that have been registered in your tenant. Third party SaaS and multi-tenanted apps are out of scope. Managed identities are not covered by policy. |
active-directory | Howto Vm Sign In Azure Ad Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md | Now that you've created the VM, you need to configure an Azure RBAC policy to de To allow a user to log in to the VM over RDP, you must assign the Virtual Machine Administrator Login or Virtual Machine User Login role to the resource group that contains the VM and its associated virtual network, network interface, public IP address, or load balancer resources. +> [!NOTE] +> Manually elevating a user to become a local administrator on the VM by adding the user as a member of the local administrators group or by running the `net localgroup administrators /add "AzureAD\UserUpn"` command is not supported. You need to use the Azure roles above to authorize VM login. + An Azure user who has the Owner or Contributor role assigned for a VM does not automatically have privileges to log in to the VM over RDP. The reason is to provide audited separation between the set of people who control virtual machines and the set of people who can access virtual machines. There are two ways to configure role assignments for a VM: |
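Because local-group elevation isn't supported, the sign-in right has to be granted through Azure RBAC. The following Az PowerShell sketch shows one way to do that; the resource group, VM name, and UPN are placeholders, and it assumes the signed-in account can create role assignments.

```powershell
# Sign in with an account that can create role assignments (for example, Owner or User Access Administrator).
Connect-AzAccount

# Placeholder values - replace with your resource group, VM, and user.
$resourceGroup = "myResourceGroup"
$vmName        = "myVM"
$userUpn       = "alice@contoso.com"

$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName

# Grant RDP sign-in rights by assigning the Azure role at the VM scope,
# instead of adding the user to the local administrators group.
New-AzRoleAssignment -SignInName $userUpn `
    -RoleDefinitionName "Virtual Machine Administrator Login" `
    -Scope $vm.Id
```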
active-directory | B2b Government National Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-government-national-clouds.md | -Microsoft Azure [national clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration isn't enabled by default across national cloud boundaries, but you can use Microsoft cloud settings (preview) to establish mutual B2B collaboration between the following Microsoft Azure clouds: +Microsoft Azure [national clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration isn't enabled by default across national cloud boundaries, but you can use Microsoft cloud settings to establish mutual B2B collaboration between the following Microsoft Azure clouds: - Microsoft Azure global cloud and Microsoft Azure Government - Microsoft Azure global cloud and Microsoft Azure China 21Vianet ## B2B collaboration across Microsoft clouds -To set up B2B collaboration between tenants in different clouds, both tenants need to configure their Microsoft cloud settings to enable collaboration with the other cloud. Then each tenant must configure inbound and outbound cross-tenant access with the tenant in the other cloud. For details, see [Microsoft cloud settings (preview)](cross-cloud-settings.md). +To set up B2B collaboration between tenants in different clouds, both tenants need to configure their Microsoft cloud settings to enable collaboration with the other cloud. Then each tenant must configure inbound and outbound cross-tenant access with the tenant in the other cloud. For details, see [Microsoft cloud settings](cross-cloud-settings.md). ## B2B collaboration within the Microsoft Azure Government cloud |
active-directory | Cross Cloud Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md | -# Configure Microsoft cloud settings for B2B collaboration (Preview) --> [!NOTE] -> Microsoft cloud settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +# Configure Microsoft cloud settings for B2B collaboration When Azure AD organizations in separate Microsoft Azure clouds need to collaborate, they can use Microsoft cloud settings to enable Azure AD B2B collaboration. B2B collaboration is available between the following global and sovereign Microsoft Azure clouds: In your Microsoft cloud settings, enable the Microsoft Azure cloud you want to c 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service. 1. Select **External Identities**, and then select **Cross-tenant access settings**.-1. Select **Microsoft cloud settings (Preview)**. +1. Select **Microsoft cloud settings**. 1. Select the checkboxes next to the external Microsoft Azure clouds you want to enable.  |
active-directory | Cross Tenant Access Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md | You can configure organization-specific settings by adding an organization and m ### Automatic redemption setting > [!IMPORTANT]-> Automatic redemption is currently in PREVIEW. +> Automatic redemption is currently in preview. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. [!INCLUDE [automatic-redemption-include](../includes/automatic-redemption-include.md)] For more information, see [Configure cross-tenant synchronization](../multi-tena ### Cross-tenant synchronization setting > [!IMPORTANT]-> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in PREVIEW. +> [Cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-overview.md) is currently in preview. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. [!INCLUDE [cross-tenant-synchronization-include](../includes/cross-tenant-synchronization-include.md)] To configure this setting using Microsoft Graph, see the [Update crossTenantIden ## Microsoft cloud settings -> [!NOTE] -> Microsoft cloud settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - Microsoft cloud settings let you collaborate with organizations from different Microsoft Azure clouds. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following clouds: - Microsoft Azure commercial cloud and Microsoft Azure Government To set up B2B collaboration, both organizations configure their Microsoft cloud > [!NOTE] > B2B direct connect is not supported for collaboration with Azure AD tenants in a different Microsoft cloud. -For configuration steps, see [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md). +For configuration steps, see [Configure Microsoft cloud settings for B2B collaboration](cross-cloud-settings.md). ### Default settings in cross-cloud scenarios To collaborate with a partner tenant in a different Microsoft Azure cloud, both Several tools are available to help you identify the access your users and partners need before you set inbound and outbound access settings. To ensure you don't remove access that your users and partners need, you should examine current sign-in behavior. Taking this preliminary step will help prevent loss of desired access for your end users and partner users. However, in some cases these logs are only retained for 30 days, so we strongly recommend you speak with your business stakeholders to ensure required access isn't lost. -> [!NOTE] -> During the preview of Microsoft cloud settings, sign-in events for cross-cloud scenarios will be reported in the resource tenant, but not in the home tenant. 
- ### Cross-tenant sign-in activity PowerShell script To review user sign-in activity associated with external tenants, use the [cross-tenant user sign-in activity](https://aka.ms/cross-tenant-signins-ps) PowerShell script. For example, to view all available sign-in events for inbound activity (external users accessing resources in the local tenant) and outbound activity (local users accessing resources in an external tenant), run the following command: |
active-directory | External Identities Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-identities-overview.md | For more information, see [Cross-tenant access in Azure AD External Identities]( Azure AD has a new feature for multi-tenant organizations called cross-tenant synchronization (preview), which allows for a seamless collaboration experience across Azure AD tenants. Cross-tenant synchronization settings are configured under the **Organization-specific access settings**. To learn more about multi-tenant organizations and cross-tenant synchronization see the [Multi-tenant organizations documentation](../multi-tenant-organizations/index.yml). -### Microsoft cloud settings for B2B collaboration (preview) +### Microsoft cloud settings for B2B collaboration Microsoft Azure cloud services are available in separate national clouds, which are physically isolated instances of Azure. Increasingly, organizations are finding the need to collaborate with organizations and users across global cloud and national cloud boundaries. With Microsoft cloud settings, you can establish mutual B2B collaboration between the following Microsoft Azure clouds: |
active-directory | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md | As of November 18, 2019, guest users in your directory (defined as user accounts Within the Azure US Government cloud, B2B collaboration is enabled between tenants that are both within Azure US Government cloud and that both support B2B collaboration. If you invite a user in a tenant that doesn't yet support B2B collaboration, you'll get an error. For details and limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2). -If you need to collaborate with an Azure AD organization that's outside of the Azure US Government cloud, you can use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to enable B2B collaboration. +If you need to collaborate with an Azure AD organization that's outside of the Azure US Government cloud, you can use [Microsoft cloud settings](cross-cloud-settings.md) to enable B2B collaboration. ## Invitation is blocked due to cross-tenant access policies |
active-directory | What Is B2b | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md | B2B collaboration is enabled by default, but comprehensive admin settings let yo - Use [external collaboration settings](external-collaboration-settings-configure.md) to define who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory. -- Use [Microsoft cloud settings (preview)](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](../../azure-government/index.yml) or [Microsoft Azure China 21Vianet](/azure/china).+- Use [Microsoft cloud settings](cross-cloud-settings.md) to establish mutual B2B collaboration between the Microsoft Azure global cloud and [Microsoft Azure Government](../../azure-government/index.yml) or [Microsoft Azure China 21Vianet](/azure/china). ## Easily invite guest users from the Azure AD portal |
active-directory | 10 Secure Local Guest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md | Title: Convert local guests into Azure AD B2B guest accounts -description: Learn how to convert local guests into Azure AD B2B guest accounts + Title: Convert local guest accounts to Azure AD B2B guest accounts +description: Learn to convert local guests into Azure AD B2B guest accounts by identifying apps and local guest accounts, migration, and more. Previously updated : 11/03/2022 Last updated : 02/22/2023 -# Convert local guests into Azure Active Directory B2B guest accounts +# Convert local guest accounts to Azure Active Directory B2B guest accounts -Azure Active Directory (Azure AD B2B) allows external users to collaborate using their own identities. However, it isn't uncommon for organizations to issue local usernames and passwords to external users. This approach isn't recommended as the bring-your-own-identity (BYOI) capabilities provided -by Azure AD B2B to provide better security, lower cost, and reduce -complexity when compared to local account creation. Learn more -[here.](./secure-external-access-resources.md) +With Azure Active Directory B2B (Azure AD B2B), external users collaborate using their own identities. Although organizations can issue local usernames and passwords to external users, this approach isn't recommended. Azure AD B2B has improved security, lower cost, and less complexity, compared to creating local accounts. In addition, if your organization issues local credentials that external users manage, you can use Azure AD B2B instead. Use the guidance in this document to make the transition. -If your organization currently issues local credentials that external users have to manage and would like to migrate to using Azure AD B2B instead, this document provides a guide to make the transition as seamlessly as possible. +Learn more: [Plan an Azure AD B2B collaboration deployment](secure-external-access-resources.md) ## Identify external-facing applications -Before migrating local accounts to Azure AD B2B, admins should understand what applications and workloads these external users need to access. For example, if external users need access to an application that is hosted on-premises, admins will need to validate that the application is integrated with Azure AD and that a provisioning process is implemented to provision the user from Azure AD to the application. -The existence and use of on-premises applications could be a reason why local accounts are created in the first place. Learn more about -[provisioning B2B guests to on-premises -applications.](../external-identities/hybrid-cloud-to-on-premises.md) +Before migrating local accounts to Azure AD B2B, confirm the applications and workloads external users can access. For example, for applications hosted on-premises, validate the application is integrated with Azure AD. On-premises applications are a good reason to create local accounts. -All external-facing applications should have single-sign on (SSO) and provisioning integrated with Azure AD for the best end user experience. +Learn more: [Grant B2B users in Azure AD access to your on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md) ++We recommend that external-facing applications have single sign-on (SSO) and provisioning integrated with Azure AD for the best end user experience. 
## Identify local guest accounts -Admins will need to identify which accounts should be migrated to Azure AD B2B. External identities in Active Directory should be easily identifiable, which can be done with an attribute-value pair. For example, making ExtensionAttribute15 = `External` for all external users. If these users are being provisioned via Azure AD Connect or Cloud Sync, admins can optionally configure these synced external users -to have the `UserType` attributes set to `Guest`. If these users are being -provisioned as cloud-only accounts, admins can directly modify the -users' attributes. What is most important is being able to identify the -users who you want to convert to B2B. +Identify the accounts to be migrated to Azure AD B2B. External identities in Active Directory are identifiable with an attribute-value pair. For example, making ExtensionAttribute15 = `External` for external users. If these users are set up with Azure AD Connect or Cloud Sync, configure synced external users to have the `UserType` attributes set to `Guest`. If the users are set up as cloud-only accounts, you can modify user attributes. Primarily, identify users to convert to B2B. ## Map local guest accounts to external identities -Once you've identified which external user accounts you want to -convert to Azure AD B2B, you need to identify the BYOI identities or external emails for each user. For example, admins will need to identify that the local account (v-Jeff@Contoso.com) is a user whose home identity/email address is Jeff@Fabrikam.com. How to identify the home identities is up to the organization, but some examples include: --- Asking the external user's sponsor to provide the information.--- Asking the external user to provide the information.+Identify user identities or external emails. Confirm that the local account (v-lakshmi@contoso.com) is a user with the home identity and email address: lakshmi@fabrikam.com. To identify home identities: -- Referring to an internal database if this information is already known and stored by the organization.+- The external user's sponsor provides the information +- The external user provides the information +- Refer to an internal database, if the information is known and stored -Once the mapping of each external local account to the BYOI identity is done, admins will need to add the external identity/email to the user.mail attribute on each local account. +After mapping external local accounts to identities, add external identities or email to the user.mail attribute on local accounts. ## End user communications -External users should be notified that the migration will be taking place and when it will happen. Ensure you communicate the expectation that external users will stop using their existing password and post-migration will authenticate with their own home/corporate credentials going forward. Communications can include email campaigns, posters, and announcements. +Notify external users about migration timing. Communicate expectations, such as when external users must stop using a current password to enable authentication with home and corporate credentials. Communications can include email campaigns and announcements. 
## Migrate local guest accounts to Azure AD B2B -Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account.](../external-identities/invite-internal-users.md) -This can be done in the UX or programmatically via PowerShell or the Microsoft Graph API. Once complete, the users will no longer -authenticate with their local password, but will instead authenticate with their home identity/email that was populated in the user.mail attribute. You've successfully migrated to Azure AD B2B. +After local accounts have user.mail attributes populated with the external identity and email, convert local accounts to Azure AD B2B by inviting the local account. You can use PowerShell or the Microsoft Graph API. -## Post-migration considerations +Learn more: [Invite internal users to B2B collaboration](../external-identities/invite-internal-users.md) -If local accounts for external users were being synced from on-premises, admins should take steps to reduce their on-premises footprint and use cloud-native B2B guest accounts moving forward. Some possible actions can include: +## Post-migration considerations -- Transition existing local accounts for external users to Azure AD B2B and stop creating local accounts. Post-migration, admins should invite external users natively in Azure AD.+If external user local accounts were synced from on-premises, reduce their on-premises footprint and use B2B guest accounts. You can: -- Randomize the passwords of existing local accounts for external users to ensure they can't authenticate locally to on-premises resources. This will increase security by ensuring that authentication and user lifecycle is tied to the external user's home identity.+- Transition external user local accounts to Azure AD B2B and stop creating local accounts + - Invite external users in Azure AD +- Randomize external user's local-account passwords to prevent authentication to on-premises resources + - This action ensures authentication and user lifecycle is connected to the external user home identity ## Next steps See the following articles on securing external access to resources. We recommen 1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md) 1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md) 1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)-1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here) +1. [Convert local guest accounts to B2B](10-secure-local-guest.md) (You're here) |
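The migration described in this entry (populate user.mail, then invite the existing local account) can be scripted with Microsoft Graph PowerShell. The sketch below is an illustration under stated assumptions: the object ID and email address are placeholders, and the invitation request body is modeled on the linked invite-internal-users article, so confirm its shape there before running it.

```powershell
Connect-MgGraph -Scopes "User.ReadWrite.All", "User.Invite.All"

# Placeholder values - the local account's object ID and the user's external (home) email address.
$localAccountId = "00000000-0000-0000-0000-000000000000"
$externalEmail  = "lakshmi@fabrikam.com"

# Step 1: map the local account to its external identity through the mail attribute.
Update-MgUser -UserId $localAccountId -Mail $externalEmail

# Step 2: invite the existing internal account so it redeems as an Azure AD B2B guest
# (request body modeled on the invite-internal-users article - verify before use).
$invitation = @{
    invitedUserEmailAddress = $externalEmail
    inviteRedirectUrl       = "https://myapps.microsoft.com"
    sendInvitationMessage   = $true
    invitedUser             = @{ id = $localAccountId }
}
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/invitations" -Body $invitation
```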
active-directory | 7 Secure Access Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/7-secure-access-conditional-access.md | Title: Manage external access with Azure Active Directory Conditional Access -description: How to use Azure Active Directory Conditional Access policies to secure external access to resources. + Title: Manage external access to resources with Conditional Access +description: Learn to use Conditional Access policies to secure external access to resources. -# Manage external access with Conditional Access policies -[Conditional Access](../conditional-access/overview.md) is the tool Azure AD uses to bring together signals, enforce policies, and determine whether a user should be allowed access to resources. For detailed information on how to create and use Conditional Access policies (Conditional Access policies), see [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md). +# Manage external access to resources with Conditional Access policies - +Conditional Access interprets signals, enforces policies, and determines if a user is granted access to resources. In this article, learn about applying Conditional Access policies to external users. The article assumes you might not have access to entitlement management, a feature you can use with Conditional Access. -This article discusses applying Conditional Access policies to external users and assumes you don't have access to [Entitlement Management](../governance/entitlement-management-overview.md) functionality. Conditional Access policies can be and are used alongside Entitlement Management. +Learn more: -Earlier in this document set, you [created a security plan](3-secure-access-plan.md) that outlined: +* [What is Conditional Access?](../conditional-access/overview.md) +* [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md) +* [What is entitlement management?](../governance/entitlement-management-overview.md) -* Applications and resources have the same security requirements and can be grouped for access. -* Sign-in requirements for external users. +The following diagram illustrates signals to Conditional Access that trigger access processes. -You'll use that plan to create your Conditional Access policies for external access. +  ++## Align a security plan with Conditional Access policies ++In the third article, in the set of 10 articles, there's guidance on creating a security plan. Use that plan to help create Conditional Access policies for external access. Part of the security plan includes: ++* Grouped applications and resources for simplified access +* Sign-in requirements for external users > [!IMPORTANT]-> Create several internal and external user test accounts so that you can test the policies you create before applying them. +> Create internal and external user test accounts to test policies before applying them. ++See article three, [Create a security plan for external access to resources](3-secure-access-plan.md) ## Conditional Access policies for external access -The following are best practices related to governing external access with Conditional Access policies. +The following sections are best practices for governing external access with Conditional Access policies. ++### Entitlement management or groups ++If you can't use connected organizations in entitlement management, create an Azure AD security group, or Microsoft 365 Group for partner organizations. 
Assign users from that partner to the group. You can use the groups in Conditional Access policies. ++Learn more: ++* [What is entitlement management?](../governance/entitlement-management-overview.md) +* [Manage Azure Active Directory groups and group membership](how-to-manage-groups.md) +* [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups?view=o365-worldwide&preserve-view=true) ++### Conditional Access policy creation ++Create as few Conditional Access policies as possible. For applications that have the same access requirements, add them to the same policy. ++Conditional Access policies apply to a maximum of 250 applications. If more than 250 applications have the same access requirement, create duplicate policies. For instance, Policy A applies to apps 1-250, Policy B applies to apps 251-500, etc. ++### Naming convention ++Use a naming convention that clarifies policy purpose. External access examples are: ++* ExternalAccess_actiontaken_AppGroup +* ExternalAccess_Block_FinanceApps -## Block external users from resources +## Block external users from resources -You can block external users from accessing specific sets of resources with Conditional Access policies. Once you've determined the set of resources to which you want to block access, create a policy. +You can block external users from accessing resources with Conditional Access policies. -* If you can't use connected organizations in Entitlement Management, create an Azure AD security group or Microsoft 365 group for each partner organization you work with. Assign all users from that partner to the group. You may then use those groups in Conditional Access policies. +1. Sign in to the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator. +2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +3. Select **New policy**. +4. Enter a policy name. +5. Under **Assignments**, select **Users or workload identities**. +6. Under **Include**, select **All guests and external users**. +7. Under **Exclude**, select **Users and groups**. +8. Select emergency access accounts. +9. Select **Done**. +10. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. +11. Under **Exclude**, select applications you want to exclude. +12. Under **Access controls** > **Grant**, select **Block access**. +13. Select **Select**. +14. Select **Enable policy** to **Report-only**. +15. Select **Create**. -* Create as few Conditional Access policies as possible. For applications that have the same access needs, add them all to the same policy. +> [!NOTE] +> You can confirm settings in **report only** mode. See, Configure a Conditional Access policy in report-only mode, in [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md). -* Clearly name policies specific to external access with a naming convention. One naming convention is *ExternalAccess_actiontaken_AppGroup*. For example a policy for external access that blocks access to finance apps, called ExternalAccess_Block_FinanceApps. +Learn more: [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md) -## Block all external users from resources +### Allow external access to specific external users -You can block external users from accessing specific sets of resources with Conditional Access policies. Once you've determined the set of resources to which you want to block access, create a policy. 
+### Allow external access to specific external users -To create a policy that blocks access for external users to a set of applications: +There are scenarios when it's necessary to allow access for a small, specific group. -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. -1. Select **New policy**. -1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_FinanceApps. -1. Under **Assignments**, select **Users or workload identities**. - 1. Under **Include**, select **All guests and external users**. - 1. Under **Exclude**, select **Users and groups** and choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md). - 1. Select **Done**. -1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. - 1. Under **Exclude**, select any applications that shouldnΓÇÖt be blocked. -1. Under **Access controls** > **Grant**, select **Block access**, and choose **Select**. -1. Confirm your settings and set **Enable policy** to **Report-only**. -1. Select **Create** to create to enable your policy. +Before you begin, we recommend you create a security group, which contains external users who access resources. See, [Quickstart: Create a group with members and view all groups and members in Azure AD](active-directory-groups-view-azure-portal.md). -After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**. +1. Sign in to the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator. +2. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +3. Select **New policy**. +4. Enter a policy name. +5. Under **Assignments**, select **Users or workload identities**. +6. Under **Include**, select **All guests and external users**. +7. Under **Exclude**, select **Users and groups** +8. Select emergency access accounts. +9. Select the external users security group. +10. Select **Done**. +11. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. +12. Under **Exclude**, select applications you want to exclude. +13. Under **Access controls** > **Grant**, select **Block access**. +14. Select **Select**. +15. Select **Create**. -### Block external access to all except specific external users +> [!NOTE] +> You can confirm settings in **report only** mode. See, Configure a Conditional Access policy in repory-only mode, in [Conditional Access insights and reporting](../conditional-access/howto-conditional-access-insights-reporting.md). -There may be times you want to block external users except a specific group. For example, you may want to block all external users except those working for the finance team from the finance applications. To do this [Create a security group](active-directory-groups-create-azure-portal.md) to contain the external users who should access the finance applications: +Learn more: [Manage emergency access accounts in Azure AD](../roles/security-emergency-access.md) -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. 
Browse to **Azure Active Directory** > **Security** > **Conditional Access**. -1. Select **New policy**. -1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies, for example ExternalAccess_Block_AllButFinance. -1. Under **Assignments**, select **Users or workload identities**. - 1. Under **Include**, select **All guests and external users**. - 1. Under **Exclude**, select **Users and groups**, - 1. Choose your organization's [emergency access or break-glass accounts](../roles/security-emergency-access.md). - 1. Choose the security group of external users you want to exclude from being blocked from specific applications. - 1. Select **Done**. -1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. - 1. Under **Exclude**, select the finance applications that shouldn't be blocked. -1. Under **Access controls** > **Grant**, select **Block access**, and choose **Select**. -1. Confirm your settings and set **Enable policy** to **Report-only**. -1. Select **Create** to create to enable your policy. +### Service provider access -After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**. +Conditional Access policies for external users might interfere with service provider access, for example granular delegated admin privileges. -### External partner access +Learn more: [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction) -Conditional Access policies that target external users may interfere with service provider access, for example granular delegated admin privileges [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction). +## Conditional Access templates -## Implement Conditional Access +Conditional Access templates are a convenient method to deploy new policies aligned with Microsoft recommendations. These templates provide protection aligned with commonly used policies across various customer types and locations. -Many common Conditional Access policies are documented. See the article [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md) for other common policies you may want to adapt for external users. +Learn more: [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md) ## Next steps |
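The portal steps quoted in this entry can also be expressed as a Microsoft Graph PowerShell call, which helps when the same external-access policy is needed in several tenants. This is a sketch, not the article's own procedure: the break-glass group ID is a placeholder, and the policy is created in report-only mode as the steps recommend; verify the request shape against the current conditionalAccessPolicy schema.

```powershell
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Placeholder: object ID of the group that contains your emergency access (break-glass) accounts.
$breakGlassGroupId = "00000000-0000-0000-0000-000000000000"

$policy = @{
    displayName = "ExternalAccess_Block_FinanceApps"
    state       = "enabledForReportingButNotEnforced"   # report-only while you validate the effect
    conditions  = @{
        users = @{
            includeUsers  = @("GuestsOrExternalUsers")   # all guests and external users
            excludeGroups = @($breakGlassGroupId)
        }
        applications = @{
            includeApplications = @("All")
        }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("block")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```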
active-directory | Concept Workload Identity Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md | These differences make workload identities harder to manage and put them at high > [!IMPORTANT] > Detections are visible only to [Workload Identities Premium](https://www.microsoft.com/security/business/identity-access/microsoft-entra-workload-identities#office-StandaloneSKU-k3hubfz) customers. Customers without Workload Identities Premium licenses still receive all detections but the reporting of details is limited. +> [!NOTE] +> Identity Protection detects risk on single tenant, third party SaaS, and multi-tenant apps. Managed Identities are not currently in scope. + ## Prerequisites To make use of workload identity risk, including the new **Risky workload identities** blade and the **Workload identity detections** tab in the **Risk detections** blade in the portal, you must have the following. |
active-directory | Howto Identity Protection Configure Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-notifications.md | Azure AD Identity Protection sends two types of automated notification emails to This article provides you with an overview of both notification emails. -We don't support sending emails to users in group-assigned roles. + > [!Note] + > **We don't support sending emails to users in group-assigned roles.** ## Users at risk detected email |
active-directory | Howto Identity Protection Configure Risk Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md | Configured trusted [network locations](../conditional-access/location-condition. ### Risk remediation -Organizations can choose to block access when risk is detected. Blocking sometimes stops legitimate users from doing what they need to. A better solution is to allow self-remediation using Azure AD multifactor authentication (MFA) and secure self-service password reset (SSPR). +Organizations can choose to block access when risk is detected. Blocking sometimes stops legitimate users from doing what they need to. A better solution is to allow self-remediation using Azure AD multifactor authentication (MFA) and secure password change. > [!WARNING]-> Users must register for Azure AD MFA and SSPR before they face a situation requiring remediation. Users not registered are blocked and require administrator intervention. +> Users must register for Azure AD MFA before they face a situation requiring remediation. For hybrid users that are synced from on-premises to cloud, password writeback must have been enabled on them. Users not registered are blocked and require administrator intervention. > -> Password change (I know my password and want to change it to something new) outside of the risky user policy remediation flow does not meet the requirement for secure password reset. +> Password change (I know my password and want to change it to something new) outside of the risky user policy remediation flow does not meet the requirement for secure password change. ### Microsoft's recommendation Microsoft recommends the below risk policy configurations to protect your organization: - User risk policy- - Require a secure password reset when user risk level is **High**. Azure AD MFA is required before the user can create a new password with SSPR to remediate their risk. + - Require a secure password change when user risk level is **High**. Azure AD MFA is required before the user can create a new password with password writeback to remediate their risk. - Sign-in risk policy - Require Azure AD MFA when sign-in risk level is **Medium** or **High**, allowing users to prove it's them by using one of their registered authentication methods, remediating the sign-in risk. -Requiring access control when risk level is low will introduce more user interrupts. Choosing to block access rather than allowing self-remediation options, like secure password reset and multifactor authentication, will impact your users and administrators. Weigh these choices when configuring your policies. +Requiring access control when risk level is low will introduce more user interrupts. Choosing to block access rather than allowing self-remediation options, like secure password change and multifactor authentication, will impact your users and administrators. Weigh these choices when configuring your policies. ## Exclusions |
active-directory | Application Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-properties.md | The homepage URL can't be edited within enterprise applications. The homepage UR This is the application logo that users see on the My Apps portal and the Office 365 application launcher. Administrators also see the logo in the Azure AD gallery. -Custom logos must be exactly 215x215 pixels in size and be in the PNG format. You should use a solid color background with no transparency in your application logo. The central image dimensions should be 94x94 pixels and the logo file size can't be over 100 KB. +Custom logos must be exactly 215x215 pixels in size and be in the PNG format. You should use a solid color background with no transparency in your application logo. The logo file size can't be over 100 KB. ## Application ID |
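Because the portal rejects logos that don't meet these constraints, it can save a round trip to check the file locally first. A minimal sketch, assuming Windows PowerShell with System.Drawing available and a placeholder file path:

```powershell
# Placeholder path to the logo you plan to upload.
$logoPath = "C:\branding\app-logo.png"

Add-Type -AssemblyName System.Drawing
$image  = [System.Drawing.Image]::FromFile($logoPath)
$fileKB = (Get-Item $logoPath).Length / 1KB

if ($image.Width -eq 215 -and $image.Height -eq 215 -and
    $fileKB -le 100 -and
    [System.IO.Path]::GetExtension($logoPath) -eq ".png") {
    Write-Output "Logo meets the 215x215 PNG and 100 KB requirements."
} else {
    Write-Warning ("Logo is {0}x{1} pixels and {2:N0} KB - adjust it before uploading." -f $image.Width, $image.Height, $fileKB)
}
$image.Dispose()
```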
active-directory | Configure Admin Consent Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md | To enable the admin consent workflow and choose reviewers: 1. Search for and select **Azure Active Directory**. 1. Select **Enterprise applications**. 1. Under **Security**, select **Consent and permissions**.-1. Under **Manage**, select **Admin consent settings**. -Under **Admin consent requests**, select **Yes** for **Users can request admin consent to apps they are unable to consent to** . +1. Under **Manage**, select **Admin consent settings**. Under **Admin consent requests**, select **Yes** for **Users can request admin consent to apps they are unable to consent to** .  |
active-directory | Grant Admin Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md | When granting tenant-wide admin consent using either method described above, a w The tenant-wide admin consent URL follows the following format: ```http-https://login.microsoftonline.com/{tenant-id}/adminconsent?client_id={client-id} +https://login.microsoftonline.com/{organization}/adminconsent?client_id={client-id} ``` where: - `{client-id}` is the application's client ID (also known as app ID).-- `{tenant-id}` is your organization's tenant ID or any verified domain name.+- `{organization}` is the tenant ID or any verified domain name of the tenant you want to consent the application in. You can use the value `common`, which will cause the consent to happen in the home tenant of the user you sign in with. As always, carefully review the permissions an application requests before granting consent. |
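For a quick test of the URL format documented in this entry, you can assemble and launch it from PowerShell. The tenant value and client ID below are placeholders, not values from the article.

```powershell
# Placeholder values - substitute a tenant ID, a verified domain name, or "common", plus the app's client ID.
$organization = "contoso.onmicrosoft.com"
$clientId     = "11111111-1111-1111-1111-111111111111"

$adminConsentUrl = "https://login.microsoftonline.com/$organization/adminconsent?client_id=$clientId"

# Opens the tenant-wide admin consent prompt in the default browser.
Start-Process $adminConsentUrl
```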
active-directory | Manage Application Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md | $spOAuth2PermissionsGrants | ForEach-Object { } # Get all application permissions for the service principal-$spApplicationPermissions = Get-AzureADServicePrincipalAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" } +$spApplicationPermissions = Get-AzureADServiceAppRoleAssignedTo -ObjectId $sp.ObjectId -All $true | Where-Object { $_.PrincipalType -eq "ServicePrincipal" } # Remove all application permissions $spApplicationPermissions | ForEach-Object { |
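The ForEach-Object bodies are truncated in the snippet above. For reference, a typical cleanup loop with the AzureAD module looks like the following sketch; the parameter mapping reflects my reading of the module's cmdlets, so verify it against the linked article before running it against a production service principal.

```powershell
# Remove all delegated (OAuth2) permission grants for the service principal.
$spOAuth2PermissionsGrants | ForEach-Object {
    Remove-AzureADOAuth2PermissionGrant -ObjectId $_.ObjectId
}

# Remove all application permissions (app role assignments) granted to the service principal.
$spApplicationPermissions | ForEach-Object {
    Remove-AzureADServiceAppRoleAssignment -ObjectId $_.PrincipalId -AppRoleAssignmentId $_.ObjectId
}
```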
active-directory | Atlassian Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi ## Step 2. Configure Atlassian Cloud to support provisioning with Azure AD 1. Navigate to [Atlassian Admin Console](http://admin.atlassian.com/). Select your organization if you have more than one.-1. Select **Settings > User provisioning**. -  -1. Select **Create a directory**. -1. Enter a name to identify the user directory, for example Azure AD users, then select **Create**. -  -1. Copy the values for **Directory base URL** and **API key**. You'll need those for your identity provider configuration later. -+1. Select **Security > Identity providers**. +1. Select your Identity provider directory. +1. Select **Set up user provisioning**. +1. Copy the values for **SCIM base URL** and **API key**. You'll need them when you configure Azure. +1. Save your **SCIM configuration**. > [!NOTE] > Make sure you store these values in a safe place, as we won't show them to you again.--  -+ Users and groups will automatically be provisioned to your organization. See the [user provisioning](https://support.atlassian.com/provisioning-users/docs/understand-user-provisioning) page for more details on how your users and groups sync to your organization. ## Step 3. Add Atlassian Cloud from the Azure AD application gallery |
active-directory | Atmos Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atmos-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi ## Step 2. Configure Atmos to support provisioning with Azure AD -1. Log in to the [Management Console](https://auth.axissecurity.com/). +1. Log in to the Axis Management Console. 1. Navigate to **Settings**-> **Identity Providers** screen. 1. Hover over the **Azure Identity Provider** and select **edit**. 1. Navigate to **Advanced Settings**. The scenario outlined in this tutorial assumes that you already have the followi ## Step 3. Add Atmos from the Azure AD application gallery -Add Atmos from the Azure AD application gallery to start managing provisioning to Atmos. If you have previously setup Atmos for SSO, you can use the same application. However the recommendation is to create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). +Add Atmos from the Azure AD application gallery to start managing provisioning to Atmos. If you have previously setup Atmos for SSO, you can use the same application. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ## Step 4. Define who will be in scope for provisioning |
active-directory | Hawkeyebsb Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hawkeyebsb-tutorial.md | + + Title: Azure Active Directory SSO integration with HawkeyeBSB +description: Learn how to configure single sign-on between Azure Active Directory and HawkeyeBSB. ++++++++ Last updated : 02/23/2023+++++# Azure Active Directory SSO integration with HawkeyeBSB ++In this article, you'll learn how to integrate HawkeyeBSB with Azure Active Directory (Azure AD). HawkeyeBSB was developed by Redbridge Debt & Treasury Advisory to help Clients manage their bank fees. When you integrate HawkeyeBSB with Azure AD, you can: ++* Control in Azure AD who has access to HawkeyeBSB. +* Enable your users to be automatically signed-in to HawkeyeBSB with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for HawkeyeBSB in a test environment. HawkeyeBSB supports both **SP** and **IDP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with HawkeyeBSB, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* HawkeyeBSB single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the HawkeyeBSB application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add HawkeyeBSB from the Azure AD gallery ++Add HawkeyeBSB from the Azure AD application gallery to configure single sign-on with HawkeyeBSB. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **HawkeyeBSB** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a URL using the following pattern: + `https://hawkeye.redbridgeanalytics.com/sso/saml/metadata/<uniqueSlugPerCustomer>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://hawkeye.redbridgeanalytics.com/sso/saml/acs/<uniqueSlugPerCustomer>` ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://hawkeye.redbridgeanalytics.com/sso/saml/login/<uniqueSlugPerCustomer>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL, and Sign on URL. Contact the [HawkeyeBSB Client support team](mailto:casemanagement@redbridgedta.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up HawkeyeBSB** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure HawkeyeBSB SSO ++To configure single sign-on on the **HawkeyeBSB** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [HawkeyeBSB support team](mailto:casemanagement@redbridgedta.com). They configure this setting to have the SAML SSO connection set properly on both sides. ++### Create HawkeyeBSB test user ++In this section, you create a user called Britta Simon at HawkeyeBSB. Work with the [HawkeyeBSB support team](mailto:casemanagement@redbridgedta.com) to add the users in the HawkeyeBSB platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++#### SP initiated: ++1. Click on **Test this application** in the Azure portal. This redirects to the HawkeyeBSB Sign-on URL where you can initiate the login flow. ++1. Go to the HawkeyeBSB Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++1. Click on **Test this application** in the Azure portal and you should be automatically signed in to the HawkeyeBSB for which you set up the SSO. ++1. You can also use Microsoft My Apps to test the application in any mode. When you click the HawkeyeBSB tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you should be automatically signed in to the HawkeyeBSB for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md) ++## Next steps ++Once you configure HawkeyeBSB, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Introdus Pre And Onboarding Platform Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/introdus-pre-and-onboarding-platform-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).-* An introdus subscription, that includes Single Sign-On (SSO) -* A valid introdus API Token. A guide on how to generate Token, can be found [here](https://api.introdus.dk/docs/#api-OpenAPI). +* An introdus subscription, that includes single sign-on (SSO) +* A valid introdus API Token. ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). A subscription that allows SSO. No other configuration is necessary on introdus ## Step 3. Add introDus Pre and Onboarding Platform from the Azure AD application gallery -Add introDus Pre and Onboarding Platform from the Azure AD application gallery to start managing provisioning to introDus Pre and Onboarding Platform. If you have previously setup introDus Pre and Onboarding Platform for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). +Add introDus Pre and Onboarding Platform from the Azure AD application gallery to start managing provisioning to introDus Pre and Onboarding Platform. If you have previously setup introDus Pre and Onboarding Platform for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ## Step 4. Define who will be in scope for provisioning This section guides you through the steps to configure the Azure AD provisioning 8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to introDus Pre and Onboarding Platform**. -9. Review the user attributes that are synchronized from Azure AD to introDus Pre and Onboarding Platform in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in introDus Pre and Onboarding Platform for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the introDus Pre and Onboarding Platform API supports filtering users based on that attribute. Select the **Save** button to commit any changes. +9. Review the user attributes that are synchronized from Azure AD to introDus Pre and Onboarding Platform in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in introDus Pre and Onboarding Platform for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the introDus Pre and Onboarding Platform API supports filtering users based on that attribute. 
Select the **Save** button to commit any changes. |Attribute|Type|Supported for filtering| |||| This section guides you through the steps to configure the Azure AD provisioning  -13. When you are ready to provision, click **Save**. +13. When you're ready to provision, click **Save**.  |
active-directory | Parallels Desktop Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/parallels-desktop-tutorial.md | + + Title: Azure Active Directory SSO integration with Parallels Desktop +description: Learn how to configure single sign-on between Azure Active Directory and Parallels Desktop. ++++++++ Last updated : 02/23/2023+++++# Azure Active Directory SSO integration with Parallels Desktop ++In this article, you'll learn how to integrate Parallels Desktop with Azure Active Directory (Azure AD). SSO/SAML authentication for employees to use Parallels Desktop. Enable your employees to sign in and activate Parallels Desktop with a corporate account. When you integrate Parallels Desktop with Azure AD, you can: ++* Control in Azure AD who has access to Parallels Desktop. +* Enable your users to be automatically signed-in to Parallels Desktop with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Parallels Desktop in a test environment. Parallels Desktop supports only **SP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with Parallels Desktop, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Parallels Desktop single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Parallels Desktop application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Parallels Desktop from the Azure AD gallery ++Add Parallels Desktop from the Azure AD application gallery to configure single sign-on with Parallels Desktop. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Parallels Desktop** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. 
On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a URL using the following pattern: + `https://account.parallels.com/<ID>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://account.parallels.com/webapp/sso/acs/<ID>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier and Reply URL. The Identifier and Reply URL values are customer specific; you specify them manually by copying them from Parallels My Account to the identity provider (Azure). Contact the [Parallels Desktop Client support team](mailto:parallels.desktop.sso@alludo.com) for any help. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++ c. In the **Sign on URL** textbox, type the URL:- + `https://my.parallels.com/login?sso=1` ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (PEM)** and select **Download** to download the certificate and save it on your computer. ++  ++1. On the **Set up Parallels Desktop** section, copy the appropriate URL(s) based on your requirement. ++  ++## Configure Parallels Desktop SSO ++To configure single sign-on on the **Parallels Desktop** side, you need to send the downloaded **Certificate (PEM)** and the appropriate copied URLs from the Azure portal to the [Parallels Desktop support team](mailto:parallels.desktop.sso@alludo.com). They configure this setting to have the SAML SSO connection set properly on both sides. ++### Create Parallels Desktop test user ++In this section, you create a user called Britta Simon at Parallels Desktop. Work with the [Parallels Desktop support team](mailto:parallels.desktop.sso@alludo.com) to add the users in the Parallels Desktop platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options. ++* Click on **Test this application** in the Azure portal. This redirects to the Parallels Desktop Sign-on URL where you can initiate the login flow. ++* Go to the Parallels Desktop Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the Parallels Desktop tile in My Apps, this redirects to the Parallels Desktop Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md) ++## Next steps ++Once you configure Parallels Desktop, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
aks | Aks Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-diagnostics.md | Title: Azure Kubernetes Service (AKS) Diagnostics Overview description: Learn about self-diagnosing clusters in Azure Kubernetes Service.- Last updated 11/15/2022 |
aks | Aks Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md | Title: Migrate to Azure Kubernetes Service (AKS) description: Migrate to Azure Kubernetes Service (AKS).- Last updated 03/25/2021 |
aks | Aks Planned Maintenance Weekly Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-planned-maintenance-weekly-releases.md | Title: Use Planned Maintenance for your Azure Kubernetes Service (AKS) cluster weekly releases (preview) description: Learn how to use Planned Maintenance in Azure Kubernetes Service (AKS) for cluster weekly releases- Last updated 09/16/2021 |
aks | Aks Support Help | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-support-help.md | Title: Azure Kubernetes Service support and help options description: How to obtain help and support for questions or problems when you create solutions using Azure Kubernetes Service. - Last updated 10/18/2022 |
aks | Api Server Authorized Ip Ranges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-authorized-ip-ranges.md | Title: API server authorized IP ranges in Azure Kubernetes Service (AKS) description: Learn how to secure your cluster using an IP address range for access to the API server in Azure Kubernetes Service (AKS)- Last updated 11/04/2022 |
aks | Api Server Vnet Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md | Title: API Server VNet Integration in Azure Kubernetes Service (AKS) description: Learn how to create an Azure Kubernetes Service (AKS) cluster with API Server VNet Integration - Last updated 09/09/2022 |
aks | Auto Upgrade Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md | Title: Automatically upgrade an Azure Kubernetes Service (AKS) cluster description: Learn how to automatically upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.- |
aks | Auto Upgrade Node Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md | Title: Automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images description: Learn how to automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images.- Last updated 02/03/2023 -# Automatically upgrade Azure Kubernetes Service cluster node operating system images +# Automatically upgrade Azure Kubernetes Service cluster node operating system images (preview) AKS supports upgrading the images on a node so your cluster is up to date with the newest operating system (OS) and runtime updates. AKS regularly provides new node OS images with the latest updates, so it's beneficial to upgrade your node's images regularly for the latest AKS features and to maintain security. Before learning about auto-upgrade, make sure you understand upgrade fundamentals by reading [Upgrade an AKS cluster][upgrade-aks-cluster]. The latest AKS node image information can be found by visiting the [AKS release tracker][release-tracker]. + ## Why use node OS auto-upgrade Node OS auto-upgrade provides a set once and forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS. The following upgrade channels are available: | `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates|N/A| | `Unmanaged`|OS updates will be applied automatically through the OS built-in patching infrastructure. Newly allocated machines will be unpatched initially and will be patched at some point by the OS's infrastructure|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows and Mariner don't apply security patches automatically, so this option behaves equivalently to `None`| | `SecurityPatch`|AKS will update the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only" on a regular basis. Where possible, patches will also be applied without disruption to existing nodes. Some patches, such as kernel patches, can't be applied to existing nodes without disruption. For such patches, the VHD will be updated and existing machines will be upgraded to that VHD following maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group.|N/A|-| `NodeImage`|AKS will update the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades] will be disabled by default.| +| `NodeImage`|AKS will update the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] will be disabled by default.| To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example. |
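A minimal sketch of the command referenced above ("similar to the following example"), assuming placeholder resource names and that the current `aks-preview` Azure CLI extension exposes the `--node-os-upgrade-channel` parameter:

```azurecli
# Sketch only: myResourceGroup and myAKSCluster are placeholders; the node OS
# auto-upgrade channel is a preview feature and may require the aks-preview extension.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-os-upgrade-channel NodeImage
```

The same parameter can presumably be changed on an existing cluster with `az aks update`, picking any of the channels described in the table above (`None`, `Unmanaged`, `SecurityPatch`, or `NodeImage`).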
aks | Availability Zones | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md | Title: Use availability zones in Azure Kubernetes Service (AKS) description: Learn how to create a cluster that distributes nodes across availability zones in Azure Kubernetes Service (AKS)- Last updated 02/22/2023 |
aks | Azure Ad Integration Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md | Title: Integrate Azure Active Directory with Azure Kubernetes Service (legacy) description: Learn how to use the Azure CLI to create and Azure Active Directory-enabled Azure Kubernetes Service (AKS) cluster (legacy)- Last updated 11/11/2021 |
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. - |
aks | Azure Cni Powered By Cilium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md | Title: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) ( description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Azure CNI Powered by Cilium. - |
aks | Azure Disk Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md | Title: Use a customer-managed key to encrypt Azure disks in Azure Kubernetes Service (AKS) description: Bring your own keys (BYOK) to encrypt AKS OS and Data disks.- Last updated 07/18/2022 |
aks | Azure Hpc Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hpc-cache.md | Title: Integrate Azure HPC Cache with Azure Kubernetes Service description: Learn how to integrate HPC Cache with Azure Kubernetes Service- |
aks | Azure Nfs Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-nfs-volume.md | Title: Manually create a Linux NFS Server persistent volume for Azure Kubernetes Service description: Learn how to manually create an Ubuntu Linux NFS Server persistent volume for use with pods in Azure Kubernetes Service (AKS)- Last updated 06/13/2022 |
aks | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices.md | Title: Best practices for Azure Kubernetes Service (AKS) description: Collection of the cluster operator and developer best practices to build and manage applications in Azure Kubernetes Service (AKS)- Last updated 03/09/2021 |
aks | Certificate Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md | Title: Certificate Rotation in Azure Kubernetes Service (AKS) description: Learn certificate rotation in an Azure Kubernetes Service (AKS) cluster.- Last updated 01/19/2023 |
aks | Cis Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-kubernetes.md | Title: Center for Internet Security (CIS) Kubernetes benchmark description: Learn how AKS applies the CIS Kubernetes benchmark- Last updated 12/20/2022 |
aks | Cis Ubuntu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-ubuntu.md | Title: Azure Kubernetes Service (AKS) Ubuntu image alignment with Center for Internet Security (CIS) benchmark description: Learn how AKS applies the CIS benchmark- Last updated 04/20/2022 |
aks | Cluster Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md | Title: Use the cluster autoscaler in Azure Kubernetes Service (AKS) description: Learn how to use the cluster autoscaler to automatically scale your cluster to meet application demands in an Azure Kubernetes Service (AKS) cluster.- Last updated 10/03/2022 |
aks | Cluster Container Registry Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md | Title: Integrate Azure Container Registry with Azure Kubernetes Service description: Learn how to integrate Azure Kubernetes Service (AKS) with Azure Container Registry (ACR)- Last updated 11/16/2022 ms.tool: azure-cli, azure-powershell |
aks | Cluster Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-extensions.md | Title: Cluster extensions for Azure Kubernetes Service (AKS) description: Learn how to deploy and manage the lifecycle of extensions on Azure Kubernetes Service (AKS)- Last updated 09/29/2022 |
aks | Command Invoke | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/command-invoke.md | Title: Use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster description: Learn how to use `command invoke` to access a private Azure Kubernetes Service (AKS) cluster- Last updated 1/14/2022 |
aks | Concepts Clusters Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md | Title: Concepts - Kubernetes basics for Azure Kubernetes Services (AKS) description: Learn the basic cluster and workload components of Kubernetes and how they relate to features in Azure Kubernetes Service (AKS)- Last updated 10/31/2022 |
aks | Concepts Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-identity.md | Title: Concepts - Access and identity in Azure Kubernetes Services (AKS) description: Learn about access and identity in Azure Kubernetes Service (AKS), including Azure Active Directory integration, Kubernetes role-based access control (Kubernetes RBAC), and roles and bindings.- Last updated 09/27/2022 |
aks | Concepts Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-scale.md | Title: Concepts - Scale applications in Azure Kubernetes Services (AKS) description: Learn about scaling in Azure Kubernetes Service (AKS), including horizontal pod autoscaler, cluster autoscaler, and the Azure Container Instances connector.- Last updated 02/28/2019 |
aks | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md | Title: Concepts - Security in Azure Kubernetes Services (AKS) description: Learn about security in Azure Kubernetes Service (AKS), including master and node communication, network policies, and Kubernetes secrets.- Last updated 02/22/2023 |
aks | Concepts Sustainable Software Engineering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md | Title: Concepts - Sustainable software engineering in Azure Kubernetes Services (AKS) description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS).- Last updated 10/25/2022 |
aks | Configure Azure Cni Dynamic Ip Allocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md | Title: Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS) description: Learn how to configure Azure CNI (advanced) networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)- Last updated 01/09/2023 |
aks | Configure Azure Cni | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md | |
aks | Configure Kube Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md | Title: Configure kube-proxy (iptables/IPVS) (preview) description: Learn how to configure kube-proxy to utilize different load balancing configurations with Azure Kubernetes Service (AKS).- Last updated 10/25/2022 |
aks | Configure Kubenet Dual Stack | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md | |
aks | Configure Kubenet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md | |
aks | Control Kubeconfig Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/control-kubeconfig-access.md | Title: Limit access to kubeconfig in Azure Kubernetes Service (AKS) description: Learn how to control access to the Kubernetes configuration file (kubeconfig) for cluster administrators and cluster users- Last updated 05/06/2020 |
aks | Coredns Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md | Title: Customize CoreDNS for Azure Kubernetes Service (AKS) description: Learn how to customize CoreDNS to add subdomains or extend custom DNS endpoints using Azure Kubernetes Service (AKS)- |
aks | Csi Secrets Store Driver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md | Title: Use the Azure Key Vault Provider for Secrets Store CSI Driver for Azure K description: Learn how to use the Azure Key Vault Provider for Secrets Store CSI Driver to integrate secrets stores with Azure Kubernetes Service (AKS). - Last updated 02/10/2023 |
aks | Csi Secrets Store Identity Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md | Title: Provide an access identity to the Azure Key Vault Provider for Secrets St description: Learn about the various methods that you can use to allow the Azure Key Vault Provider for Secrets Store CSI Driver to integrate with your Azure key vault. - Last updated 01/31/2023 |
aks | Csi Secrets Store Nginx Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-nginx-tls.md | Title: Set up Secrets Store CSI Driver to enable NGINX Ingress Controller with T description: How to configure Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS for Azure Kubernetes Service (AKS). - Last updated 05/26/2022 |
aks | Custom Certificate Authority | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-certificate-authority.md | Title: Custom certificate authority (CA) in Azure Kubernetes Service (AKS) (preview) description: Learn how to use a custom certificate authority (CA) in an Azure Kubernetes Service (AKS) cluster.- |
aks | Custom Node Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md | Title: Customize the node configuration for Azure Kubernetes Service (AKS) node pools description: Learn how to customize the configuration on Azure Kubernetes Service (AKS) cluster nodes and node pools.- Last updated 12/03/2020 |
aks | Internal Lb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md | internal-app LoadBalancer 10.1.15.188 10.0.0.35 80:31669/TCP 1m > [!NOTE] > -> You may need to give the *Network Contributor* role to the resource group in which your Azure virtual network resources are deployed. You can view the cluster identity with [az aks show][az-aks-show], such as `az aks show --resource-group myResourceGroup --name myAKSCluster --query "identity"`. To create a role assignment, use the [az role assignment create][az-role-assignment-create] command. +> You may need to assign a minimum of *Microsoft.Network/virtualNetworks/subnets/read* and *Microsoft.Network/virtualNetworks/subnets/join/action* permission to AKS MSI on the Azure Virtual Network resources. You can view the cluster identity with [az aks show][az-aks-show], such as `az aks show --resource-group myResourceGroup --name myAKSCluster --query "identity"`. To create a role assignment, use the [az role assignment create][az-role-assignment-create] command. -## Specify a different subnet +### Specify a different subnet Add the *azure-load-balancer-internal-subnet* annotation to your service to specify a subnet for your load balancer. The subnet specified must be in the same virtual network as your AKS cluster. When deployed, the load balancer *EXTERNAL-IP* address is part of the specified subnet. |
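As a sketch of the role assignment mentioned in the note above: the resource names are placeholders, and the built-in Network Contributor role is used here as one role that includes the required subnet permissions; a narrower custom role containing only *Microsoft.Network/virtualNetworks/subnets/read* and *Microsoft.Network/virtualNetworks/subnets/join/action* also works.

```azurecli
# Look up the cluster's managed identity and the virtual network ID (placeholder names).
IDENTITY_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query identity.principalId -o tsv)
VNET_ID=$(az network vnet show --resource-group myResourceGroup --name myVnet \
    --query id -o tsv)

# Grant the cluster identity permissions on the virtual network.
az role assignment create --assignee "$IDENTITY_ID" \
    --role "Network Contributor" --scope "$VNET_ID"
```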
aks | Node Image Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md | For more information about the latest images provided by AKS, see the [AKS relea For information on upgrading the Kubernetes version for your cluster, see [Upgrade an AKS cluster][upgrade-cluster]. +Node image upgrades can also be performed automatically, and scheduled by using planned maintenance. For more details, see [Automatically upgrade node images][auto-upgrade-node-image]. + > [!NOTE] > The AKS cluster must use virtual machine scale sets for the nodes. az aks nodepool show \ [max-surge]: upgrade-cluster.md#customize-node-surge-upgrade [az-extension-add]: /cli/azure/extension#az_extension_add [az-extension-update]: /cli/azure/extension#az_extension_update+[auto-upgrade-node-image]: auto-upgrade-node-image.md |
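For reference, a minimal sketch of a manual node image upgrade with placeholder names; the automatic, planned-maintenance option described above removes the need to run this yourself:

```azurecli
# Upgrade all node pools to the latest node image without changing the Kubernetes version.
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-image-only
```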
aks | Node Upgrade Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md | This process is better than updating Linux-based kernels manually because Linux This article shows you how you can automate the update process of AKS nodes. You'll use GitHub Actions and Azure CLI to create an update task based on `cron` that runs automatically. +Node image upgrades can also be performed automatically, and scheduled by using planned maintenance. For more details, see [Automatically upgrade node images][auto-upgrade-node-image]. + ## Before you begin This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal]. jobs: [system-pools]: use-system-pools.md [spot-pools]: spot-node-pool.md [use-multiple-node-pools]: use-multiple-node-pools.md+[auto-upgrade-node-image]: auto-upgrade-node-image.md |
aks | Update Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md | To check the expiration date of your service principal, use the [az ad sp creden ```azurecli SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \ --query servicePrincipalProfile.clientId -o tsv)-az ad sp credential list --id "$SP_ID" --query "[].endDateTime" -o tsv +az ad app credential list --id "$SP_ID" --query "[].endDateTime" -o tsv ``` ### Reset the existing service principal credential SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \ With a variable set that contains the service principal ID, now reset the credentials using [az ad sp credential reset][az-ad-sp-credential-reset]. The following example lets the Azure platform generate a new secure secret for the service principal. This new secure secret is also stored as a variable. ```azurecli-interactive-SP_SECRET=$(az ad sp credential reset --id "$SP_ID" --query password -o tsv) +SP_SECRET=$(az ad app credential reset --id "$SP_ID" --query password -o tsv) ``` Now continue on to [update AKS cluster with new service principal credentials](#update-aks-cluster-with-new-service-principal-credentials). This step is necessary for the Service Principal changes to reflect on the AKS cluster. |
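A minimal sketch of the follow-up step mentioned above (updating the AKS cluster with the new service principal credentials), continuing the `SP_ID` and `SP_SECRET` variables and assuming the same placeholder resource group and cluster names:

```azurecli
# Update the cluster to use the reset service principal credential.
az aks update-credentials \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --reset-service-principal \
    --service-principal "$SP_ID" \
    --client-secret "$SP_SECRET"
```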
aks | Use Mariner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-mariner.md | Mariner is available for use in the same regions as AKS. Mariner currently has the following limitations: -* Mariner doesn't yet have image SKUs for GPU, ARM64, SGX, or FIPS. -* Mariner doesn't yet have FedRAMP, FIPS, or CIS certification. +* Image SKUs for SGX and FIPS are not available. +* It doesn't meet the [Federal Information Processing Standard (FIPS) 140](https://csrc.nist.gov/publications/detail/fips/140/3/final) compliance requirements or hold [Center for Internet Security (CIS)](https://www.cisecurity.org/) certification. * Mariner can't yet be deployed through the Azure portal. * Qualys, Trivy, and Microsoft Defender for Containers are the only vulnerability scanning tools that support Mariner today.-* The Mariner container host is a Gen 2 image. Mariner doesn't plan to offer a Gen 1 SKU. -* Node configurations aren't yet supported. -* Mariner isn't yet supported in GitHub actions. * Mariner doesn't support AppArmor. Support for SELinux can be manually configured. * Some addons, extensions, and open-source integrations may not be supported yet on Mariner. Azure Monitor, Grafana, Helm, Key Vault, and Container Insights are supported. |
analysis-services | Analysis Services Addservprinc Admins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-addservprinc-admins.md | Title: Learn how to add a service principal to Azure Analysis Services admin role | Microsoft Docs description: Learn how to add an automation service principal to the Azure Analysis Services server admin role -+ Last updated 01/24/2023 |
analysis-services | Analysis Services Async Refresh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-async-refresh.md | Title: Learn about asynchronous refresh for Azure Analysis Services models | Microsoft Docs description: Describes how to use the Azure Analysis Services REST API to code asynchronous refresh of model data. -+ Last updated 02/02/2022 |
analysis-services | Analysis Services Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-backup.md | Title: Learn about Azure Analysis Services database backup and restore | Microsoft Docs description: This article describes how to backup and restore model metadata and data from an Azure Analysis Services database. -+ Last updated 01/24/2023 |
analysis-services | Analysis Services Bcdr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-bcdr.md | Title: Learn about Azure Analysis Services high availability | Microsoft Docs description: This article describes how Azure Analysis Services provides high availability during service disruption. -+ Last updated 01/24/2023 |
analysis-services | Analysis Services Capacity Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-capacity-limits.md | Title: Learn about Azure Analysis Services resource and object limits | Microsoft Docs description: This article describes resource and object limits for an Azure Analysis Services server. -+ Last updated 01/24/2023 |
analysis-services | Analysis Services Connect Excel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-excel.md | Title: Learn how to connect to Azure Analysis Services with Excel | Microsoft Docs description: Learn how to connect to an Azure Analysis Services server by using Excel. Once connected, users can create PivotTables to explore data. -+ Last updated 01/24/2023 |
analysis-services | Analysis Services Connect Pbi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect-pbi.md | Title: Learn how to connect to Azure Analysis Services with Power BI | Microsoft Docs description: Learn how to connect to an Azure Analysis Services server by using Power BI. Once connected, users can explore model data. -+ Last updated 01/24/2023 |
analysis-services | Analysis Services Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-connect.md | Title: Learn about connecting to Azure Analysis Services servers| Microsoft Docs description: Learn how to connect to and get data from an Analysis Services server in Azure. -+ Last updated 01/24/2023 |
analysis-services | Analysis Services Create Bicep File | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-bicep-file.md | Title: Quickstart - Create an Azure Analysis Services server resource by using B description: Quickstart showing how to an Azure Analysis Services server resource by using a Bicep file. Last updated 03/08/2022 -+ tags: azure-resource-manager, bicep |
analysis-services | Analysis Services Create Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-powershell.md | |
analysis-services | Analysis Services Create Sample Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-sample-model.md | Title: Tutorial - Add a sample model- Azure Analysis Services | Microsoft Docs description: In this tutorial, learn how to add a sample model in Azure Analysis Services. -+ Last updated 01/26/2023 |
analysis-services | Analysis Services Create Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-server.md | |
analysis-services | Analysis Services Create Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-template.md | |
analysis-services | Analysis Services Database Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-database-users.md | Title: Learn how to manage database roles and users in Azure Analysis Services | Microsoft Docs description: Learn how to manage database roles and users on an Analysis Services server in Azure. -+ Last updated 01/27/2023 |
analysis-services | Analysis Services Datasource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-datasource.md | Title: Learn about data sources supported in Azure Analysis Services | Microsoft Docs description: Describes data sources and connectors supported for tabular 1200 and higher data models in Azure Analysis Services. -+ Last updated 01/27/2023 |
analysis-services | Analysis Services Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-deploy.md | Title: Learn how to deploy a model to Azure Analysis Services by using Visual Studio | Microsoft Docs description: Learn how to deploy a tabular model to an Azure Analysis Services server by using Visual Studio. -+ Last updated 01/27/2023 |
analysis-services | Analysis Services Gateway Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway-install.md | Title: Learn how to install On-premises data gateway for Azure Analysis Services | Microsoft Docs description: Learn how to install and configure an On-premises data gateway to connect to on-premises data sources from an Azure Analysis Services server. -+ Last updated 01/27/2023 |
analysis-services | Analysis Services Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway.md | Title: Learn about the On-premises data gateway for Azure Analysis Services | Microsoft Docs description: An On-premises gateway is necessary if your Analysis Services server in Azure will connect to on-premises data sources. -+ Last updated 01/27/2023 |
analysis-services | Analysis Services Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md | Title: Learn about diagnostic logging for Azure Analysis Services | Microsoft Docs description: Describes how to setup up logging to monitoring your Azure Analysis Services server. -+ Last updated 01/27/2023 |
analysis-services | Analysis Services Long Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-long-operations.md | Title: Learn about best practices for long running operations in Azure Analysis Services | Microsoft Docs description: This article describes best practices for long running operations. -+ Last updated 01/27/2023 |
analysis-services | Analysis Services Manage Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage-users.md | Title: Azure Analysis Services authentication and user permissions| Microsoft Docs description: This article describes how Azure Analysis Services uses Azure Active Directory (Azure AD) for identity management and user authentication. -+ Last updated 02/02/2022 |
analysis-services | Analysis Services Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-manage.md | Title: Manage Azure Analysis Services | Microsoft Docs description: This article describes the tools used to manage administration and management tasks for an Azure Analysis Services server. -+ Last updated 02/02/2022 |
analysis-services | Analysis Services Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-monitor.md | Title: Monitor Azure Analysis Services server metrics | Microsoft Docs description: Learn how Analysis Services use Azure Metrics Explorer, a free tool in the portal, to help you monitor the performance and health of your servers. -+ Last updated 03/04/2020 |
analysis-services | Analysis Services Odc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-odc.md | Title: Connect to Azure Analysis Services with an .odc file | Microsoft Docs description: Learn how to create an Office Data Connection file to connect to and get data from an Analysis Services server in Azure. -+ Last updated 04/27/2021 |
analysis-services | Analysis Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md | Title: What is Azure Analysis Services? description: Learn about Azure Analysis Services, a fully managed platform as a service (PaaS) that provides enterprise-grade data models in the cloud. -+ Last updated 02/15/2022 |
analysis-services | Analysis Services Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-powershell.md | Title: Manage Azure Analysis Services with PowerShell | Microsoft Docs description: Describes Azure Analysis Services PowerShell cmdlets for common administrative tasks such as creating servers, suspending operations, or changing service level. -+ Last updated 04/27/2021 |
analysis-services | Analysis Services Qs Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-qs-firewall.md | |
analysis-services | Analysis Services Refresh Azure Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-azure-automation.md | Title: Refresh Azure Analysis Services models with Azure Automation | Microsoft Docs description: This article describes how to code model refreshes for Azure Analysis Services by using Azure Automation. -+ Last updated 12/01/2020 |
analysis-services | Analysis Services Refresh Logic App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-logic-app.md | Title: Refresh with Logic Apps for Azure Analysis Services models | Microsoft Docs description: This article describes how to code asynchronous refresh for Azure Analysis Services by using Azure Logic Apps. -+ Last updated 10/30/2019 |
analysis-services | Analysis Services Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-samples.md | Title: Azure Analysis Services code, project, and database samples description: This article describes resources to learn about code, project, and database samples for Azure Analysis Services. -+ Last updated 04/27/2021 |
analysis-services | Analysis Services Scale Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-scale-out.md | Title: Azure Analysis Services scale-out| Microsoft Docs description: Replicate Azure Analysis Services servers with scale-out. Client queries can then be distributed among multiple query replicas in a scale-out query pool. -+ Last updated 04/27/2021 |
analysis-services | Analysis Services Server Admins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-admins.md | Title: Manage server admins in Azure Analysis Services | Microsoft Docs description: This article describes how to manage server administrators for an Azure Analysis Services server by using the Azure portal, PowerShell, or REST APIs. -+ Last updated 02/02/2022 |
analysis-services | Analysis Services Server Alias | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-server-alias.md | Title: Azure Analysis Services alias server names | Microsoft Docs description: Learn how to create Azure Analysis Services server name aliases. Users can then connect to your server with a shorter alias name instead of the server name. -+ Last updated 12/07/2021 |
analysis-services | Analysis Services Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-service-principal.md | Title: Automate Azure Analysis Services tasks with service principals | Microsoft Docs description: Learn how to create a service principal for automating Azure Analysis Services administrative tasks. -+ Last updated 02/02/2022 |
analysis-services | Analysis Services Vnet Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-vnet-gateway.md | Title: Configure Azure Analysis Services for VNet data sources | Microsoft Docs description: Learn how to configure an Azure Analysis Services server to use a gateway for data sources on Azure Virtual Network (VNet). -+ Last updated 02/02/2022 |
analysis-services | Move Between Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/move-between-regions.md | Title: Move Azure Analysis Services to a different region | Microsoft Docs description: Describes how to move an Azure Analysis Services resource to a different region. -+ Last updated 12/01/2020 |
analysis-services | Analysis Services Tutorial Pbid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/tutorials/analysis-services-tutorial-pbid.md | Title: Tutorial - Connect Azure Analysis Services with Power BI Desktop | Microsoft Docs description: In this tutorial, learn how to get an Analysis Services server name from the Azure portal and then connect to the server by using Power BI Desktop.-+ Last updated 02/02/2022 |
analysis-services | Analysis Services Tutorial Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/tutorials/analysis-services-tutorial-roles.md | Title: Tutorial - Configure Azure Analysis Services roles | Microsoft Docs description: In this tutorial, learn how to configure Azure Analysis Services administrator and user roles by using the Azure portal or SQL Server Management Studio. -+ Last updated 10/12/2021 |
app-service | Deploy Ftp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-ftp.md | Check that you've entered the correct [hostname](#get-ftps-endpoint) and [creden #### How can I connect to FTP in Azure App Service via passive mode? Azure App Service supports connecting via both Active and Passive mode. Passive mode is preferred because your deployment machines are usually behind a firewall (in the operating system or as part of a home or business network). See an [example from the WinSCP documentation](https://winscp.net/docs/ui_login_connection). +#### How can I determine the method that was used to deploy my Azure App Service? +Suppose you take over ownership of an app and want to find out how it was deployed so that you can make and deploy changes. You can determine how an Azure App Service app was deployed by checking its application settings. If the app was deployed by using an external package URL, the WEBSITE_RUN_FROM_PACKAGE setting appears in the application settings with a URL value. If it was deployed by using zip deploy, the WEBSITE_RUN_FROM_PACKAGE setting has a value of 1. If the app was deployed by using Azure DevOps, the deployment history is available in the Azure DevOps portal. If Azure Functions Core Tools was used, the deployment history is available in the Azure portal. + ## More resources * [Local Git deployment to Azure App Service](deploy-local-git.md) |
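A quick way to check the setting described in that FAQ entry from the command line, as a sketch with placeholder app and resource group names:

```azurecli
# Show the WEBSITE_RUN_FROM_PACKAGE application setting, if present.
az webapp config appsettings list \
    --resource-group myResourceGroup \
    --name myAppService \
    --query "[?name=='WEBSITE_RUN_FROM_PACKAGE']"
```

If the returned value is a URL, the app runs from an external package; if it is 1, the app was deployed with zip deploy.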
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | Title: Migrate to App Service Environment v3 by using the migration feature description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 02/16/2023 Last updated : 02/22/2023 At this time, the migration feature doesn't support migrations to App Service En ### Azure Government: - US DoD Central-- US Gov Arizona ### Azure China: |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md | Title: App Service Environment overview description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 02/15/2023 Last updated : 02/22/2023 App Service Environment v3 is available in the following regions: | US Gov Arizona | ✅ | | ✅ | | US Gov Iowa | | | ✅ | | US Gov Texas | ✅ | | ✅ |-| US Gov Virginia | ✅ | | ✅ | +| US Gov Virginia | ✅ | ✅ | ✅ | ### Azure China: |
application-gateway | Configuration Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md | Subnet Size /24 = 256 IP addresses - 5 reserved from the platform = 251 availabl > It is possible to change the subnet of an existing Application Gateway within the same virtual network. You can do this using Azure PowerShell or Azure CLI. For more information, see [Frequently asked questions about Application Gateway](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway) ### Virtual network permission +Since application gateway resources are deployed within a virtual network, Application Gateway performs a check to verify the permission on the provided virtual network resource. This validation is performed during both creation and management operations. -Since application gateway resources are deployed within a virtual network resource, Application Gateway performs a check to verify the permission on the provided virtual network resource. This is verified during both create and manage operations. +You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify that the users or service principals who operate application gateways have at least **Microsoft.Network/virtualNetworks/subnets/join/action** permission. Use built-in roles, such as [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor), which already support this permission. If a built-in role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md). Learn more about [managing subnet permissions](../virtual-network/virtual-network-manage-subnet.md#permissions). You may have to allow sufficient time for [Azure Resource Manager cache refresh](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after role assignment changes. -You should check your [Azure role-based access control](../role-based-access-control/role-assignments-list-portal.md) to verify that users or Service Principals who operate application gateways have at least **Microsoft.Network/virtualNetworks/subnets/join/action** or some higher permission such as the built-in [Network contributor](../role-based-access-control/built-in-roles.md) role on the virtual network. Visit [Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md) to know more on subnet permissions. +#### Identifying affected users or service principals for your subscription +By visiting Azure Advisor for your account, you can check whether your subscription has any users or service principals with insufficient permissions. The details of that recommendation are as follows: -If a [built-in](../role-based-access-control/built-in-roles.md) role doesn't provide the right permission, you can [create and assign a custom role](../role-based-access-control/custom-roles-portal.md) for this purpose. Also, [allow sufficient time](../role-based-access-control/troubleshooting.md?tabs=bicep#symptomrole-assignment-changes-are-not-being-detected) after you make changes to a role assignments. 
+**Title**: Update VNet permission of Application Gateway users </br> +**Category**: Reliability </br> +**Impact**: High </br> ++#### Using temporary Azure Feature Exposure Control (AFEC) flag ++As a temporary extension, we have introduced a subscription-level [Azure Feature Exposure Control (AFEC)](../azure-resource-manager/management/preview-features.md?tabs=azure-portal) flag that you can register for, until you fix the permissions for all your users and/or service principals. [Set up this flag](../azure-resource-manager/management/preview-features.md?#required-access) for your Azure subscription. ++**Name**: Microsoft.Network/DisableApplicationGatewaySubnetPermissionCheck </br> +**Description**: Disable Application Gateway Subnet Permission Check </br> +**ProviderNamespace**: Microsoft.Network </br> +**EnrollmentType**: AutoApprove </br> > [!NOTE]-> As a temporary extension, we have introduced a subscription-level [Azure Feature Exposure Control (AFEC)](../azure-resource-manager/management/preview-features.md?tabs=azure-portal) flag to help you fix the permissions for all your users and/or service principals' permissions. Register for this interim feature on your own through a subscription owner, contributor, or custom role. </br> -> -> "**name**": "Microsoft.Network/DisableApplicationGatewaySubnetPermissionCheck", </br> -> "**description**": "Disable Application Gateway Subnet Permission Check", </br> -> "**providerNamespace**": "Microsoft.Network", </br> -> "**enrollmentType**": "AutoApprove" </br> -> -> The provision to circumvent the virtual network permission check by using this feature control is **available only for a limited period, until 6th April 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions. [Set up this flag in your Azure subscription](../azure-resource-manager/management/preview-features.md?tabs=azure-portal). +> The option to bypass the virtual network permission check by using this feature control (AFEC) is available only for a limited period, **until 6th April 2023**. Ensure all the roles and permissions managing Application Gateways are updated by then, as there will be no further extensions. Set up this flag in your Azure subscription. ## Network security groups |
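As a rough illustration of the guidance above, the role assignments on the virtual network can be reviewed and the interim AFEC flag registered with the Azure CLI. This is a sketch only; `<vnet-resource-id>` is a placeholder for your virtual network's resource ID:

```bash
# Review who holds role assignments on the virtual network. The join/action
# permission is typically granted through a role such as Network Contributor.
az role assignment list --scope <vnet-resource-id> --output table

# Register the temporary AFEC flag that disables the subnet permission check,
# then re-register the resource provider so the flag takes effect.
az feature register \
  --namespace Microsoft.Network \
  --name DisableApplicationGatewaySubnetPermissionCheck
az provider register --namespace Microsoft.Network
```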
automation | Automation Solution Vm Management Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-config.md | Title: Configure Azure Automation Start/Stop VMs during off-hours description: This article tells how to configure the Start/Stop VMs during off-hours feature to support different use cases or scenarios. Previously updated : 01/04/2023 Last updated : 02/23/2023 -> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon. +> Start/Stop VM during off-hours, version 1 is going to retire by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. Details of the retirement announcement will be shared soon. This article describes how to configure the [Start/Stop VMs during off-hours](automation-solution-vm-management.md) feature to support the described scenarios. You can also learn how to: |
automation | Automation Solution Vm Management Remove | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management-remove.md | Title: Remove Azure Automation Start/Stop VMs during off-hours overview description: This article describes how to remove the Start/Stop VMs during off-hours feature and unlink an Automation account from the Log Analytics workspace. Previously updated : 01/04/2023 Last updated : 02/23/2023 -> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon. +> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon. After you enable the Start/Stop VMs during off-hours feature to manage the running state of your Azure VMs, you may decide to stop using it. Removing this feature can be done using one of the following methods based on the supported deployment models: |
automation | Overview Monitoring Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md | Title: Azure Automation Change Tracking and Inventory overview using Azure Monit description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent (Preview), which helps you identify software and Microsoft service changes in your environment. Previously updated : 12/14/2022 Last updated : 02/23/2023 This article explains the latest version of change tracking support using Azu ## Key benefits -- **Compatibility with the unified monitoring agent** - Compatible with the [Azure Monitor Agent (Preview)](/azure/azure-monitor/agents/agents-overview) that enhances security, reliability, and facilitates multi-homing experience to store data.+- **Compatibility with the unified monitoring agent** - Compatible with the [Azure Monitor Agent (Preview)](../../azure-monitor/agents/agents-overview.md) that enhances security, reliability, and facilitates multi-homing experience to store data. - **Compatibility with tracking tool**- Compatible with the Change tracking (CT) extension deployed through the Azure Policy on the client's virtual machine. You can switch to Azure Monitor Agent (AMA), and then the CT extension pushes the software, files, and registry to AMA.-- **Multi-homing experience** – Provides standardization of management from one central workspace. You can [transition from Log Analytics (LA) to AMA](/azure/azure-monitor/agents/azure-monitor-agent-migration) so that all VMs point to a single workspace for data collection and maintenance.+- **Multi-homing experience** – Provides standardization of management from one central workspace. You can [transition from Log Analytics (LA) to AMA](../../azure-monitor/agents/azure-monitor-agent-migration.md) +so that all VMs point to a single workspace for data collection and maintenance. - **Rules management** – Uses [Data Collection Rules](https://azure.microsoft.com/updates/azure-monitor-agent-and-data-collection-rules-public-preview/) to configure or customize various aspects of data collection. For example, you can change the frequency of file collection. ## Current limitations |
automation | Region Mappings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md | -> Start/Stop VM during off-hours, version 1 is going to retire soon by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](/azure/azure-functions/start-stop-vms/overview), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. The details on announcement will be shared soon. +> Start/Stop VM during off-hours, version 1 is going to retire by CY23 and is unavailable in the marketplace now. We recommend that you start using [version 2](../../azure-functions/start-stop-vms/overview.md), which is now generally available. The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until retirement in CY23. Details of the retirement announcement will be shared soon. In Azure Automation, you can enable the Update Management, Change Tracking and Inventory, and Start/Stop VMs during off-hours features for your servers and virtual machines. These features have a dependency on a Log Analytics workspace, and therefore require linking the workspace with an Automation account. However, only certain regions are supported for linking them together. In general, the mapping is *not* applicable if you plan to link an Automation account to a workspace that won't have these features enabled. |
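Once a supported region mapping is confirmed, the link itself can be created with the Azure CLI. The following is a minimal sketch under the assumption that `<resource-group>`, `<workspace-name>`, and `<automation-account-id>` are replaced with your own values:

```bash
# Link an Automation account to a Log Analytics workspace so that features such as
# Update Management and Start/Stop VMs during off-hours can be enabled.
az monitor log-analytics workspace linked-service create \
  --resource-group <resource-group> \
  --workspace-name <workspace-name> \
  --name Automation \
  --resource-id <automation-account-id>
```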
automation | Migrate Existing Agent Based Hybrid Worker To Extension Based Workers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md | For at-scale migration of multiple Agent based Hybrid Workers, you can also use #### [Bicep template](#tab/bicep-template) -You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](/azure/azure-resource-manager/bicep/overview) +You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](../azure-resource-manager/bicep/overview.md). ```Bicep param automationAccount string |
automation | Update Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md | Title: Troubleshoot Azure Automation Update Management issues description: This article tells how to troubleshoot and resolve issues with Azure Automation Update Management. Previously updated : 06/10/2021 Last updated : 02/23/2023 Deploying updates to Linux by classification ("Critical and security updates") h ### KB2267602 is consistently missing -KB2267602 is the [Windows Defender definition update](https://www.microsoft.com/wdsi/definitions). It's updated daily. +KB2267602 is the [Windows Defender definition update](https://www.microsoft.com/en-us/wdsi/defenderupdates). It's updated daily. ## Next steps |
azure-app-configuration | Quickstart Java Spring App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md | Title: Quickstart to learn how to use Azure App Configuration description: In this quickstart, create a Java Spring app with Azure App Configuration to centralize storage and management of application settings separate from your code. ms.devlang: java Previously updated : 05/02/2022 Last updated : 02/22/2023 #Customer intent: As a Java Spring developer, I want to manage all my app settings in one place. + # Quickstart: Create a Java Spring app with Azure App Configuration In this quickstart, you incorporate Azure App Configuration into a Java Spring app to centralize storage and management of application settings separate from your code. In this quickstart, you incorporate Azure App Configuration into a Java Spring a - Azure subscription - [create one for free](https://azure.microsoft.com/free/) - A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 11. - [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.+- A Spring Boot application. If you don't have one, create a Maven project with the [Spring Initializr](https://start.spring.io/). Be sure to select **Maven Project** and, under **Dependencies**, add the **Spring Web** dependency, and then select Java version 8 or higher. ## Create an App Configuration store [!INCLUDE [azure-app-configuration-create](../../includes/azure-app-configuration-create.md)] -7. Select **Configuration Explorer** > **+ Create** > **Key-value** to add the following key-value pairs: -- | Key | Value | - ||| - | /application/config.message | Hello | -- Leave **Label** and **Content Type** empty for now. --8. Select **Apply**. --## Create a Spring Boot app +9. Select **Configuration Explorer** > **+ Create** > **Key-value** to add the following key-value pairs: -To create a new Spring Boot project: + | Key | Value | + ||| + | /application/config.message | Hello | -1. Browse to the [Spring Initializr](https://start.spring.io). + Leave **Label** and **Content Type** empty for now. -1. Specify the following options: -- - Generate a **Maven** project with **Java**. - - Specify a **Spring Boot** version that's equal to or greater than 2.0. - - Specify the **Group** and **Artifact** names for your application. - - Add the **Spring Web** dependency. --1. After you specify the previous options, select **Generate Project**. When prompted, download the project to a path on your local computer. +10. Select **Apply**. ## Connect to an App Configuration store -1. After you extract the files on your local system, your simple Spring Boot application is ready for editing. Locate the *pom.xml* file in the root directory of your app. --1. Open the *pom.xml* file in a text editor, and add the Spring Cloud Azure Config starter to the list of `<dependencies>`: -- **Spring Boot 2.6** +Now that you have an App Configuration store, you can use the Spring Cloud Azure Config starter to have your application communicate with the App Configuration store that you create. 
- ```xml - <dependency> - <groupId>com.azure.spring</groupId> - <artifactId>azure-spring-cloud-appconfiguration-config</artifactId> - <version>2.6.0</version> - </dependency> - ``` +To install the Spring Cloud Azure Config starter module, add the following dependency to your *pom.xml* file: - > [!NOTE] - > If you need to support an older version of Spring Boot see our [old library](https://github.com/Azure/azure-sdk-for-jav). +```xml +<dependency> + <groupId>com.azure.spring</groupId> + <artifactId>azure-spring-cloud-appconfiguration-config</artifactId> + <version>2.11.0</version> +</dependency> +``` -1. Create a new Java file named *MessageProperties.java* in the package directory of your app. Add the following lines: +> [!NOTE] +> If you need to support an older version of Spring Boot, see our [old library](https://github.com/Azure/azure-sdk-for-jav). - ```java - package com.example.demo; +### Code the application - import org.springframework.boot.context.properties.ConfigurationProperties; +To use the Spring Cloud Azure Config starter to have your application communicate with the App Configuration store that you create, configure the application by using the following steps. - @ConfigurationProperties(prefix = "config") - public class MessageProperties { - private String message; +1. Create a new Java file named *MessageProperties.java*, and add the following lines: - public String getMessage() { - return message; - } + ```java + import org.springframework.boot.context.properties.ConfigurationProperties; - public void setMessage(String message) { - this.message = message; - } - } - ``` + @ConfigurationProperties(prefix = "config") + public class MessageProperties { + private String message; -1. Create a new Java file named *HelloController.java* in the package directory of your app. Add the following lines: + public String getMessage() { + return message; + } - ```java - package com.example.demo; + public void setMessage(String message) { + this.message = message; + } + } + ``` - import org.springframework.web.bind.annotation.GetMapping; - import org.springframework.web.bind.annotation.RestController; +1. Create a new Java file named *HelloController.java*, and add the following lines: - @RestController - public class HelloController { - private final MessageProperties properties; + ```java + import org.springframework.web.bind.annotation.GetMapping; + import org.springframework.web.bind.annotation.RestController; - public HelloController(MessageProperties properties) { - this.properties = properties; - } + @RestController + public class HelloController { + private final MessageProperties properties; - @GetMapping - public String getMessage() { - return "Message: " + properties.getMessage(); - } - } - ``` + public HelloController(MessageProperties properties) { + this.properties = properties; + } -1. Open the main application Java file, and add `@EnableConfigurationProperties` to enable this feature. + @GetMapping + public String getMessage() { + return "Message: " + properties.getMessage(); + } + } + ``` - ```java - import org.springframework.boot.context.properties.EnableConfigurationProperties; +1. In the main application Java file, add `@EnableConfigurationProperties` to enable the *MessageProperties.java* configuration properties class to take effect and register it with the Spring container. 
- @SpringBootApplication - @EnableConfigurationProperties(MessageProperties.class) - public class DemoApplication { - public static void main(String[] args) { - SpringApplication.run(DemoApplication.class, args); - } - } - ``` + ```java + import org.springframework.boot.context.properties.EnableConfigurationProperties; -1. Open the auto-generated unit test and update to disable Azure App Configuration, or it will try to load from the service when runnings unit tests. + @SpringBootApplication + @EnableConfigurationProperties(MessageProperties.class) + public class DemoApplication { + public static void main(String[] args) { + SpringApplication.run(DemoApplication.class, args); + } + } + ``` - ```java - package com.example.demo; +1. Open the auto-generated unit test and update to disable Azure App Configuration, or it will try to load from the service when running unit tests. - import org.junit.jupiter.api.Test; - import org.springframework.boot.test.context.SpringBootTest; + ```java + import org.junit.jupiter.api.Test; + import org.springframework.boot.test.context.SpringBootTest; - @SpringBootTest(properties = "spring.cloud.azure.appconfiguration.enabled=false") - class DemoApplicationTests { + @SpringBootTest(properties = "spring.cloud.azure.appconfiguration.enabled=false") + class DemoApplicationTests { - @Test - void contextLoads() { - } + @Test + void contextLoads() { + } - } - ``` + } + ``` -1. Create a new file named `bootstrap.properties` under the resources directory of your app, and add the following line to the file. +1. Create a new file named *bootstrap.properties* under the resources directory of your app, and add the following line to the file. - ```CLI - spring.cloud.azure.appconfiguration.stores[0].connection-string= ${APP_CONFIGURATION_CONNECTION_STRING} - ``` + ```properties + spring.cloud.azure.appconfiguration.stores[0].connection-string= ${APP_CONFIGURATION_CONNECTION_STRING} + ``` 1. Set an environment variable named **APP_CONFIGURATION_CONNECTION_STRING**, and set it to the access key to your App Configuration store. At the command line, run the following command and restart the command prompt to allow the change to take effect: - ```cmd - setx APP_CONFIGURATION_CONNECTION_STRING "connection-string-of-your-app-configuration-store" - ``` + ```cmd + setx APP_CONFIGURATION_CONNECTION_STRING "connection-string-of-your-app-configuration-store" + ``` - If you use Windows PowerShell, run the following command: + If you use Windows PowerShell, run the following command: - ```azurepowershell - $Env:APP_CONFIGURATION_CONNECTION_STRING = "connection-string-of-your-app-configuration-store" - ``` + ```azurepowershell + $Env:APP_CONFIGURATION_CONNECTION_STRING = "connection-string-of-your-app-configuration-store" + ``` - If you use macOS or Linux, run the following command: + If you use macOS or Linux, run the following command: - ```cmd - export APP_CONFIGURATION_CONNECTION_STRING='connection-string-of-your-app-configuration-store' - ``` + ```cmd + export APP_CONFIGURATION_CONNECTION_STRING='connection-string-of-your-app-configuration-store' + ``` -## Build and run the app locally +### Build and run the app locally 1. Open command prompt to the root directory and run the following commands to build your Spring Boot application with Maven and run it. - ```cmd - mvn clean package - mvn spring-boot:run - ``` + ```cmd + mvn clean package + mvn spring-boot:run + ``` -2. After your application is running, use *curl* to test your application, for example: +1. 
After your application is running, use *curl* to test your application, for example: - ```cmd - curl -X GET http://localhost:8080/ - ``` + ```cmd + curl -X GET http://localhost:8080/ + ``` - You see the message that you entered in the App Configuration store. + You see the message that you entered in the App Configuration store. ## Clean up resources |
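If you prefer the command line to the portal's Configuration explorer, the same key-value and connection string can be managed with the Azure CLI. A sketch only, assuming `<store-name>` is replaced with the name of your App Configuration store:

```bash
# Create the key-value that the Spring app reads (equivalent to the portal steps).
az appconfig kv set \
  --name <store-name> \
  --key /application/config.message \
  --value Hello \
  --yes

# Retrieve the connection string to use as APP_CONFIGURATION_CONNECTION_STRING.
az appconfig credential list --name <store-name> --query "[0].connectionString" --output tsv
```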
azure-arc | Tutorial Workload Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-workload-management.md | + + Title: 'Tutorial: Workload management in a multi-cluster environment with GitOps' +description: This tutorial walks through typical use-cases that Platform and Application teams face on a daily basis working with Kubernetes workloads in a multi-cluster environment. +keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, ci/cd, devops" ++++ Last updated : 02/23/2023++++# Tutorial: Workload management in a multi-cluster environment with GitOps ++Enterprise organizations that develop cloud native applications face challenges in deploying, configuring, and promoting a great variety of applications and services across a fleet of Kubernetes clusters at scale. This fleet may include Azure Kubernetes Service (AKS) clusters as well as clusters running on other public cloud providers or in on-premises data centers that are connected to Azure through Azure Arc. ++This tutorial walks you through typical scenarios of workload deployment and configuration in a multi-cluster Kubernetes environment. First, you deploy a sample infrastructure with a few GitHub repositories and AKS clusters. Next, you work through a set of use cases where you act as different personas working in the same environment: the Platform Team and the Application Team. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Onboard a new application +> * Schedule an application on the cluster types +> * Promote an application across rollout environments +> * Build and deploy an application +> * Provide platform configurations +> * Add a new cluster type to your environment ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++## Prerequisites ++In order to successfully deploy the sample, you need: ++- [Azure CLI](/cli/azure/install-azure-cli). +- [GitHub CLI](https://cli.github.com) +- [Helm](https://helm.sh/docs/helm/helm_install/) +- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) ++## 1 - Deploy the sample ++To deploy the sample, run the following script: ++```bash +mkdir kalypso && cd kalypso +curl -fsSL -o deploy.sh https://raw.githubusercontent.com/microsoft/kalypso/main/deploy/deploy.sh +chmod 700 deploy.sh +./deploy.sh -c -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2> +``` ++This script may take 10-15 minutes to complete. After it's done, it reports the execution result in the output like this: ++```output +Deployment is complete! ++Created repositories: + - https://github.com/eedorenko/kalypso-control-plane + - https://github.com/eedorenko/kalypso-gitops + - https://github.com/eedorenko/kalypso-app-src + - https://github.com/eedorenko/kalypso-app-gitops ++Created AKS clusters in kalypso-rg resource group: + - control-plane + - drone (Flux based workload cluster) + - large (ArgoCD based workload cluster) + +``` ++> [!NOTE] +> If something goes wrong with the deployment, you can delete the created resources with the following command: +> +> ```bash +> ./deploy.sh -d -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g.
westus2> +> ``` ++### Sample overview ++This deployment script created an infrastructure, shown in the following diagram: +++There are a few Platform Team repositories: ++- [Control Plane](https://github.com/microsoft/kalypso-control-plane): Contains a platform model defined with high level abstractions such as environments, cluster types, applications and services, mapping rules and configurations, and promotion workflows. +- [Platform GitOps](https://github.com/microsoft/kalypso-gitops): Contains final manifests that represent the topology of the fleet, such as which cluster types are available in each environment, what workloads are scheduled on them, and what platform configuration values are set. +- [Services Source](https://github.com/microsoft/kalypso-svc-src): Contains high-level manifest templates of sample dial-tone platform services. +- [Services GitOps](https://github.com/microsoft/kalypso-svc-gitops): Contains final manifests of sample dial-tone platform services to be deployed across the clusters. ++The infrastructure also includes a couple of the Application Team repositories: ++- [Application Source](https://github.com/microsoft/kalypso-app-src): Contains a sample application source code, including Docker files, manifest templates and CI/CD workflows. +- [Application GitOps](https://github.com/microsoft/kalypso-app-gitops): Contains final sample application manifests to be deployed to the deployment targets. ++The script created the following Azure Kubernetes Service (AKS) clusters: ++- `control-plane` - This cluster is a management cluster that doesn't run any workloads. The `control-plane` cluster hosts [Kalypso Scheduler](https://github.com/microsoft/kalypso-scheduler) operator that transforms high level abstractions from the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repository to the raw Kubernetes manifests in the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository. +- `drone` - A sample workload cluster. This cluster has the [GitOps extension](conceptual-gitops-flux2.md) installed and it uses `Flux` to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository. For this sample, the `drone` cluster can represent an Azure Arc-enabled cluster or an AKS cluster with the Flux/GitOps extension. +- `large` - A sample workload cluster. This cluster has `ArgoCD` installed on it to reconcile manifests from the [Platform GitOps](https://github.com/microsoft/kalypso-gitops) repository. ++### Explore Control Plane ++The `control plane` repository contains three branches: `main`, `dev` and `stage`. The `dev` and `stage` branches contain configurations that are specific for `Dev` and `Stage` environments. On the other hand, the `main` branch doesn't represent any specific environment. The content of the `main` branch is common and used by all environments in the fleet. Any change to the `main` branch is a subject to be promoted across environments. For example, a new application or a new template can be promoted to the `Stage` environment only after successful testing on the `Dev` environment. 
++The `main` branch: ++|Folder|Description| +||--| +|.github/workflows| Contains GitHub workflows that implement the promotional flow.| +|.environments| Contains a list of environments with pointers to the branches with the environment configurations.| +|templates| Contains manifest templates for various reconcilers (for example, Flux and ArgoCD) and a template for the workload namespace.| +|workloads| Contains a list of onboarded applications and services with pointers to the corresponding GitOps repositories.| ++The `dev` and `stage` branches: ++|Item|Description| +|-|--| +|cluster-types| Contains a list of available cluster types in the environment. The cluster types are grouped in custom subfolders. Each cluster type is marked with a set of labels. It specifies a reconciler type that it uses to fetch the manifests from GitOps repositories. The subfolders also contain a number of config maps with the platform configuration values available on the cluster types.| +|configs/dev-config.yaml| Contains config maps with the platform configuration values applicable for all cluster types in the environment.| +|scheduling| Contains scheduling policies that map workload deployment targets to the cluster types in the environment.| +|base-repo.yaml| A pointer to the place in the `Control Plane` repository (`main`) from where the scheduler should take templates and workload registrations.| +|gitops-repo.yaml| A pointer to the place in the `Platform GitOps` repository to where the scheduler should PR generated manifests.| ++> [!TIP] +> The folder structure in the `Control Plane` repository doesn't really matter. This tutorial provides a sample of how you can organize files in the repository, but feel free to do it in your own preferred way. The scheduler is interested in the content of the files, rather than where the files are located. ++## 2 - Platform Team: Onboard a new application ++The Application Team runs their software development lifecycle. They build their application and promote it across environments. They're not aware of what cluster types are available in the fleet and where their application will be deployed. But they do know that they want to deploy their application in `Dev` environment for functional and performance testing and in `Stage` environment for UAT testing. + +The Application Team describes this intention in the [workload](https://github.com/microsoft/kalypso-app-src/blob/main/workload/workload.yaml) file in the [Application Source](https://github.com/microsoft/kalypso-app-src) repository: ++```yaml +apiVersion: scheduler.kalypso.io/v1alpha1 +kind: Workload +metadata: + name: hello-world-app + labels: + type: application + family: force +spec: + deploymentTargets: + - name: functional-test + labels: + purpose: functional-test + edge: "true" + environment: dev + manifests: + repo: https://github.com/microsoft/kalypso-app-gitops + branch: dev + path: ./functional-test + - name: performance-test + labels: + purpose: performance-test + edge: "false" + environment: dev + manifests: + repo: https://github.com/microsoft/kalypso-app-gitops + branch: dev + path: ./performance-test + - name: uat-test + labels: + purpose: uat-test + environment: stage + manifests: + repo: https://github.com/microsoft/kalypso-app-gitops + branch: stage + path: ./uat-test +``` ++This file contains a list of three deployment targets. 
These targets are marked with custom labels and point to the folders in [Application GitOps](https://github.com/microsoft/kalypso-app-gitops) repository where the Application Team generates application manifests for each deployment target. ++With this file, Application Team requests Kubernetes compute resources from the Platform Team. In response, the Platform Team must register the application in the [Control Plane](https://github.com/microsoft/kalypso-control-plane) repo. + +To register the application, open a terminal and use the following script: ++```bash +export org=<github org> +export prefix=<prefix> ++# clone the control-plane repo +git clone https://github.com/$org/$prefix-control-plane control-plane +cd control-plane ++# create workload registration file ++cat <<EOF >workloads/hello-world-app.yaml +apiVersion: scheduler.kalypso.io/v1alpha1 +kind: WorkloadRegistration +metadata: + name: hello-world-app + labels: + type: application +spec: + workload: + repo: https://github.com/$org/$prefix-app-src + branch: main + path: workload/ + workspace: kaizen-app-team +EOF ++git add . +git commit -m 'workload registration' +git push +``` ++> [!NOTE] +> For simplicity, this tutorial pushes changes directly to `main`. In practice, you'd create a pull request to submit the changes. ++With that in place, the application is onboarded in the control plane. But the control plane still doesn't know how to map the application deployment targets to the cluster types in the fleet. ++### Define application scheduling policy on Dev ++The Platform Team must define how the application deployment targets will be scheduled on cluster types in the `Dev` environment. To do this, submit scheduling policies for the `functional-test` and `performance-test` deployment targets with the following script: ++```bash +# Switch to dev branch (representing Dev environemnt) in the control-plane folder +git checkout dev +mkdir -p scheduling/kaizen ++# Create a scheduling policy for the functional-test deployment target +cat <<EOF >scheduling/kaizen/functional-test-policy.yaml +apiVersion: scheduler.kalypso.io/v1alpha1 +kind: SchedulingPolicy +metadata: + name: functional-test-policy +spec: + deploymentTargetSelector: + workspace: kaizen-app-team + labelSelector: + matchLabels: + purpose: functional-test + edge: "true" + clusterTypeSelector: + labelSelector: + matchLabels: + restricted: "true" + edge: "true" +EOF ++# Create a scheduling policy for the performance-test deployment target +cat <<EOF >scheduling/kaizen/performance-test-policy.yaml +apiVersion: scheduler.kalypso.io/v1alpha1 +kind: SchedulingPolicy +metadata: + name: performance-test-policy +spec: + deploymentTargetSelector: + workspace: kaizen-app-team + labelSelector: + matchLabels: + purpose: performance-test + edge: "false" + clusterTypeSelector: + labelSelector: + matchLabels: + size: large +EOF ++git add . +git commit -m 'application scheduling policies' +git config pull.rebase false +git pull --no-edit +git push +``` ++The first policy states that all deployment targets from the `kaizen-app-team` workspace, marked with labels `purpose: functional-test` and `edge: "true"` should be scheduled on all environment cluster types that are marked with label `restricted: "true"`. You can treat a workspace as a group of applications produced by an application team. 
++The second policy states that all deployment targets from the `kaizen-app-team` workspace, marked with labels `purpose: performance-test` and `edge: "false"` should be scheduled on all environment cluster types that are marked with label `size: "large"`. ++This push to the `dev` branch triggers the scheduling process and creates a PR to the `dev` branch in the `Platform GitOps` repository: +++Besides `Promoted_Commit_id`, which is just tracking information for the promotion CD flow, the PR contains assignment manifests. The `functional-test` deployment target is assigned to the `drone` cluster type, and the `performance-test` deployment target is assigned to the `large` cluster type. Those manifests will land in `drone` and `large` folders that contain all assignments to these cluster types in the `Dev` environment. + +The `Dev` environment also includes `command-center` and `small` cluster types: ++ :::image type="content" source="media/tutorial-workload-management/dev-cluster-types.png" alt-text="Screenshot showing cluster types in the Dev environment."::: ++However, only the `drone` and `large` cluster types were selected by the scheduling policies that you defined. ++### Understand deployment target assignment manifests ++Before you continue, take a closer look at the generated assignment manifests for the `functional-test` deployment target. There are `namespace.yaml`, `config.yaml` and `reconciler.yaml` manifest files. ++`namespace.yaml` defines a namespace that will be created on any `drone` cluster where the `hello-world` application runs. + +```yaml +apiVersion: v1 +kind: Namespace +metadata: + labels: + deploymentTarget: hello-world-app-functional-test + environment: dev + someLabel: some-value + workload: hello-world-app + workspace: kaizen-app-team + name: dev-kaizen-app-team-hello-world-app-functional-test +``` ++`config.yaml` contains all platform configuration values available on any `drone` cluster that the application can use in the `Dev` environment. + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: platform-config + namespace: dev-kaizen-app-team-hello-world-app-functional-test +data: + CLUSTER_NAME: Drone + DATABASE_URL: mysql://restricted-host:3306/mysqlrty123 + ENVIRONMENT: Dev + REGION: East US + SOME_COMMON_ENVIRONMENT_VARIABLE: "false" +``` ++`reconciler.yaml` contains Flux resources that a `drone` cluster uses to fetch application manifests, prepared by the Application Team for the `functional-test` deployment target. + +```yaml +apiVersion: source.toolkit.fluxcd.io/v1beta2 +kind: GitRepository +metadata: + name: hello-world-app-functional-test + namespace: flux-system +spec: + interval: 30s + ref: + branch: dev + secretRef: + name: repo-secret + url: https://github.com/<github org>/<prefix>-app-gitops ++apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 +kind: Kustomization +metadata: + name: hello-world-app-functional-test + namespace: flux-system +spec: + interval: 30s + path: ./functional-test + prune: true + sourceRef: + kind: GitRepository + name: hello-world-app-functional-test + targetNamespace: dev-kaizen-app-team-hello-world-app-functional-test +``` ++> [!NOTE] +> The `control plane` defines that the `drone` cluster type uses `Flux` to reconcile manifests from the application GitOps repositories. The `large` cluster type, on the other hand, reconciles manifests with `ArgoCD`. Therefore `reconciler.yaml` for the `performance-test` deployment target will look differently and contain `ArgoCD` resources. 
++### Promote application to Stage ++Once you approve and merge the PR to the `Platform GitOps` repository, the `drone` and `large` AKS clusters that represent corresponding cluster types start fetching the assignment manifests. The `drone` cluster has [GitOps extension](conceptual-gitops-flux2.md) installed, pointing to the `Platform GitOps` repository. It reports its `compliance` status to Azure Resource Graph: +++The PR merging event starts a GitHub workflow `checkpromote` in the `control plane` repository. This workflow waits until all clusters with the [GitOps extension](conceptual-gitops-flux2.md) installed that are looking at the `dev` branch in the `Platform GitOps` repository are compliant with the PR commit. In this tutorial, the only such cluster is `drone`. +++Once the `checkpromote` is successful, it starts the `cd` workflow that promotes the change (application registration) to the `Stage` environment. For better visibility, it also updates the git commit status in the `control plane` repository: ++ + :::image type="content" source="media/tutorial-workload-management/dev-git-commit-status.png" alt-text="Screenshot showing git commit status deploying to dev."::: ++> [!NOTE] +> If the `drone` cluster fails to reconcile the assignment manifests for any reason, the promotion flow will fail. The commit status will be marked as failed, and the application registration will not be promoted to the `Stage` environment. ++Next, configure a scheduling policy for the `uat-test` deployment target in the stage environment: ++```bash +# Switch to stage branch (representing Stage environemnt) in the control-plane folder +git checkout stage +mkdir -p scheduling/kaizen ++# Create a scheduling policy for the uat-test deployment target +cat <<EOF >scheduling/kaizen/uat-test-policy.yaml +apiVersion: scheduler.kalypso.io/v1alpha1 +kind: SchedulingPolicy +metadata: + name: uat-test-policy +spec: + deploymentTargetSelector: + workspace: kaizen-app-team + labelSelector: + matchLabels: + purpose: uat-test + clusterTypeSelector: + labelSelector: {} +EOF ++git add . +git commit -m 'application scheduling policies' +git config pull.rebase false +git pull --no-edit +git push +``` ++The policy states that all deployment targets from the `kaizen-app-team` workspace marked with labels `purpose: uat-test` should be scheduled on all cluster types defined in the environment. ++Pushing this policy to the `stage` branch triggers the scheduling process, which creates a PR with the assignment manifests to the `Platform GitOps` repository, similar to those for the `Dev` environment. ++As in the case with the `Dev` environment, after reviewing and merging the PR to the `Platform GitOps` repository, the `checkpromote` workflow in the `control plane` repository waits until clusters with the [GitOps extension](conceptual-gitops-flux2.md) (`drone`) reconcile the assignment manifests. ++ :::image type="content" source="media/tutorial-workload-management/check-promote-to-stage.png" alt-text="Screenshot showing promotion to stage."::: ++On successful execution, the commit status is updated. +++## 3 - Application Dev Team: Build and deploy application ++The Application Team regularly submits pull requests to the `main` branch in the `Application Source` repository. Once a PR is merged to `main`, it starts a CI/CD workflow. In this tutorial, the workflow will be started manually. + + Go to the `Application Source` repository in GitHub. On the `Actions` tab, select `Run workflow`. 
+++The workflow performs the following actions: ++- Builds the application Docker image and pushes it to the GitHub repository package. +- Generates manifests for the `functional-test` and `performance-test` deployment targets. It uses configuration values from the `dev-configs` branch. The generated manifests are added to a pull request and auto-merged in the `dev` branch. +- Generates manifests for the `uat-test` deployment target. It uses configuration values from the `stage-configs` branch. +++The generated manifests are added to a pull request to the `stage` branch waiting for approval: +++To test the application manually on the `Dev` environment before approving the PR to the `Stage` environment, first verify how the `functional-test` application instance works on the `drone` cluster: ++```bash +kubectl port-forward svc/hello-world-service -n dev-kaizen-app-team-hello-world-app-functional-test 9090:9090 --context=drone ++# output: +# Forwarding from 127.0.0.1:9090 -> 9090 +# Forwarding from [::1]:9090 -> 9090 ++``` ++While this command is running, open `localhost:9090` in your browser. You'll see the following greeting page: +++The next step is to check how the `performance-test` instance works on the `large` cluster: ++```bash +kubectl port-forward svc/hello-world-service -n dev-kaizen-app-team-hello-world-app-performance-test 8080:8080 --context=large ++# output: +# Forwarding from 127.0.0.1:8080 -> 8080 +# Forwarding from [::1]:8080 -> 8080 ++``` ++This time, use `8080` port and open `localhost:8080` in your browser. ++Once you're satisfied with the `Dev` environment, approve and merge the PR to the `Stage` environment. After that, test the `uat-test` application instance in the `Stage` environment on both clusters. ++Run the following command for the `drone` cluster and open `localhost:8001` in your browser: + +```bash +kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8001:8000 --context=drone +``` ++Run the following command for the `large` cluster and open `localhost:8002` in your browser: ++ ```bash +kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8002:8000 --context=large +``` ++> [!NOTE] +> It may take up to three minutes to reconcile the changes from the application GitOps repository on the `large` cluster. ++The application instance on the `large` cluster shows the following greeting page: ++ :::image type="content" source="media/tutorial-workload-management/stage-greeting-page.png" alt-text="Screenshot showing the greeting page on stage."::: ++## 4 - Platform Team: Provide platform configurations ++Applications in the fleet grab the data from the very same database in both `Dev` and `Stage` environments. Let's change it and configure `west-us` clusters to provide a different database url for the applications working in the `Stage` environment: ++```bash +# Switch to stage branch (representing Stage environemnt) in the control-plane folder +git checkout stage ++# Update a config map with the configurations for west-us clusters +cat <<EOF >cluster-types/west-us/west-us-config.yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: west-us-config + labels: + platform-config: "true" + region: west-us +data: + REGION: West US + DATABASE_URL: mysql://west-stage:8806/mysql2 +EOF ++git add . 
+git commit -m 'database url configuration' +git config pull.rebase false +git pull --no-edit +git push +``` ++The scheduler scans all config maps in the environment and collects values for each cluster type based on label matching. Then, it puts a `platform-config` config map in every deployment target folder in the `Platform GitOps` repository. The `platform-config` config map contains all of the platform configuration values that the workload can use on this cluster type in this environment. ++In a few seconds, a new PR to the `stage` branch in the `Platform GitOps` repository appears: +++Approve the PR and merge it. ++The `large` cluster is handled by ArgoCD, which, by default, is configured to reconcile every three minutes. This cluster doesn't report its compliance state to Azure like the clusters such as `drone` that have the [GitOps extension](conceptual-gitops-flux2.md). However, you can still monitor the reconciliation state on the cluster with ArgoCD UI. ++To access the ArgoCD UI on the `large` cluster, run the following command: ++```bash +# Get ArgoCD username and password +echo "ArgoCD username: admin, password: $(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" --context large| base64 -d)" +# output: +# ArgoCD username: admin, password: eCllTELZdIZfApPL ++kubectl port-forward svc/argocd-server 8080:80 -n argocd --context large +``` ++Next, open `localhost:8080` in your browser and provide the username and password printed by the script. You'll see a web page similar to this one: ++ :::image type="content" source="media/tutorial-workload-management/argocd-ui.png" alt-text="Screenshot showing the Argo CD user interface web page." lightbox="media/tutorial-workload-management/argocd-ui.png"::: ++Select the `stage` tile to see more details on the reconciliation state from the `stage` branch to this cluster. You can select the `SYNC` buttons to force the reconciliation and speed up the process. ++Once the new configuration has arrived to the cluster, check the `uat-test` application instance at `localhost:8002` after +running the following commands: ++```bash +kubectl rollout restart deployment hello-world-deployment -n stage-kaizen-app-team-hello-world-app-uat-test --context=large +kubectl port-forward svc/hello-world-service -n stage-kaizen-app-team-hello-world-app-uat-test 8002:8000 --context=large +``` ++You'll see the updated database url: +++## 5 - Platform Team: Add cluster type to environment ++Currently, only `drone` and `large` cluster types are included in the `Stage` environment. Let's include the `small` cluster type to `Stage` as well. Even though there's no physical cluster representing this cluster type, you can see how the scheduler reacts to this change. ++```bash +# Switch to stage branch (representing Stage environemnt) in the control-plane folder +git checkout stage ++# Add "small" cluster type in west-us region +mkdir -p cluster-types/west-us/small +cat <<EOF >cluster-types/west-us/small/small-cluster-type.yaml +apiVersion: scheduler.kalypso.io/v1alpha1 +kind: ClusterType +metadata: + name: small + labels: + region: west-us + size: small +spec: + reconciler: argocd + namespaceService: default +EOF ++git add . +git commit -m 'add new cluster type' +git config pull.rebase false +git pull --no-edit +git push +``` ++In a few seconds, the scheduler submits a PR to the `Platform GitOps` repository. 
According to the `uat-test-policy` that you created, it assigns the `uat-test` deployment target to the new cluster type, as it's supposed to work on all available cluster types in the environment. +++## Clean up resources +When no longer needed, delete the resources that you created for this tutorial. To do so, run the following command: ++```bash +# In kalypso folder +./deploy.sh -d -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2> +``` ++## Next steps ++In this tutorial, you have performed tasks for a few of the most common workload management scenarios in a multi-cluster Kubernetes environment. There are many other scenarios you may want to explore. Continue to use the sample and see how you can implement use cases that are most common in your daily activities. ++To understand the underlying concepts and mechanics more deeply, refer to the following resources: ++> [!div class="nextstepaction"] +> - [Workload Management in Multi-cluster environment with GitOps](https://github.com/microsoft/kalypso) + |
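To confirm from the command line that the sample clusters are reconciling as described, the Flux objects on the `drone` cluster and the GitOps configurations it reports to Azure can be inspected. A sketch only, assuming the `k8s-configuration` Azure CLI extension is installed and `<resource-group>` is the resource group created by the deployment script (for example, kalypso-rg):

```bash
# Check Flux reconciliation status directly on the drone cluster.
kubectl get gitrepositories,kustomizations -n flux-system --context=drone

# List the Flux configurations that the GitOps extension reports for the cluster.
az k8s-configuration flux list \
  --resource-group <resource-group> \
  --cluster-name drone \
  --cluster-type managedClusters \
  --output table
```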
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | This page is updated monthly, so revisit it regularly. If you're looking for ite ### Fixed -- The extension service now correctly restarts when the Azure Connected Machine agent is being upgraded by Update Management Center+- The extension service now correctly restarts when the Azure Connected Machine agent is upgraded by Update Management Center - Resolved issues with the hybrid connectivity component that could result in the "himds" service crashing, the server showing as "disconnected" in Azure, and connectivity issues with Windows Admin Center and SSH - Improved handling of resource move scenarios that could impact Windows Admin Center and SSH connectivity - Improved reliability when changing the [agent configuration mode](security-overview.md#local-agent-security-controls) from "monitor" mode to "full" mode. - Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Microsoft Sentinel DNS extension to improve log collection reliability-- Tenant IDs are now validated during onboarding for correctness+- Tenant IDs are better validated when connecting the server ## Version 1.26 - January 2023 > [!NOTE]-> Version 1.26 is only available for Linux operating systems. The most recent Windows agent version is 1.25. +> Version 1.26 is only available for Linux operating systems. ### Fixed |
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 11/18/2022 Last updated : 01/25/2023 # Connected Machine agent prerequisites -This topic describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have additional requirements. +This topic describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have more requirements. ## Supported environments Azure Arc-enabled servers support the installation of the Connected Machine agen * Azure Stack HCI * Other cloud environments -Azure Arc-enabled servers do not support installing the agent on virtual machines running in Azure, or on virtual machines running on Azure Stack Hub or Azure Stack Edge, as they are already modeled as Azure VMs and able to be managed directly in Azure. +You shouldn't install Azure Arc on virtual machines hosted in Azure, Azure Stack Hub, or Azure Stack Edge, as they already have similar capabilities. You can, however, [use an Azure VM to simulate an on-premises environment](plan-evaluate-on-azure-virtual-machine.md) for testing purposes, only. ++Take extra care when using Azure Arc on systems that are: ++* Cloned +* Restored from backup as a second instance of the server +* Used to create a "golden image" from which other virtual machines are created ++If two agents use the same configuration, you will encounter inconsistent behaviors when both agents try to act as one Azure resource. The best practice for these situations is to use an automation tool or script to onboard the server to Azure Arc after it has been cloned, restored from backup, or created from a golden image. > [!NOTE]-> For additional information on using Arc-enabled servers in VMware environments, see the [VMware FAQ](vmware-faq.md). +> For additional information on using Azure Arc-enabled servers in VMware environments, see the [VMware FAQ](vmware-faq.md). ## Supported operating systems -The following versions of the Windows and Linux operating system are officially supported for the Azure Connected Machine agent. Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, are not supported operating environments. +Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. Azure Arc does not run on x86 (32-bit) or ARM-based architectures. * Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022 * Both Desktop and Server Core experiences are supported- * Azure Editions are supported when running as a virtual machine on Azure Stack HCI + * Azure Editions are supported on Azure Stack HCI +* Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) * Windows IoT Enterprise * Azure Stack HCI * Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS The following versions of the Windows and Linux operating system are officially * Amazon Linux 2 * Oracle Linux 7 and 8 -> [!NOTE] -> On Linux, Azure Arc-enabled servers install several daemon processes. 
We only support using systemd to manage these processes. In some environments, systemd may not be installed or available, in which case Arc-enabled servers are not supported, even if the distribution is otherwise supported. These environments include **Windows Subsystem for Linux** (WSL) and most container-based systems, such as Kubernetes or Docker. The Azure Connected Machine agent can be installed on the node that runs the containers but not inside the containers themselves. +### Client operating system guidance -> [!WARNING] -> If the Linux hostname or Windows computer name uses a reserved word or trademark, attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md). +The Azure Arc service and Azure Connected Machine Agent are supported on Windows 10 and 11 client operating systems only when using those computers in a server-like environment. That is, the computer should always be: -> [!NOTE] -> While Azure Arc-enabled servers support Amazon Linux, the following features are not supported by this distribution: -> -> * The Dependency agent used by Azure Monitor VM insights -> * Azure Automation Update Management +* Connected to the internet +* Connected to a power source +* Powered on ++For example, a computer running Windows 11 that's responsible for digital signage, point-of-sale solutions, and general back office management tasks is a good candidate for Azure Arc. End-user productivity machines, such as a laptop, which may go offline for long periods of time, shouldn't use Azure Arc and instead should consider [Microsoft Intune](/mem/intune) or [Microsoft Endpoint Configuration Manager](/mem/configmgr). ++### Short-lived servers and virtual desktop infrastructure ++Microsoft doesn't recommend running Azure Arc on short-lived (ephemeral) servers or virtual desktop infrastructure (VDI) VMs. Azure Arc is designed for long-term management of servers and isn't optimized for scenarios where you are regularly creating and deleting servers. For example, Azure Arc doesn't know if the agent is offline due to planned system maintenance or if the VM was deleted, so it won't automatically clean up server resources that stopped sending heartbeats. As a result, you could encounter a conflict if you re-create the VM with the same name and there's an existing Azure Arc resource with the same name. ++[Azure Virtual Desktop on Azure Stack HCI](../../virtual-desktop/azure-stack-hci-overview.md) doesn't use short-lived VMs and supports running Azure Arc in the desktop VMs. ## Software requirements Windows operating systems: -* NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers). -* Windows PowerShell 4.0 or later is required. No action is required for Windows Server 2012 R2 and above. For Windows Server 2008 R2 SP1, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616). +* NET Framework 4.6 or later. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers). +* Windows PowerShell 4.0 or later (already included with Windows Server 2012 R2 and later). For Windows Server 2008 R2 SP1, [Download Windows Management Framework 5.1.](https://www.microsoft.com/download/details.aspx?id=54616). 
Linux operating systems: Linux operating systems: ## Required permissions -The following Azure built-in roles are required for different aspects of managing connected machines: +You'll need the following Azure built-in roles for different aspects of managing connected machines: -* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group in which the machines will be managed. +* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group where you're managing the servers. * To read, modify, and delete a machine, you must have the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) role for the resource group.-* To select a resource group from the drop-down list when using the **Generate script** method, as well as the permissions needed to onboard machines, listed above, you must additionally have the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role which includes **Reader** access). +* To select a resource group from the drop-down list when using the **Generate script** method, you'll also need the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role that includes **Reader** access). ## Azure subscription and service limits There are no limits to the number of Azure Arc-enabled servers you can register in any single resource group, subscription or tenant. -Each Azure Arc-enabled server is associated with an Azure Active Directory object and will count against your directory quota. See [Azure AD service limits and restrictions](../../active-directory/enterprise-users/directory-service-limits-restrictions.md) for information about the maximum number of objects you can have in an Azure AD directory. +Each Azure Arc-enabled server is associated with an Azure Active Directory object and counts against your directory quota. See [Azure AD service limits and restrictions](../../active-directory/enterprise-users/directory-service-limits-restrictions.md) for information about the maximum number of objects you can have in an Azure AD directory. ## Azure resource providers To use Azure Arc-enabled servers, the following [Azure resource providers](../.. 
* **Microsoft.HybridConnectivity** * **Microsoft.AzureArcData** (if you plan to Arc-enable SQL Servers) -If these resource providers are not already registered, you can register them using the following commands: +You can register the resource providers using the following commands: Azure PowerShell: Set-AzContext -SubscriptionId [subscription you want to onboard] Register-AzResourceProvider -ProviderNamespace Microsoft.HybridCompute Register-AzResourceProvider -ProviderNamespace Microsoft.GuestConfiguration Register-AzResourceProvider -ProviderNamespace Microsoft.HybridConnectivity+Register-AzResourceProvider -ProviderNamespace Microsoft.AzureArcData ``` Azure CLI: az account set --subscription "{Your Subscription Name}" az provider register --namespace 'Microsoft.HybridCompute' az provider register --namespace 'Microsoft.GuestConfiguration' az provider register --namespace 'Microsoft.HybridConnectivity'+az provider register --namespace 'Microsoft.AzureArcData' ``` You can also register the resource providers in the [Azure portal](../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal). |
azure-functions | Durable Functions Instance Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md | public HttpResponseMessage httpStartAndWait( try { String timeoutString = req.getQueryParameters().get("timeout"); Integer timeoutInSeconds = Integer.parseInt(timeoutString);- OrchestrationMetadata orchestration = client.waitForInstanceStart( + OrchestrationMetadata orchestration = client.waitForInstanceCompletion( instanceId, Duration.ofSeconds(timeoutInSeconds), true /* getInputsAndOutputs */); |
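The change above swaps `waitForInstanceStart` for `waitForInstanceCompletion`, so the HTTP trigger blocks until the orchestration finishes rather than merely starts. The following is a minimal Java sketch of that pattern with the Durable Functions Java client; the injected `DurableTaskClient`, the 30-second timeout, and the metadata accessor names are assumptions to verify against your client library version.

```java
import com.microsoft.durabletask.DurableTaskClient;
import com.microsoft.durabletask.OrchestrationMetadata;

import java.time.Duration;
import java.util.concurrent.TimeoutException;

public class WaitForCompletionSketch {
    // Blocks until the orchestration completes (or the timeout elapses),
    // then returns a short status string suitable for an HTTP response body.
    static String waitAndSummarize(DurableTaskClient client, String instanceId) {
        try {
            OrchestrationMetadata metadata = client.waitForInstanceCompletion(
                    instanceId,
                    Duration.ofSeconds(30),
                    true /* getInputsAndOutputs */);
            // Accessor names assumed from the client library.
            return metadata.getInstanceId() + " finished with status "
                    + metadata.getRuntimeStatus();
        } catch (TimeoutException e) {
            // The orchestration is still running; callers typically return 202 here.
            return instanceId + " did not complete within the timeout";
        }
    }
}
```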
azure-government | Compare Azure Government Global Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md | The following Azure Database for PostgreSQL **features aren't currently availabl - Advanced Threat Protection - Backup with long-term retention -### [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) --The following Azure SQL Managed Instance **features aren't currently available** in Azure Government: --- Long-term backup retention- ## Developer tools This section outlines variations and considerations when using Developer tools in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=load-testing,app-configuration,devtest-lab,lab-services,azure-devops®ions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). |
azure-maps | How To Create Data Registries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md | To create a data registry: 1. Once you have the body of your HTTP request ready, execute the following **HTTP PUT request**: ```http- https://us.atlas.microsoft.com/dataRegistries/{udid}?api-version=2022-12-01-preview&subscription-key={Azure-Maps-Subscription-key} + https://us.atlas.microsoft.com/dataRegistries/{udid}?api-version=2022-12-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} ``` Once you've uploaded one or more files to an Azure storage account, created and Use the `udid` to get the content of a file registered in an Azure Maps account: ```http-https://us.atlas.microsoft.com/dataRegistries/{udid}/content?api-version=2022-12-01-preview&subscription-key={Azure-Maps-Subscription-key} +https://us.atlas.microsoft.com/dataRegistries/{udid}/content?api-version=2022-12-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} ``` The contents of the file will appear in the body of the response. For example, a text based GeoJSON file will appear similar to the following example: |
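As a companion to the raw HTTP call above, here is a minimal Java sketch (JDK 11+ `java.net.http`) that fetches the content of a registered file by its `udid`. The `udid` value and the subscription-key environment variable are placeholders you would substitute with your own.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetDataRegistryContent {
    public static void main(String[] args) throws Exception {
        String udid = "your-udid";                                  // placeholder
        String key = System.getenv("AZURE_MAPS_SUBSCRIPTION_KEY");  // placeholder env var

        // Same endpoint as the HTTP example above.
        String url = "https://us.atlas.microsoft.com/dataRegistries/" + udid
                + "/content?api-version=2022-12-01-preview&subscription-key=" + key;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // The body of a successful response is the registered file itself,
        // for example a GeoJSON document.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```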
azure-maps | How To Creator Wayfinding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md | The Azure Maps Creator [wayfinding service][wayfinding service] allows you to na > > - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services][how to manage access to creator services]. > - In the URL examples in this article you will need to:-> - Replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +> - Replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. > - Replace `{datasetId`} with your `datasetId`. For more information, see the [Check the dataset creation status][check dataset creation status] section of the *Use Creator to create indoor maps* tutorial. ## Create a routeset |
azure-maps | How To Dataset Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md | Azure Maps Creator enables users to import their indoor map data in GeoJSON form >[!IMPORTANT] > > - This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services).-> - In the URL examples in this article you will need to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +> - In the URL examples in this article you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ## Create dataset using the GeoJSON package |
azure-maps | How To Render Custom Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md | To get a static image with custom pins and labels: 4. Select the **GET** HTTP method. -5. Enter the following URL (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +5. Enter the following URL (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12¢er=-73.98,%2040.77&pins=custom%7Cla15+50%7Cls12%7Clc003b61%7C%7C%27CentralPark%27-73.9657974+40.781971%7C%7Chttps%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FAzureMapsCodeSamples%2Fmaster%2FAzureMapsCodeSamples%2FCommon%2Fimages%2Ficons%2Fylw-pushpin.png To upload pins and path data: 4. Select the **POST** HTTP method. -5. Enter the following URL (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +5. Enter the following URL (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson To check the status of the data upload and retrieve its unique ID (`udid`): 4. Select the **GET** HTTP method. -5. Enter the `status URL` you copied in [Upload pins and path data](#upload-pins-and-path-data). The request should look like the following URL (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +5. Enter the `status URL` you copied in [Upload pins and path data](#upload-pins-and-path-data). The request should look like the following URL (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://us.atlas.microsoft.com/mapData/operations/{statusUrl}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To render the uploaded pins and path data on the map: 4. Select the **GET** HTTP method. -5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `udid` with the `udid` of the uploaded data): +5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded data): ```HTTP https://atlas.microsoft.com/map/static/png?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&layer=basic&style=main&zoom=12¢er=-73.96682739257812%2C40.78119135317995&pins=default|la-35+50|ls12|lc003C62|co9B2F15||'Times Square'-73.98516297340393 40.758781646381024|'Central Park'-73.96682739257812 40.78119135317995&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.30||udid-{udId} To render a polygon with color and opacity: 4. Select the **GET** HTTP method. -5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +5. 
Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&sku=S1&zoom=14&height=500&Width=500¢er=-74.040701, 40.698666&path=lc0000FF|fc0000FF|lw3|la0.80|fa0.50||-74.03995513916016 40.70090237454063|-74.04082417488098 40.70028420372218|-74.04113531112671 40.70049568385827|-74.04298067092896 40.69899904076542|-74.04271245002747 40.69879568992435|-74.04367804527283 40.6980961582905|-74.04364585876465 40.698055487620714|-74.04368877410889 40.698022951066996|-74.04168248176573 40.696444909137|-74.03901100158691 40.69837271818651|-74.03824925422668 40.69837271818651|-74.03809905052185 40.69903971085914|-74.03771281242369 40.699340668780984|-74.03940796852112 40.70058515602143|-74.03948307037354 40.70052821920425|-74.03995513916016 40.70090237454063 To render a circle and pushpins with custom labels: 4. Select the **GET** HTTP method. -5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +5. Enter the following URL to the [Render Service](/rest/api/maps/render/get-map-image) (replace {`Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://atlas.microsoft.com/map/static/png?api-version=1.0&style=main&layer=basic&zoom=14&height=700&Width=700¢er=-122.13230609893799,47.64599069048016&path=lcFF0000|lw2|la0.60|ra1000||-122.13230609893799 47.64599069048016&pins=default|la15+50|al0.66|lc003C62|co002D62||'Microsoft Corporate Headquarters'-122.14131832122801 47.64690503939462|'Microsoft Visitor Center'-122.136828 47.642224|'Microsoft Conference Center'-122.12552547454833 47.642940335653996|'Microsoft The Commons'-122.13687658309935 47.64452336193245&subscription-key={Your-Azure-Maps-Subscription-key} |
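The Render Service requests above return a PNG, so the response body is binary rather than JSON. Below is a minimal Java sketch that saves a static map image to disk; the query string is abbreviated (add `pins=` and `path=` parameters as in the examples above) and the key is a placeholder.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class DownloadStaticMap {
    public static void main(String[] args) throws Exception {
        String key = System.getenv("AZURE_MAPS_SUBSCRIPTION_KEY"); // placeholder env var

        // Center and zoom only; extend with pins= and path= as needed.
        String url = "https://atlas.microsoft.com/map/static/png?api-version=1.0"
                + "&layer=basic&style=main&zoom=12"
                + "&center=-73.98,40.77"
                + "&subscription-key=" + key;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // Write the PNG body straight to a local file.
        HttpResponse<Path> response = client.send(
                request, HttpResponse.BodyHandlers.ofFile(Path.of("map.png")));
        System.out.println("Saved " + response.body() + " (HTTP " + response.statusCode() + ")");
    }
}
```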
azure-maps | How To Request Elevation Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-elevation-data.md | To request elevation data in raster tile format using the Postman app: ``` >[!Important]- >For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. + >For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. 5. Select the **Send** button. To create the request: 3. Enter a **Request name** for the request. -4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```http https://atlas.microsoft.com/elevation/point/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&points=-73.998672,40.714728|150.644,-34.397 To create the request: } ``` -6. Now, we'll call the [Post Data for Points API](/rest/api/maps/elevation/postdataforpoints) to get elevation data for the same two points. On the **Builder** tab, select the **POST** HTTP method and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +6. Now, we'll call the [Post Data for Points API](/rest/api/maps/elevation/postdataforpoints) to get elevation data for the same two points. On the **Builder** tab, select the **POST** HTTP method and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```http https://atlas.microsoft.com/elevation/point/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0 To create the request: 3. Enter a **Request name**. -4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```http https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&lines=-73.998672,40.714728|150.644,-34.397&samples=5 To create the request: } ``` -9. Now, we'll call the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. On the **Builder** tab, select the **POST** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +9. Now, we'll call the [Post Data For Polyline API](/rest/api/maps/elevation/postdataforpolyline) to get elevation data for the same three points. On the **Builder** tab, select the **POST** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```http https://atlas.microsoft.com/elevation/line/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&samples=5 To create the request: 3. Enter a **Request name**. -4. 
On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +4. On the **Builder** tab, select the **GET** HTTP method, and then enter the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```http https://atlas.microsoft.com/elevation/lattice/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&bounds=-121.66853362143818, 46.84646479863713,-121.65853362143818, 46.85646479863713&rows=2&columns=3 |
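The Post Data for Points call above takes the coordinates in the request body instead of the query string. Here is a minimal Java sketch of that request; the JSON array of `lon`/`lat` pairs reflects the body shape as I understand it, so treat the body format (and the placeholder key) as assumptions to check against the API reference.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostElevationPoints {
    public static void main(String[] args) throws Exception {
        String key = System.getenv("AZURE_MAPS_SUBSCRIPTION_KEY"); // placeholder env var

        String url = "https://atlas.microsoft.com/elevation/point/json"
                + "?api-version=1.0&subscription-key=" + key;

        // Same two points as the GET example above, sent as a JSON body (assumed shape).
        String body = "[{\"lon\": -73.998672, \"lat\": 40.714728},"
                + " {\"lon\": 150.644, \"lat\": -34.397}]";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```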
azure-maps | How To Request Weather Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md | In this example, you'll use the [Get Current Conditions API](/rest/api/maps/weat 1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ```http https://atlas.microsoft.com/weather/currentConditions/json?api-version=1.0&query=47.60357,-122.32945&subscription-key={Your-Azure-Maps-Subscription-key} In this example, you'll use the [Get Severe Weather Alerts API](/rest/api/maps/w 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ```http https://atlas.microsoft.com/weather/severe/alerts/json?api-version=1.0&query=41.161079,-104.805450&subscription-key={Your-Azure-Maps-Subscription-key} In this example, you'll use the [Get Daily Forecast API](/rest/api/maps/weather/ 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ```http https://atlas.microsoft.com/weather/forecast/daily/json?api-version=1.0&query=47.60357,-122.32945&duration=5&subscription-key={Your-Azure-Maps-Subscription-key} In this example, you'll use the [Get Hourly Forecast API](/rest/api/maps/weather 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. 
```http https://atlas.microsoft.com/weather/forecast/hourly/json?api-version=1.0&query=47.60357,-122.32945&duration=12&subscription-key={Your-Azure-Maps-Subscription-key} In this example, you'll use the [Get Minute Forecast API](/rest/api/maps/weather 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ```http https://atlas.microsoft.com/weather/forecast/minute/json?api-version=1.0&query=47.60357,-122.32945&interval=15&subscription-key={Your-Azure-Maps-Subscription-key} |
azure-maps | How To Search For Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md | In this example, we'll use the Azure Maps [Get Search Address API](/rest/api/map 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. In this request, we're searching for a specific address: `400 Braod St, Seattle, WA 98109`. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +2. Select the **GET** HTTP method in the builder tab and enter the following URL. In this request, we're searching for a specific address: `400 Braod St, Seattle, WA 98109`. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ```http https://atlas.microsoft.com/search/address/json?&subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&language=en-US&query=400 Broad St, Seattle, WA 98109 In this example, we'll use Fuzzy Search to search the entire world for `pizza`. 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ```http https://atlas.microsoft.com/search/fuzzy/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza In this example, we'll be making reverse searches using a few of the optional pa 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. The request should look like the following URL: +2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. The request should look like the following URL: ```http https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700&number=1 In this example, we'll search for a cross street based on the coordinates of an 1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. 
The request should look like the following URL: +2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. The request should look like the following URL: ```http https://atlas.microsoft.com/search/address/reverse/crossstreet/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700 |
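Because the free-form address query shown in this row contains spaces and commas, it's worth seeing how the query gets URL-encoded when you build the request outside Postman. A minimal Java sketch follows, with the subscription key as a placeholder.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class SearchAddress {
    public static void main(String[] args) throws Exception {
        String key = System.getenv("AZURE_MAPS_SUBSCRIPTION_KEY"); // placeholder env var
        String query = "400 Broad St, Seattle, WA 98109";

        // URL-encode the free-form address; swap '+' for '%20' so spaces stay unambiguous.
        String encodedQuery = URLEncoder.encode(query, StandardCharsets.UTF_8)
                .replace("+", "%20");

        String url = "https://atlas.microsoft.com/search/address/json?api-version=1.0"
                + "&language=en-US"
                + "&query=" + encodedQuery
                + "&subscription-key=" + key;

        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Print the raw JSON; matches come back in a results array.
        System.out.println(response.body());
    }
}
```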
azure-maps | Indoor Map Dynamic Styling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md | In the next section, we'll set the occupancy *state* of office `UNIT26` to `true 3. Enter a **Request name** for the request, such as *POST Data Upload*. -4. Enter the following URL to the [Feature Update States API](/rest/api/maps/v2/feature-state/update-states) (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `statesetId` with the `statesetId`): +4. Enter the following URL to the [Feature Update States API](/rest/api/maps/v2/feature-state/update-states) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `statesetId` with the `statesetId`): ```http https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} |
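To complement the Feature Update States URL above, here is a minimal Java sketch that sends the occupancy change for `UNIT26`. The `occupied` key name, the PUT verb, and the body shape are assumptions drawn from the indoor-maps tutorials, so verify them against the Feature State API reference; the stateset ID and key are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateFeatureState {
    public static void main(String[] args) throws Exception {
        String key = System.getenv("AZURE_MAPS_SUBSCRIPTION_KEY"); // placeholder env var
        String statesetId = "your-stateset-id";                    // placeholder

        String url = "https://us.atlas.microsoft.com/featurestatesets/" + statesetId
                + "/featureStates/UNIT26?api-version=2.0&subscription-key=" + key;

        // Assumed body shape: one state entry flipping the "occupied" key to true.
        String body = "{\"states\":[{\"keyName\":\"occupied\",\"value\":true,"
                + "\"eventTimestamp\":\"2023-01-25T12:00:00Z\"}]}";

        // The Update States operation is assumed to use PUT; check the API reference.
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```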
azure-maps | Tutorial Create Store Locator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md | To add the JavaScript: ``` -3. Add the following initialization code. Make sure to replace `<Your Azure Maps Key>` with your primary subscription key. +3. Add the following initialization code. Make sure to replace `<Your Azure Maps Key>` with your Azure Maps subscription key. > [!Tip] > When you use pop-up windows, it's best to create a single `Popup` instance and reuse the instance by updating its content and position. For every `Popup`instance you add to your code, multiple DOM elements are added to the page. The more DOM elements there are on a page, the more things the browser has to keep track of. If there are too many items, the browser might become slow. |
azure-maps | Tutorial Creator Feature Stateset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-feature-stateset.md | This tutorial uses the [Postman](https://www.postman.com/) application, but you > > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). > * In the URL examples in this article you will need to replace:-> * `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +> * `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. > * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](tutorial-creator-indoor-maps.md#check-the-dataset-creation-status) section of the *Use Creator to create indoor maps* tutorial ## Create a feature stateset |
azure-maps | Tutorial Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md | This tutorial uses the [Postman](https://www.postman.com/) application, but you >[!IMPORTANT] > > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services).-> * In the URL examples in this article you will need to replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +> * In the URL examples in this article you will need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ## Upload a Drawing package To convert a drawing package: 4. Select the **POST** HTTP method. -5. Enter the following URL to the [Conversion Service](/rest/api/maps/v2/conversion/convert) (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key and `udid` with the `udid` of the uploaded package): +5. Enter the following URL to the [Conversion Service](/rest/api/maps/v2/conversion/convert) (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `udid` with the `udid` of the uploaded package): ```http https://us.atlas.microsoft.com/conversions?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&udid={udid}&inputType=DWG&outputOntology=facility-2.0 |
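The Conversion Service call above is long-running: a successful POST is typically acknowledged with a status URL to poll, which I understand is returned in the `Operation-Location` response header. A minimal Java sketch of that first step follows; the `udid`, the key, and the header name are placeholders or assumptions to confirm against the tutorial.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StartDrawingConversion {
    public static void main(String[] args) throws Exception {
        String key = System.getenv("AZURE_MAPS_SUBSCRIPTION_KEY"); // placeholder env var
        String udid = "your-udid";                                 // placeholder

        String url = "https://us.atlas.microsoft.com/conversions?api-version=2.0"
                + "&udid=" + udid
                + "&inputType=DWG&outputOntology=facility-2.0"
                + "&subscription-key=" + key;

        // The conversion parameters ride on the query string; the POST body is empty.
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // On acceptance, the status URL to poll is expected in the Operation-Location header.
        System.out.println("HTTP " + response.statusCode());
        response.headers().firstValue("Operation-Location")
                .ifPresent(statusUrl -> System.out.println("Poll: " + statusUrl));
    }
}
```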
azure-maps | Tutorial Creator Wfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-wfs.md | This tutorial uses the [Postman](https://www.postman.com/) application, but you > > * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services](how-to-manage-creator.md#access-to-creator-services). > * In the URL examples in this article you will need to replace:-> * `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +> * `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. > * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status](tutorial-creator-indoor-maps.md#check-the-dataset-creation-status) section of the *Use Creator to create indoor maps* tutorial ## Query for feature collections |
azure-maps | Tutorial Geofence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md | To upload the geofencing GeoJSON data: 4. Select the **POST** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson To check the status of the GeoJSON data and retrieve its unique ID (`udid`): 4. Select the **GET** HTTP method. -5. Enter the `status URL` you copied in [Upload Geofencing GeoJSON data](#upload-geofencing-geojson-data). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +5. Enter the `status URL` you copied in [Upload Geofencing GeoJSON data](#upload-geofencing-geojson-data). The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```HTTP https://us.atlas.microsoft.com/mapData/{operationId}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} To retrieve content metadata: 4. Select the **GET** HTTP method. -5. Enter the `resource Location URL` you copied in [Check the GeoJSON data upload status](#check-the-geojson-data-upload-status). The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key): +5. Enter the `resource Location URL` you copied in [Check the GeoJSON data upload status](#check-the-geojson-data-upload-status). The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key): ```http https://us.atlas.microsoft.com/mapData/metadata/{udid}?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key} Each of the following sections makes API requests by using the five different lo 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). ```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.638237&lon=-122.1324831&searchBuffer=5&isAsync=True&mode=EnterAndExit In the preceding GeoJSON response, the negative distance from the main site geof 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. 
The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). ```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udId={udId}&lat=47.63800&lon=-122.132531&searchBuffer=5&isAsync=True&mode=EnterAndExit In the preceding GeoJSON response, the equipment has remained in the main site g 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). ```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63810783315048&lon=-122.13336020708084&searchBuffer=5&isAsync=True&mode=EnterAndExit In the preceding GeoJSON response, the equipment has remained in the main site g 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). ```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.637988&userTime=2023-01-16&lon=-122.1338344&searchBuffer=5&isAsync=True&mode=EnterAndExit In the preceding GeoJSON response, the equipment has remained in the main site g 4. Select the **GET** HTTP method. -5. Enter the following URL. The request should look like the following URL (replace `{Azure-Maps-Primary-Subscription-key}` with your primary subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). +5. Enter the following URL. The request should look like the following URL (replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key, and `{udid}` with the `udid` you saved in the [Upload Geofencing GeoJSON data section](#upload-geofencing-geojson-data)). ```HTTP https://atlas.microsoft.com/spatial/geofence/json?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&deviceId=device_01&udid={udid}&lat=47.63799&lon=-122.134505&searchBuffer=5&isAsync=True&mode=EnterAndExit |
azure-maps | Tutorial Iot Hub Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md | Follow these steps to upload the geofence by using the Azure Maps Data Upload AP 1. Open the Postman app, select **New** again. In the **Create New** window, select **HTTP Request**, and enter a request name for the request. -2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API. Make sure to replace `{Your-Azure-Maps-Primary-Subscription-key}` with your primary subscription key. +2. Select the **POST** HTTP method in the builder tab, and enter the following URL to upload the geofence to the Data Upload API. Make sure to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ```HTTP https://us.atlas.microsoft.com/mapData?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0&dataFormat=geojson |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | In addition to consolidating and improving upon legacy Log Analytics agents, Azu 3. **Validate** that Azure Monitor Agent is collecting data as expected and all **downstream dependencies**, such as dashboards, alerts, and workbooks, function properly. -4. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents** as applicable - - If you have migrated to Azure Monitor agent for selected features/solutions and you need to continue using the legacy Log Analytics for others, you can selectively disable or "turn off" legacy agent collection by editing the Log Analytics workspace configurations directly - - If you've migrated to Azure Monitor agent for all your requirements, you may [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. - - Don't uninstall the legacy agent if you need to use it for uploading data to System Center Operations Manager. +4. Clean up: After you confirm that Azure Monitor Agent is collecting data properly, you may **choose to either disable or uninstall the legacy Log Analytics agents** + 1. If you have migrated to Azure Monitor agent for selected features/solutions and you need to continue using the legacy Log Analytics for others, you can selectively disable or "turn off" legacy agent collection by editing the Log Analytics workspace configurations directly + 2. If you've migrated to Azure Monitor agent for all your requirements, you may [uninstall the Log Analytics agent](./agent-manage.md#uninstall-agent) from monitored resources. Clean up any configuration files, workspace keys, or certificates that were used previously by the Log Analytics agent. + 3. Don't uninstall the legacy agent if you need to use it for uploading data to System Center Operations Manager. <sup>1</sup> The DCR generator only converts the configurations for Windows event logs, Linux syslog and performance counters. Support for additional features and solutions will be available soon. |
azure-monitor | Annotations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/annotations.md | -Annotations show where you deployed a new build, or other significant events. Annotations make it easy to see whether your changes had any effect on your application's performance. They can be automatically created by the [Azure Pipelines](/azure/devops/pipelines/tasks/) build system. You can also create annotations to flag any event you like by creating them from PowerShell. +Annotations show where you deployed a new build or other significant events. Annotations make it easy to see whether your changes had any effect on your application's performance. They can be automatically created by the [Azure Pipelines](/azure/devops/pipelines/tasks/) build system. You can also create annotations to flag any event you want by creating them from PowerShell. ## Release annotations with Azure Pipelines build Release annotations are a feature of the cloud-based Azure Pipelines service of If all the following criteria are met, the deployment task creates the release annotation automatically: -- The resource you're deploying to is linked to Application Insights (via the `APPINSIGHTS_INSTRUMENTATIONKEY` app setting).-- The Application Insights resource is in the same subscription as the resource you're deploying to.+- The resource to which you're deploying is linked to Application Insights via the `APPINSIGHTS_INSTRUMENTATIONKEY` app setting. +- The Application Insights resource is in the same subscription as the resource to which you're deploying. - You're using one of the following Azure DevOps pipeline tasks: | Task code | Task name | Versions | If all the following criteria are met, the deployment task creates the release a | AzureWebApp | Azure Web App | Any | > [!NOTE]-> If youΓÇÖre still using the Application Insights annotation deployment task, you should delete it. +> If you're still using the Application Insights annotation deployment task, you should delete it. ### Configure release annotations -If you can't use one the deployment tasks in the previous section, then you need to add an inline script task in your deployment pipeline. +If you can't use one of the deployment tasks in the previous section, you need to add an inline script task in your deployment pipeline. -1. Navigate to a new or existing pipeline and select a task. - :::image type="content" source="./media/annotations/task.png" alt-text="Screenshot of task in stages selected." lightbox="./media/annotations/task.png"::: +1. Go to a new or existing pipeline and select a task. ++ :::image type="content" source="./media/annotations/task.png" alt-text="Screenshot that shows a task selected under Stages." lightbox="./media/annotations/task.png"::: 1. Add a new task and select **Azure CLI**.- :::image type="content" source="./media/annotations/add-azure-cli.png" alt-text="Screenshot of adding a new task and selecting Azure CLI." lightbox="./media/annotations/add-azure-cli.png"::: -1. Specify the relevant Azure subscription. Change the **Script Type** to *PowerShell* and **Script Location** to *Inline*. -1. Add the [PowerShell script from step 2 in the next section](#create-release-annotations-with-azure-cli) to **Inline Script**. -1. Add the arguments below, replacing the angle-bracketed placeholders with your values to **Script Arguments**. The -releaseProperties are optional. 
++ :::image type="content" source="./media/annotations/add-azure-cli.png" alt-text="Screenshot that shows adding a new task and selecting Azure CLI." lightbox="./media/annotations/add-azure-cli.png"::: +1. Specify the relevant Azure subscription. Change **Script Type** to **PowerShell** and **Script Location** to **Inline**. +1. Add the [PowerShell script from step 2 in the next section](#create-release-annotations-with-the-azure-cli) to **Inline Script**. +1. Add the following arguments. Replace the angle-bracketed placeholders with your values to **Script Arguments**. The `-releaseProperties` are optional. ```powershell -aiResourceId "<aiResourceId>" ` If you can't use one the deployment tasks in the previous section, then you need :::image type="content" source="./media/annotations/inline-script.png" alt-text="Screenshot of Azure CLI task settings with Script Type, Script Location, Inline Script, and Script Arguments highlighted." lightbox="./media/annotations/inline-script.png"::: - Below is an example of metadata you can set in the optional releaseProperties argument using [build](/azure/devops/pipelines/build/variables#build-variables-devops-services) and [release](/azure/devops/pipelines/release/variables#default-variablesrelease) variables. - + The following example shows metadata you can set in the optional `releaseProperties` argument by using [build](/azure/devops/pipelines/build/variables#build-variables-devops-services) and [release](/azure/devops/pipelines/release/variables#default-variablesrelease) variables. ```powershell -releaseProperties @{ If you can't use one the deployment tasks in the previous section, then you need "TeamFoundationCollectionUri"="$(System.TeamFoundationCollectionUri)" } ``` -1. Save. +1. Select **Save**. -## Create release annotations with Azure CLI +## Create release annotations with the Azure CLI -You can use the CreateReleaseAnnotation PowerShell script to create annotations from any process you like, without using Azure DevOps. +You can use the `CreateReleaseAnnotation` PowerShell script to create annotations from any process you want without using Azure DevOps. -1. Sign into [Azure CLI](/cli/azure/authenticate-azure-cli). +1. Sign in to the [Azure CLI](/cli/azure/authenticate-azure-cli). -2. Make a local copy of the script below and call it CreateReleaseAnnotation.ps1. +1. Make a local copy of the following script and call it `CreateReleaseAnnotation.ps1`. ```powershell param( You can use the CreateReleaseAnnotation PowerShell script to create annotations # Invoke-AzRestMethod -Path "$aiResourceId/Annotations?api-version=2015-05-01" -Method PUT -Payload $body ``` - [!NOTE] - Your annotations must have **Category** set to **Deployment** in order to be displayed in the Azure Portal. + > [!NOTE] + > Your annotations must have **Category** set to **Deployment** to appear in the Azure portal. -3. Call the PowerShell script with the following code, replacing the angle-bracketed placeholders with your values. The -releaseProperties are optional. +1. Call the PowerShell script with the following code. Replace the angle-bracketed placeholders with your values. The `-releaseProperties` are optional. ```powershell .\CreateReleaseAnnotation.ps1 ` You can use the CreateReleaseAnnotation PowerShell script to create annotations "TriggerBy"="<Your name>" } ``` -|Argument | Definition | Note| -|--|--|--| -|aiResourceId | The Resource ID to the target Application Insights resource. 
| Example:<br> /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRGName/providers/microsoft.insights/components/MyResourceName| -|releaseName | The name to give the created release annotation. | | -|releaseProperties | Used to attach custom metadata to the annotation. | Optional| --+ |Argument | Definition | Note| + |--|--|--| + |`aiResourceId` | The resource ID to the target Application Insights resource. | Example:<br> /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRGName/providers/microsoft.insights/components/MyResourceName| + |`releaseName` | The name to give the created release annotation. | | + |`releaseProperties` | Used to attach custom metadata to the annotation. | Optional| + ## View annotations > [!NOTE]-> Release annotations are not currently available in the Metrics pane of Application Insights +> Release annotations aren't currently available in the **Metrics** pane of Application Insights. -Now, whenever you use the release template to deploy a new release, an annotation is sent to Application Insights. The annotations can be viewed in the following locations: +Whenever you use the release template to deploy a new release, an annotation is sent to Application Insights. You can view annotations in the following locations: -- Performance+- **Performance:** - :::image type="content" source="./media/annotations/performance.png" alt-text="Screenshot of the Performance tab with a release annotation selected(blue arrow) to show the Release Properties tab." lightbox="./media/annotations/performance.png"::: + :::image type="content" source="./media/annotations/performance.png" alt-text="Screenshot that shows the Performance tab with a release annotation selected to show the Release Properties tab." lightbox="./media/annotations/performance.png"::: -- Failures+- **Failures:** - :::image type="content" source="./media/annotations/failures.png" alt-text="Screenshot of the Failures tab with a release annotation (blue arrow) selected to show the Release Properties tab." lightbox="./media/annotations/failures.png"::: -- Usage+ :::image type="content" source="./media/annotations/failures.png" alt-text="Screenshot that shows the Failures tab with a release annotation selected to show the Release Properties tab." lightbox="./media/annotations/failures.png"::: +- **Usage:** - :::image type="content" source="./media/annotations/usage-pane.png" alt-text="Screenshot of the Users tab bar with release annotations selected. Release annotations appear as blue arrows above the chart indicating the moment in time that a release occurred." lightbox="./media/annotations/usage-pane.png"::: + :::image type="content" source="./media/annotations/usage-pane.png" alt-text="Screenshot that shows the Users tab bar with release annotations selected. Release annotations appear as blue arrows above the chart indicating the moment in time that a release occurred." lightbox="./media/annotations/usage-pane.png"::: -- Workbooks+- **Workbooks:** - In any log-based workbook query where the visualization displays time along the x-axis. + In any log-based workbook query where the visualization displays time along the x-axis: - :::image type="content" source="./media/annotations/workbooks-annotations.png" alt-text="Screenshot of workbooks pane with time series log-based query with annotations displayed." 
lightbox="./media/annotations/workbooks-annotations.png"::: + :::image type="content" source="./media/annotations/workbooks-annotations.png" alt-text="Screenshot that shows the Workbooks pane with a time series log-based query with annotations displayed." lightbox="./media/annotations/workbooks-annotations.png"::: - To enable annotations in your workbook, go to **Advanced Settings** and select **Show annotations**. +To enable annotations in your workbook, go to **Advanced Settings** and select **Show annotations**. - :::image type="content" source="./media/annotations/workbook-show-annotations.png" alt-text="Screenshot of Advanced Settings menu with the show annotations checkbox highlighted."::: Select any annotation marker to open details about the release, including requestor, source control branch, release pipeline, and environment. -## Release annotations using API keys +## Release annotations by using API keys Release annotations are a feature of the cloud-based Azure Pipelines service of Azure DevOps. > [!IMPORTANT]-> Annotations using API keys is deprecated. We recommend using [Azure CLI](#create-release-annotations-with-azure-cli) instead. +> Annotations using API keys is deprecated. We recommend using the [Azure CLI](#create-release-annotations-with-the-azure-cli) instead. ### Install the annotations extension (one time) -To be able to create release annotations, you'll need to install one of the many Azure DevOps extensions available in the Visual Studio Marketplace. +To create release annotations, install one of the many Azure DevOps extensions available in Visual Studio Marketplace. 1. Sign in to your [Azure DevOps](https://azure.microsoft.com/services/devops/) project.- -1. On the Visual Studio Marketplace [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization, and then select **Install** to add the extension to your Azure DevOps organization. - -  - ++1. On the **Visual Studio Marketplace** [Release Annotations extension](https://marketplace.visualstudio.com/items/ms-appinsights.appinsightsreleaseannotations) page, select your Azure DevOps organization. Select **Install** to add the extension to your Azure DevOps organization. ++  + You only need to install the extension once for your Azure DevOps organization. You can now configure release annotations for any project in your organization. -### Configure release annotations using API keys +### Configure release annotations by using API keys Create a separate API key for each of your Azure Pipelines release templates. 1. Sign in to the [Azure portal](https://portal.azure.com) and open the Application Insights resource that monitors your application. Or if you don't have one, [create a new Application Insights resource](create-workspace-resource.md).- + 1. Open the **API Access** tab and copy the **Application Insights ID**.- -  ++  1. In a separate browser window, open or create the release template that manages your Azure Pipelines deployments.++1. Select **Add task** and then select the **Application Insights Release Annotation** task from the menu. -1. Select **Add task**, and then select the **Application Insights Release Annotation** task from the menu. - -  +  > [!NOTE]- > The Release Annotation task currently supports only Windows-based agents; it won't run on Linux, macOS, or other types of agents. - + > The Release Annotation task currently supports only Windows-based agents. 
It won't run on Linux, macOS, or other types of agents. + 1. Under **Application ID**, paste the Application Insights ID you copied from the **API Access** tab.- -  - -1. Back in the Application Insights **API Access** window, select **Create API Key**. - -  - -1. In the **Create API key** window, type a description, select **Write annotations**, and then select **Generate key**. Copy the new key. - -  - ++  ++1. Back in the Application Insights **API Access** window, select **Create API Key**. ++  ++1. In the **Create API key** window, enter a description, select **Write annotations**, and then select **Generate key**. Copy the new key. ++  + 1. In the release template window, on the **Variables** tab, select **Add** to create a variable definition for the new API key. -1. Under **Name**, enter `ApiKey`, and under **Value**, paste the API key you copied from the **API Access** tab. - -  - -1. Select **Save** in the main release template window to save the template. +1. Under **Name**, enter **ApiKey**. Under **Value**, paste the API key you copied from the **API Access** tab. +  ++1. Select **Save** in the main release template window to save the template. > [!NOTE] > Limits for API keys are described in the [REST API rate limits documentation](/rest/api/yammer/rest-api-rate-limits). ### Transition to the new release annotation -To use the new release annotations: +To use the new release annotations: 1. [Remove the Release Annotations extension](/azure/devops/marketplace/uninstall-disable-extensions).-1. Remove the Application Insights Release Annotation task in your Azure Pipelines deployment. -1. Create new release annotations with [Azure Pipelines](#release-annotations-with-azure-pipelines-build) or [Azure CLI](#create-release-annotations-with-azure-cli). +1. Remove the Application Insights Release Annotation task in your Azure Pipelines deployment. +1. Create new release annotations with [Azure Pipelines](#release-annotations-with-azure-pipelines-build) or the [Azure CLI](#create-release-annotations-with-the-azure-cli). ## Next steps |
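For context on the three arguments listed in the table above, a call to an annotation script might look like the following sketch. The script name and parameter syntax are assumptions for illustration only; only the argument names and the resource ID format come from the table itself, and the release name and properties are arbitrary example values.

```powershell
# Hypothetical invocation -- script name and parameter style are assumptions;
# see the CLI section earlier in the article for the actual script.
.\CreateReleaseAnnotation.ps1 `
    -aiResourceId "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MyRGName/providers/microsoft.insights/components/MyResourceName" `
    -releaseName "Release 1.2.3" `
    -releaseProperties @{"ReleaseDescription"="Hotfix for checkout flow"; "TriggeredBy"="Jane Doe"}
```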
azure-monitor | Azure Web Apps Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-java.md | To manually update, follow these steps: 2. Disable Application Insights via the Application Insights tab in the Azure portal. -3. Once the agent jar file is uploaded, go to App Service configurations and add a new environment variable, `JAVA_OPTS`, with the value `-javaagent:{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar`. +3. Once the agent jar file is uploaded, go to App Service configurations. If you + need to use **Startup Command** for Linux, include the JVM arguments: -4. Restart the app, leaving the **Startup Command** field blank, to apply the changes. + :::image type="content" source="./media/azure-web-apps/startup-command.png" alt-text="Screenshot of the startup command."::: + + **Startup Command** won't honor `JAVA_OPTS`. ++ If you don't use **Startup Command**, create a new environment variable, `JAVA_OPTS`, with the value + `-javaagent:{PATH_TO_THE_AGENT_JAR}/applicationinsights-agent-{VERSION_NUMBER}.jar`. ++4. Restart the app to apply the changes. > [!NOTE] > If you set the `JAVA_OPTS` environment variable, you'll have to disable Application Insights in the portal. Alternatively, if you prefer to enable Application Insights from the portal, make sure that you don't set the `JAVA_OPTS` variable in App Service configuration settings. |
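To make the change above concrete: on Linux App Service, a **Startup Command** that loads the agent itself (because `JAVA_OPTS` isn't honored there) might look like the following sketch. The jar locations and app name are placeholders, not values from the article.

```
java -javaagent:/home/site/wwwroot/applicationinsights-agent-3.4.10.jar -jar /home/site/wwwroot/myapp.jar
```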
azure-monitor | Data Model Event Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-event-telemetry.md | Title: Azure Application Insights Telemetry Data Model - Event Telemetry | Microsoft Docs -description: Application Insights data model for event telemetry + Title: Application Insights telemetry data model - Event telemetry | Microsoft Docs +description: Learn about the Application Insights data model for event telemetry. Last updated 04/25/2017 -You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically it is a user interaction such as button click or order checkout. It can also be an application life cycle event like initialization or configuration update. +You can create event telemetry items (in [Application Insights](./app-insights-overview.md)) to represent an event that occurred in your application. Typically, it's a user interaction such as a button click or order checkout. It can also be an application lifecycle event like initialization or a configuration update. -Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be a subject to separate, less aggressive [sampling](./api-filtering-sampling.md). +Semantically, events may or may not be correlated to requests. However, if used properly, event telemetry is more important than requests or traces. Events represent business telemetry and should be subject to separate, less aggressive [sampling](./api-filtering-sampling.md). ## Name -Event name. To allow proper grouping and useful metrics, restrict your application so that it generates a small number of separate event names. For example, don't use a separate name for each generated instance of an event. +Event name: To allow proper grouping and useful metrics, restrict your application so that it generates a few separate event names. For example, don't use a separate name for each generated instance of an event. -Max length: 512 characters +**Maximum length:** 512 characters ## Custom properties Max length: 512 characters ## Next steps -- See [data model](data-model.md) for Application Insights types and data model.-- [Write custom event telemetry](./api-custom-events-metrics.md#trackevent)-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.-+- See [Data model](data-model.md) for Application Insights types and data models. +- [Write custom event telemetry](./api-custom-events-metrics.md#trackevent). +- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights. |
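As an illustration of the event shape this data model describes, here's a minimal sketch that sends such an event with the Application Insights Java SDK's `TelemetryClient`. The event name, property, and metric are arbitrary examples; see the linked TrackEvent article for the API in other languages.

```java
import java.util.HashMap;
import java.util.Map;

import com.microsoft.applicationinsights.TelemetryClient;

public class CheckoutTelemetry {
    private static final TelemetryClient telemetryClient = new TelemetryClient();

    public static void recordCheckout(String paymentMethod, double cartValue) {
        // Keep the set of event names small ("OrderCheckout", not one name per order)
        // so grouping and metrics stay useful.
        Map<String, String> properties = new HashMap<>();
        properties.put("paymentMethod", paymentMethod);

        Map<String, Double> measurements = new HashMap<>();
        measurements.put("cartValue", cartValue);

        telemetryClient.trackEvent("OrderCheckout", properties, measurements);
    }
}
```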
azure-monitor | Java Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md | Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 02/14/2023 Last updated : 02/22/2023 ms.devlang: java There are two options for enabling Application Insights Java with Spring Boot: J ## Enabling with JVM argument -Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.9.jar"` somewhere before `-jar`, for example: +Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.10.jar"` somewhere before `-jar`, for example: ```-java -javaagent:"path/to/applicationinsights-agent-3.4.9.jar" -jar <myapp.jar> +java -javaagent:"path/to/applicationinsights-agent-3.4.10.jar" -jar <myapp.jar> ``` ### Spring Boot via Docker entry point -If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.9.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example: +If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.10.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example: ```-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.9.jar", "-jar", "<myapp.jar>"] +ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.10.jar", "-jar", "<myapp.jar>"] ``` -If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.9.jar"` somewhere before `-jar`, for example: +If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.10.jar"` somewhere before `-jar`, for example: ```-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.9.jar" -jar <myapp.jar> +ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.10.jar" -jar <myapp.jar> ``` ### Configuration To enable Application Insights Java programmatically, you must add the following <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>- <version>3.4.9</version> + <version>3.4.10</version> </dependency> ``` |
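The excerpt above shows only the dependency for programmatic enablement; the attach call itself isn't included in the diff. Based on the `applicationinsights-runtime-attach` artifact, a minimal sketch might look like this (treat the class and method names as assumptions to verify against the full article):

```java
import com.microsoft.applicationinsights.attach.ApplicationInsights;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        // Attach the Application Insights Java agent at runtime.
        // The attach call is expected to be the first statement in main().
        ApplicationInsights.attach();
        SpringApplication.run(MyApplication.class, args);
    }
}
```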
azure-monitor | Java Standalone Arguments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md | Title: Add the JVM arg - Application Insights for Java description: Learn how to add the JVM arg that enables Application Insights for Java. Previously updated : 02/14/2023 Last updated : 02/22/2023 ms.devlang: java If you're using a third-party container image that you can't modify, mount the A If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.9.jar" +JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.10.jar" ``` ### Tomcat installed via download and unzip JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.9.jar" If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.9.jar" +CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.10.jar" ``` -If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to `CATALINA_OPTS`. +If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to `CATALINA_OPTS`. ## Tomcat 8 (Windows) If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `- Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.9.jar +set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.10.jar ``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.9.jar" +set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.10.jar" ``` -If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to `CATALINA_OPTS`. +If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to `CATALINA_OPTS`. ### Run Tomcat as a Windows service -Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the `Java Options` under the `Java` tab. +Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to the `Java Options` under the `Java` tab. 
## JBoss EAP 7 ### Standalone server -Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows): +Add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows): ```java ...- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.9.jar -Xms1303m -Xmx1303m ..." + JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.10.jar -Xms1303m -Xmx1303m ..." ... ``` ### Domain server -Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`: +Add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`: ```xml ... Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jv <jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->- <option value="-javaagent:path/to/applicationinsights-agent-3.4.9.jar"/> + <option value="-javaagent:path/to/applicationinsights-agent-3.4.10.jar"/> <option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options> Add these lines to `start.ini`: ``` --exec--javaagent:path/to/applicationinsights-agent-3.4.9.jar+-javaagent:path/to/applicationinsights-agent-3.4.10.jar ``` ## Payara 5 -Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`: +Add `-javaagent:path/to/applicationinsights-agent-3.4.10.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`: ```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>- -javaagent:path/to/applicationinsights-agent-3.4.9.jar> + -javaagent:path/to/applicationinsights-agent-3.4.10.jar> </jvm-options> ... </java-config> Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jv 1. In `Generic JVM arguments`, add the following JVM argument: ```- -javaagent:path/to/applicationinsights-agent-3.4.9.jar + -javaagent:path/to/applicationinsights-agent-3.4.10.jar ``` 1. Save and restart the application server. Add `-javaagent:path/to/applicationinsights-agent-3.4.9.jar` to the existing `jv Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.9.jar+-javaagent:path/to/applicationinsights-agent-3.4.10.jar ``` ## Others |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 02/14/2023 Last updated : 02/22/2023 ms.devlang: java You'll find more information and configuration options in the following sections ## Configuration file path -By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.9.jar`. +By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.10.jar`. You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property -If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.9.jar` is located. +If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.10.jar` is located. Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`. Or you can set the connection string by using the Java system property `applicat You can also set the connection string by specifying a file to load the connection string from. -If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.9.jar` is located. +If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.10.jar` is located. ```json { and add `applicationinsights-core` to your application: <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>- <version>3.4.9</version> + <version>3.4.10</version> </dependency> ``` In the preceding configuration example: * `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where-`applicationinsights-agent-3.4.9.jar` is located. +`applicationinsights-agent-3.4.10.jar` is located. Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration. |
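As a concrete sketch of the two ways mentioned above to point the agent at a custom configuration file path (the paths and app name here are placeholders):

```
# Option 1: environment variable
export APPLICATIONINSIGHTS_CONFIGURATION_FILE=/etc/myapp/applicationinsights.json

# Option 2: Java system property, set alongside the -javaagent argument
java -Dapplicationinsights.configuration.file=/etc/myapp/applicationinsights.json \
     -javaagent:path/to/applicationinsights-agent-3.4.10.jar -jar myapp.jar
```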
azure-monitor | Java Standalone Profiler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md | The ApplicationInsights Java Agent monitors CPU and memory consumption and if it Within the profiler user interface (see [profiler settings](../profiler/profiler-settings.md)) there's a **Profile now** button. Selecting this button will immediately request a profile in all agents that are attached to the Application Insights instance. +> [!WARNING] +> Invoking Profile now will enable the profiler feature, and Application Insights will apply default CPU and memory SLA triggers. When your application breaches those SLAs, Application Insights will gather Java profiles. If you wish to disable profiling later on, you can do so within the trigger menu shown in [Installation](#installation). + #### CPU CPU threshold is a percentage of the usage of all available cores on the system. The following steps will guide you through enabling the profiling component on t 3. Configure the required CPU and Memory thresholds and select Apply. :::image type="content" source="./media/java-standalone-profiler/cpu-memory-trigger-settings.png" alt-text="Screenshot of trigger settings pane for CPU and Memory triggers.":::- -1. Inside the `applicationinsights.json` configuration of your process, enable profiler with the `preview.profiler.enabled` setting: - ```json - { - "connectionString" : "...", - "preview" : { - "profiler" : { - "enabled" : true - } - } - } - ``` - Alternatively, set the `APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED` environment variable to true. - -1. Restart your process with the updated configuration. > [!WARNING] > The Java profiler does not support the "Sampling" trigger. Configuring this will have no effect. Profiles can be generated/edited in the JDK Mission Control (JMC) user interface ### Environment variables -- `APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED`: boolean (default: `false`)- Enables/disables the profiling feature. +- `APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED`: boolean (default: `true`) + Enables/disables the profiling feature. By default the feature is enabled within the agent (since agent 3.4.9). However, even though this feature is enabled within the agent, profiles will not be gathered unless enabled within the Portal as described in [Installation](#installation). ### Configuration file Azure Monitor Application Insights Java profiler uses Java Flight Recorder (JFR) Java Flight Recorder is a tool for collecting profiling data of a running Java application. It's integrated into the Java Virtual Machine (JVM) and is used for troubleshooting performance issues. Learn more about [Java SE JFR Runtime](https://docs.oracle.com/javacomponents/jmc-5-4/jfr-runtime-guide/about.htm#JFRUH170). ### What is the price and/or licensing fee implications for enabling App Insights Java Profiling?-Java Profiling enablement is a free feature with Application Insights. [Azure Monitor Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/) is based on ingestion cost. +Java Profiling is a free feature with Application Insights. [Azure Monitor Application Insights pricing](https://azure.microsoft.com/pricing/details/monitor/) is based on ingestion cost. ### Which Java profiling information is collected? Profiling data collected by the JFR includes: method and execution profiling data, garbage collection data, and lock profiles. Review the [Pre-requisites](#prerequisites) at the top of this article. 
### Can I use Java Profiling for a microservices application? -Yes, you can profile a JVM running microservices using the JFR. +Yes, you can profile a JVM running microservices by using the JFR. |
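Given the default change described above (the profiler component is now enabled in the agent by default), a sketch of explicitly turning it off at the agent level by using the documented environment variable:

```
export APPLICATIONINSIGHTS_PREVIEW_PROFILER_ENABLED=false
```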
azure-monitor | Java Standalone Upgrade From 2X | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md | Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 01/18/2023 Last updated : 02/22/2023 ms.devlang: java auto-instrumentation which is provided by the 3.x Java agent. Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.9.jar+-javaagent:path/to/applicationinsights-agent-3.4.10.jar ``` If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above. |
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | Title: Enable Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides guidance on how to enable Azure Monitor on applications by using OpenTelemetry. Previously updated : 01/10/2023 Last updated : 02/22/2023 ms.devlang: csharp, javascript, typescript, python dotnet add package --prerelease Azure.Monitor.OpenTelemetry.Exporter -s https:// #### [Java](#tab/java) -Download the [applicationinsights-agent-3.4.8.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.8/applicationinsights-agent-3.4.8.jar) file. +Download the [applicationinsights-agent-3.4.10.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.10/applicationinsights-agent-3.4.10.jar) file. > [!WARNING] > public class Program Java auto-instrumentation is enabled through configuration changes; no code changes are required. -Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.8.jar"` to your application's JVM args. +Point the JVM to the jar file by adding `-javaagent:"path/to/applicationinsights-agent-3.4.10.jar"` to your application's JVM args. > [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md). Use one of the following two ways to point the jar file to your Application Insi APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String> ``` -- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.8.jar` with the following content:+- Create a configuration file named `applicationinsights.json`, and place it in the same directory as `applicationinsights-agent-3.4.10.jar` with the following content: ```json { This is not available in .NET. <dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>- <version>3.4.8</version> + <version>3.4.10</version> </dependency> ``` |
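The truncated JSON block above holds only the connection string. A minimal sketch of that `applicationinsights.json` file, with a placeholder value:

```json
{
  "connectionString": "<Your Connection String>"
}
```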
azure-monitor | Sharepoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sharepoint.md | Title: Monitor a SharePoint site with Application Insights -description: Start monitoring a new application with a new instrumentation key +description: Start monitoring a new application with a new instrumentation key. Last updated 09/08/2020 -Azure Application Insights monitors the availability, performance and usage of your apps. Here you'll learn how to set it up for a SharePoint site. +Application Insights monitors the availability, performance, and usage of your apps. This article shows you how to set it up for a SharePoint site. > [!NOTE]-> Due to security concerns, you can't directly add the script that's described in this article to your webpages in the SharePoint modern UX. As an alternative, you can use [SharePoint Framework (SPFx)](/sharepoint/dev/spfx/extensions/overview-extensions) to build a custom extension that you can use to install Application Insights on your SharePoint sites. +> Because of security concerns, you can't directly add the script that's described in this article to your webpages in the SharePoint modern UX. As an alternative, you can use [SharePoint Framework (SPFx)](/sharepoint/dev/spfx/extensions/overview-extensions) to build a custom extension that you can use to install Application Insights on your SharePoint sites. ## Create an Application Insights resource-In the [Azure portal](https://portal.azure.com), create a new Application Insights resource. Choose ASP.NET as the application type. +In the [Azure portal](https://portal.azure.com), create a new Application Insights resource. For **Application Type**, select **ASP.NET**. - + -The window that opens is the place where you'll see performance and usage data about your app. To get back to it next time you sign in to Azure, you should find a tile for it on the start screen. Alternatively select Browse to find it. +The window that opens is the place where you see performance and usage data about your app. The next time you sign in to Azure, a tile for it appears on the **Start** screen. Alternatively, select **Browse** to find it. -## Add the script to your web pages +## Add the script to your webpages -The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318). +The following current snippet is version `"5"`. The version is encoded in the snippet as `sv:"#"`. The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318). ```HTML <!-- cfg: { // Application Insights Configuration ``` > [!NOTE]-> The Url for SharePoint uses a different module format "...\ai.2.gbl.min.js" (note the additional **.gbl.**) this alternate module format is required to avoid an issue caused by the order that scripts are loaded, which will cause the SDK to fail to initialize and will result in the loss of telemetry events. +> The URL for SharePoint uses a different module format `"...\ai.2.gbl.min.js"` (note the extra `.gbl`.). This alternate module format is required to avoid an issue caused by the order in which scripts are loaded. The issue causes the SDK to fail to initialize and results in the loss of telemetry events. >-> The issue is caused by requireJS being loaded and initialized before the SDK. +> The issue is caused by `requireJS` being loaded and initialized before the SDK. 
-Insert the script just before the </head> tag of every page you want to track. If your website has a master page, you can put the script there. For example, in an ASP.NET MVC project, you'd put it in View\Shared\_Layout.cshtml +Insert the script before the </head> tag of every page you want to track. If your website has a main page, you can put the script there. For example, in an ASP.NET MVC project, you'd put it in `View\Shared\_Layout.cshtml`. The script contains the instrumentation key that directs the telemetry to your Application Insights resource. ### Add the code to your site pages-#### On the master page -If you can edit the site's master page, that will provide monitoring for every page in the site. -Check out the master page and edit it using SharePoint Designer or any other editor. +You can add the code to your main page or individual pages. - +#### Main page +If you can edit the site's main page, you can provide monitoring for every page in the site. -Add the code just before the </head> tag. +Check out the main page and edit it by using SharePoint Designer or any other editor. ++ ++Add the code before the </head> tag.  [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] -#### Or on individual pages -To monitor a limited set of pages, add the script separately to each page. +#### Individual pages +To monitor a limited set of pages, add the script separately to each page. Insert a web part and embed the code snippet in it. Redeploy your app. Return to your application pane in the [Azure portal](https://portal.azure.com). -The first events appear in Search. +The first events appear in **Search**.  -Select Refresh after a few seconds if you're expecting more data. +Select **Refresh** after a few seconds if you're expecting more data. -## Capturing User Id -The standard web page code snippet doesn't capture the user ID from SharePoint, but you can do that with a small modification. +## Capture the user ID +The standard webpage code snippet doesn't capture the user ID from SharePoint, but you can do that with a small modification. -1. Copy your app's instrumentation key from the Essentials drop-down in Application Insights. +1. Copy your app's instrumentation key from the **Essentials** dropdown in Application Insights.  -1. Substitute the instrumentation key for 'XXXX' in the snippet below. -2. Embed the script in your SharePoint app instead of the snippet you get from the portal. --``` ---<SharePoint:ScriptLink ID="ScriptLink1" name="SP.js" runat="server" localizable="false" loadafterui="true" /> -<SharePoint:ScriptLink ID="ScriptLink2" name="SP.UserProfiles.js" runat="server" localizable="false" loadafterui="true" /> --<script type="text/javascript"> -var personProperties; --// Ensure that the SP.UserProfiles.js file is loaded before the custom code runs. -SP.SOD.executeOrDelayUntilScriptLoaded(getUserProperties, 'SP.UserProfiles.js'); --function getUserProperties() { - // Get the current client context and PeopleManager instance. - var clientContext = new SP.ClientContext.get_current(); - var peopleManager = new SP.UserProfiles.PeopleManager(clientContext); -- // Get user properties for the target user. - // To get the PersonProperties object for the current user, use the - // getMyProperties method. -- personProperties = peopleManager.getMyProperties(); -- // Load the PersonProperties object and send the request. 
- clientContext.load(personProperties); - clientContext.executeQueryAsync(onRequestSuccess, onRequestFail); -} --// This function runs if the executeQueryAsync call succeeds. -function onRequestSuccess() { -var appInsights=window.appInsights||function(config){ -function s(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},r=document,f=window,e="script",o=r.createElement(e),i,u;for(o.src=config.url||"//az416426.vo.msecnd.net/scripts/a/ai.0.js",r.getElementsByTagName(e)[0].parentNode.appendChild(o),t.cookie=r.cookie,t.queue=[],i=["Event","Exception","Metric","PageView","Trace"];i.length;)s("track"+i.pop());return config.disableExceptionTracking||(i="onerror",s("_"+i),u=f[i],f[i]=function(config,r,f,e,o){var s=u&&u(config,r,f,e,o);return s!==!0&&t["_"+i](config,r,f,e,o),s}),t - }({ - instrumentationKey:"XXXX" - }); - window.appInsights=appInsights; - appInsights.trackPageView(document.title,window.location.href, {User: personProperties.get_displayName()}); -} --// This function runs if the executeQueryAsync call fails. -function onRequestFail(sender, args) { -} -</script> ---``` ----## Next Steps -* [Availability overview](./availability-overview.md) to monitor the availability of your site. -* [Application Insights](./app-insights-overview.md) for other types of app. +1. Substitute the instrumentation key for `XXXX` in the following snippet. +1. Embed the script in your SharePoint app instead of the snippet you get from the portal. ++ ``` + + + <SharePoint:ScriptLink ID="ScriptLink1" name="SP.js" runat="server" localizable="false" loadafterui="true" /> + <SharePoint:ScriptLink ID="ScriptLink2" name="SP.UserProfiles.js" runat="server" localizable="false" loadafterui="true" /> + + <script type="text/javascript"> + var personProperties; + + // Ensure that the SP.UserProfiles.js file is loaded before the custom code runs. + SP.SOD.executeOrDelayUntilScriptLoaded(getUserProperties, 'SP.UserProfiles.js'); + + function getUserProperties() { + // Get the current client context and PeopleManager instance. + var clientContext = new SP.ClientContext.get_current(); + var peopleManager = new SP.UserProfiles.PeopleManager(clientContext); + + // Get user properties for the target user. + // To get the PersonProperties object for the current user, use the + // getMyProperties method. + + personProperties = peopleManager.getMyProperties(); + + // Load the PersonProperties object and send the request. + clientContext.load(personProperties); + clientContext.executeQueryAsync(onRequestSuccess, onRequestFail); + } + + // This function runs if the executeQueryAsync call succeeds. 
+ function onRequestSuccess() { + var appInsights=window.appInsights||function(config){ + function s(config){t[config]=function(){var i=arguments;t.queue.push(function(){t[config].apply(t,i)})}}var t={config:config},r=document,f=window,e="script",o=r.createElement(e),i,u;for(o.src=config.url||"//az416426.vo.msecnd.net/scripts/a/ai.0.js",r.getElementsByTagName(e)[0].parentNode.appendChild(o),t.cookie=r.cookie,t.queue=[],i=["Event","Exception","Metric","PageView","Trace"];i.length;)s("track"+i.pop());return config.disableExceptionTracking||(i="onerror",s("_"+i),u=f[i],f[i]=function(config,r,f,e,o){var s=u&&u(config,r,f,e,o);return s!==!0&&t["_"+i](config,r,f,e,o),s}),t + }({ + instrumentationKey:"XXXX" + }); + window.appInsights=appInsights; + appInsights.trackPageView(document.title,window.location.href, {User: personProperties.get_displayName()}); + } + + // This function runs if the executeQueryAsync call fails. + function onRequestFail(sender, args) { + } + </script> + + + ``` ++## Next steps +* See the [Availability overview](./availability-overview.md) to monitor the availability of your site. +* See [Application Insights](./app-insights-overview.md) for other types of apps. <!--Link references--> |
azure-monitor | Source Map Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/source-map-support.md | Title: Source map support for JavaScript applications - Azure Monitor Application Insights -description: Learn how to upload source maps to your own storage account Blob container using Application Insights. +description: Learn how to upload source maps to your Azure Storage account blob container by using Application Insights. Last updated 06/23/2020 -Application Insights supports the uploading of source maps to your own Storage Account Blob Container. -Source maps can be used to unminify call stacks found on the end to end transaction details page. Any exception sent by the [JavaScript SDK][ApplicationInsights-JS] or the [Node.js SDK][ApplicationInsights-Node.js] can be unminified with source maps. +Application Insights supports the uploading of source maps to your Azure Storage account blob container. You can use source maps to unminify call stacks found on the **End-to-end transaction details** page. You can also use source maps to unminify any exception sent by the [JavaScript SDK][ApplicationInsights-JS] or the [Node.js SDK][ApplicationInsights-Node.js]. - + -## Create a new storage account and Blob container +## Create a new storage account and blob container If you already have an existing storage account or blob container, you can skip this step. -1. [Create a new storage account][create storage account] -2. [Create a blob container][create blob container] inside your storage account. Be sure to set the "Public access level" to `Private`, to ensure that your source maps are not publicly accessible. +1. [Create a new storage account][create storage account]. +1. [Create a blob container][create blob container] inside your storage account. Set **Public access level** to **Private** to ensure that your source maps aren't publicly accessible. -> [!div class="mx-imgBorder"] -> + > [!div class="mx-imgBorder"] + > -## Push your source maps to your Blob container +## Push your source maps to your blob container -You should integrate your continuous deployment pipeline with your storage account by configuring it to automatically upload your source maps to the configured Blob container. +Integrate your continuous deployment pipeline with your storage account by configuring it to automatically upload your source maps to the configured blob container. -Source maps can be uploaded to your Blob Storage Container with the same folder structure they were compiled & deployed with. A common use case is to prefix a deployment folder with its version, e.g. `1.2.3/static/js/main.js`. When unminifying via an Azure Blob container called `sourcemaps`, it will try to fetch a source map located at `sourcemaps/1.2.3/static/js/main.js.map`. +You can upload source maps to your Azure Blob Storage container with the same folder structure they were compiled and deployed with. A common use case is to prefix a deployment folder with its version, for example, `1.2.3/static/js/main.js`. When you unminify via an Azure blob container called `sourcemaps`, the pipeline tries to fetch a source map located at `sourcemaps/1.2.3/static/js/main.js.map`. ### Upload source maps via Azure Pipelines (recommended) -If you are using Azure Pipelines to continuously build and deploy your application, add an [Azure File Copy][azure file copy] task to your pipeline to automatically upload your source maps. 
+If you're using Azure Pipelines to continuously build and deploy your application, add an [Azure file copy][azure file copy] task to your pipeline to automatically upload your source maps. > [!div class="mx-imgBorder"]->  +>  ++## Configure your Application Insights resource with a source map storage account -## Configure your Application Insights resource with a Source Map storage account +You have two options for configuring your Application Insights resource with a source map storage account. -### From the end-to-end transaction details page +### End-to-end transaction details tab -From the end-to-end transaction details tab, you can click on *Unminify* and it will display a prompt to configure if your resource is unconfigured. +From the **End-to-end transaction details** tab, select **Unminify**. Configure your resource if it's unconfigured. -1. In the Portal, view the details of an exception that is minified. -2. Select *Unminify*. -3. If your resource has not been configured, a message will appear, prompting you to configure. +1. In the Azure portal, view the details of an exception that's minified. +1. Select **Unminify**. +1. If your resource isn't configured, configure it. -### From the properties page +### Properties tab -If you would like to configure or change the storage account or Blob container that is linked to your Application Insights Resource, you can do it by viewing the Application Insights resource's *Properties* tab. +To configure or change the storage account or blob container that's linked to your Application Insights resource: -1. Navigate to the *Properties* tab of your Application Insights resource. -2. Select *Change source map blob container*. -3. Select a different Blob container as your source maps container. -4. Select `Apply`. +1. Go to the **Properties** tab of your Application Insights resource. +1. Select **Change source map Blob Container**. +1. Select a different blob container as your source map container. +1. Select **Apply**. > [!div class="mx-imgBorder"]->  +>  ## Troubleshooting -### Required Azure role-based access control (Azure RBAC) settings on your Blob container +This section offers troubleshooting tips for common issues. -Any user on the Portal using this feature must be at least assigned as a [Storage Blob Data Reader][storage blob data reader] to your Blob container. You must assign this role to anyone else that will be using the source maps through this feature. +### Required Azure role-based access control settings on your blob container ++Any user on the portal who uses this feature must be assigned at least as a [Storage Blob Data Reader][storage blob data reader] to your blob container. Assign this role to anyone who might use the source maps through this feature. > [!NOTE]-> Depending on how the container was created, this may not have been automatically assigned to you or your team. +> Depending on how the container was created, this role might not have been automatically assigned to you or your team. ### Source map not found -1. Verify that the corresponding source map is uploaded to the correct blob container -2. Verify that the source map file is named after the JavaScript file it maps to, suffixed with `.map`. - - For example, `/static/js/main.4e2ca5fa.chunk.js` will search for the blob named `main.4e2ca5fa.chunk.js.map` -3. Check your browser's console to see if any errors are being logged. Include this in any support ticket. --## Next Steps +1. 
Verify that the corresponding source map is uploaded to the correct blob container. +1. Verify that the source map file is named after the JavaScript file it maps to and uses the suffix `.map`. + + For example, `/static/js/main.4e2ca5fa.chunk.js` searches for the blob named `main.4e2ca5fa.chunk.js.map`. +1. Check your browser's console to see if any errors were logged. Include this information in any support ticket. -* [Azure File Copy task](/azure/devops/pipelines/tasks/deploy/azure-file-copy) +## Next steps +[Azure file copy task](/azure/devops/pipelines/tasks/deploy/azure-file-copy) <!-- Remote URLs --> [create storage account]: ../../storage/common/storage-account-create.md?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&tabs=azure-portal |
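For the Azure Pipelines route described above, a rough YAML sketch of the file copy step follows. The task version, input names, service connection, storage account, and version prefix are assumptions to adapt; check the Azure file copy task documentation linked above for the exact schema.

```yaml
# Sketch: upload the build's *.js.map files into the "sourcemaps" container,
# prefixed with the deployed version so unminify can find 1.2.3/static/js/main.js.map.
- task: AzureFileCopy@4
  displayName: 'Upload source maps'
  inputs:
    SourcePath: '$(Build.SourcesDirectory)/build'
    azureSubscription: 'my-service-connection'     # placeholder service connection
    Destination: 'AzureBlob'
    storage: 'mysourcemapstorage'                  # placeholder storage account
    ContainerName: 'sourcemaps'
    BlobPrefix: '1.2.3'                            # match the deployment version prefix
```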
azure-monitor | Usage Cohorts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-cohorts.md | Title: Application Insights usage cohorts | Microsoft Docs -description: Analyze different sets or users, sessions, events, or operations that have something in common +description: Analyze different sets or users, sessions, events, or operations that have something in common. Last updated 07/30/2021 # Application Insights cohorts -A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set youΓÇÖre interested in. +A cohort is a set of users, sessions, events, or operations that have something in common. In Application Insights, cohorts are defined by an analytics query. In cases where you have to analyze a specific set of users or events repeatedly, cohorts can give you more flexibility to express exactly the set you're interested in. -## Cohorts versus basic filters +## Cohorts vs. basic filters -Cohorts are used in ways similar to filters. But cohorts' definitions are built from custom analytics queries, so they're much more adaptable and complex. Unlike filters, you can save cohorts so other members of your team can reuse them. +You can use cohorts in ways similar to filters. But cohorts' definitions are built from custom analytics queries, so they're much more adaptable and complex. Unlike filters, you can save cohorts so that other members of your team can reuse them. You might define a cohort of users who have all tried a new feature in your app. You can save this cohort in your Application Insights resource. It's easy to analyze this saved group of specific users in the future.- > [!NOTE]-> After they're created, cohorts are available from the Users, Sessions, Events, and User Flows tools. +> After cohorts are created, they're available from the Users, Sessions, Events, and User Flows tools. ## Example: Engaged users Your team defines an engaged user as anyone who uses your app five or more times in a given month. In this section, you define a cohort of these engaged users. -1. Select **Create a Cohort** --2. Select the **Template Gallery** tab. You see a collection of templates for various cohorts. --3. Select **Engaged Users -- by Days Used**. +1. Select **Create a Cohort**. +1. Select the **Template Gallery** tab to see a collection of templates for various cohorts. +1. Select **Engaged Users -- by Days Used**. There are three parameters for this cohort:- * **Activities**, where you choose which events and page views count as ΓÇ£usage.ΓÇ¥ - * **Period**, the definition of a month. - * **UsedAtLeastCustom**, the number of times users need to use something within a period to count as engaged. + * **Activities**: Where you choose which events and page views count as usage. + * **Period**: The definition of a month. + * **UsedAtLeastCustom**: The number of times users need to use something within a period to count as engaged. -4. Change **UsedAtLeastCustom** to **5+ days**, and leave **Period** on the default of 28 days. +1. Change **UsedAtLeastCustom** to **5+ days**. Leave **Period** set as the default of 28 days. - - Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28. 
+ Now this cohort represents all user IDs sent with any custom event or page view on 5 separate days in the past 28 days. -5. Select **Save**. +1. Select **Save**. > [!TIP]- > Give your cohort a name, like ΓÇ£Engaged Users (5+ Days).ΓÇ¥ Save it to ΓÇ£My reportsΓÇ¥ or ΓÇ£Shared reports,ΓÇ¥ depending on whether you want other people who have access to this Application Insights resource to see this cohort. + > Give your cohort a name, like *Engaged Users (5+ Days)*. Save it to *My reports* or *Shared reports*, depending on whether you want other people who have access to this Application Insights resource to see this cohort. -6. Select **Back to Gallery**. +1. Select **Back to Gallery**. ### What can you do by using this cohort? -Open the Users tool. In the **Show** drop-down box, choose the cohort you created under **Users who belong to**. +Open the Users tool. In the **Show** dropdown box, choose the cohort you created under **Users who belong to**. --A few important things to notice: +Important points to notice: * You can't create this set through normal filters. The date logic is more advanced.-* You can further filter this cohort by using the normal filters in the Users tool. So although the cohort is defined on 28-day windows, you can still adjust the time range in the Users tool to be 30, 60, or 90 days. +* You can further filter this cohort by using the normal filters in the Users tool. Although the cohort is defined on 28-day windows, you can still adjust the time range in the Users tool to be 30, 60, or 90 days. These filters support more sophisticated questions that are impossible to express through the query builder. An example is _people who were engaged in the past 28 days. How did those same people behave over the past 60 days?_ ## Example: Events cohort -You can also make cohorts of events. In this section, you define a cohort of the events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers _active usage_ or a set related to a certain new feature. --1. Select **Create a Cohort** +You can also make cohorts of events. In this section, you define a cohort of events and page views. Then you see how to use them from the other tools. This cohort might define a set of events that your team considers _active usage_ or a set related to a certain new feature. -2. Select the **Template Gallery** tab. YouΓÇÖll see a collection of templates for various cohorts. --3. Select **Events Picker**. --4. In the **Activities** drop-down box, select the events you want to be in the cohort. --5. Save the cohort and give it a name. +1. Select **Create a Cohort**. +1. Select the **Template Gallery** tab to see a collection of templates for various cohorts. +1. Select **Events Picker**. +1. In the **Activities** dropdown box, select the events you want to be in the cohort. +1. Save the cohort and give it a name. ## Example: Active users where you modify a query -The previous two cohorts were defined by using drop-down boxes. But you can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom. -+The previous two cohorts were defined by using dropdown boxes. You can also define cohorts by using analytics queries for total flexibility. To see how, create a cohort of users from the United Kingdom. 1. Open the Cohorts tool, select the **Template Gallery** tab, and select **Blank Users cohort**. 
- :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot of the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png"::: + :::image type="content" source="./media/usage-cohorts/cohort.png" alt-text="Screenshot that shows the template gallery for cohorts." lightbox="./media/usage-cohorts/cohort.png"::: There are three sections:- * A Markdown text section, where you describe the cohort in more detail for others on your team. -- * A parameters section, where you make your own parameters, like **Activities** and other drop-down boxes from the previous two examples. - * A query section, where you define the cohort by using an analytics query. + * **Markdown text**: Where you describe the cohort in more detail for other members on your team. + * **Parameters**: Where you make your own parameters, like **Activities**, and other dropdown boxes from the previous two examples. + * **Query**: Where you define the cohort by using an analytics query. - In the query section, you [write an analytics query](/azure/kusto/query). The query selects the certain set of rows that describe the cohort you want to define. The Cohorts tool then implicitly adds a ΓÇ£| summarize by user_IdΓÇ¥ clause to the query. This data is previewed below the query in a table, so you can make sure your query is returning results. + In the query section, you [write an analytics query](/azure/kusto/query). The query selects the certain set of rows that describe the cohort you want to define. The Cohorts tool then implicitly adds a `| summarize by user_Id` clause to the query. This data appears as a preview underneath the query in a table, so you can make sure your query is returning results. > [!NOTE]- > If you donΓÇÖt see the query, try resizing the section to make it taller and reveal the query. + > If you don't see the query, resize the section to make it taller and reveal the query. -2. Copy and paste the following text into the query editor: +1. Copy and paste the following text into the query editor: ```KQL union customEvents, pageViews | where client_CountryOrRegion == "United Kingdom" ``` -3. Select **Run Query**. If you don't see user IDs appear in the table, change to a country/region where your application has users. +1. Select **Run Query**. If you don't see user IDs appear in the table, change to a country/region where your application has users. -4. Save and name the cohort. +1. Save and name the cohort. -## Frequently asked questions +## Frequently asked question -_IΓÇÖve defined a cohort of users from a certain country/region. When I compare this cohort in the Users tool to just setting a filter on that country/region, I see different results. Why?_ +### I defined a cohort of users from a certain country/region. When I compare this cohort in the Users tool to setting a filter on that country/region, why do I see different results? -Cohorts and filters are different. Suppose you have a cohort of users from the United Kingdom (defined like the previous example), and you compare its results to setting the filter ΓÇ£Country or region = United Kingdom.ΓÇ¥ +Cohorts and filters are different. Suppose you have a cohort of users from the United Kingdom (defined like the previous example), and you compare its results to setting the filter `Country or region = United Kingdom`: * The cohort version shows all events from users who sent one or more events from the United Kingdom in the current time range. 
If you split by country or region, you likely see many countries and regions.-* The filters version only shows events from the United Kingdom. But if you split by country or region, you see only the United Kingdom. +* The filters version only shows events from the United Kingdom. If you split by country or region, you see only the United Kingdom. ## Learn more |
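For reference, the engaged-users definition from the first example (5+ distinct days of activity in the past 28 days) could be written directly as a query along these lines. This is an illustrative sketch, not the exact query the template generates:

```KQL
union customEvents, pageViews
| where timestamp > ago(28d)
| summarize daysUsed = dcount(bin(timestamp, 1d)) by user_Id
| where daysUsed >= 5
```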
azure-monitor | Metrics Supported | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md | Host OS metrics *are* available and listed in the tables. Host OS metrics relate > [!TIP] > A best practice is to use and configure the Azure Monitor agent to send guest OS performance metrics into the same Azure Monitor metric database where platform metrics are stored. The agent routes guest OS metrics through the [custom metrics](../essentials/metrics-custom-overview.md) API. You can then chart, alert, and otherwise use guest OS metrics like platform metrics. >-> Alternatively or in addition, you can send the guest OS metrics to Azure Monitor Logs by using the same agent. There you can query on those metrics in combination with non-metric data by using Log Analytics. +> Alternatively or in addition, you can send the guest OS metrics to Azure Monitor Logs by using the same agent. There you can query on those metrics in combination with non-metric data by using Log Analytics. Standard [Log Analytics workspace costs](https://azure.microsoft.com/pricing/details/monitor/) would then apply. The Azure Monitor agent replaces the Azure Diagnostics extension and Log Analytics agent, which were previously used for guest OS routing. For important additional information, see [Overview of Azure Monitor agents](../agents/agents-overview.md). This latest update adds a new column and reorders the metrics to be alphabetical - [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md) -<!--Gen Date: Wed Feb 01 2023 09:43:49 GMT+0200 (Israel Standard Time)--> +<!--Gen Date: Wed Feb 01 2023 09:43:49 GMT+0200 (Israel Standard Time)--> |
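As an example of what querying routed guest OS metrics in Log Analytics might look like, here's a sketch. It assumes the agent's performance counters land in the `Perf` table and uses a common processor counter; adjust the table and counter names to match your data collection rule.

```KQL
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 15m)
```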
azure-monitor | Prometheus Metrics Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-enable.md | Use `az aks update` with the `-enable-azuremonitormetrics` option to install the **Create a new default Azure Monitor workspace.**<br> If no Azure Monitor Workspace is specified, then a default Azure Monitor Workspace will be created in the `DefaultRG-<cluster_region>` following the format `DefaultAzureMonitorWorkspace-<mapped_region>`.-This Azure Monitor Workspace will be in the region specific in [Region mappings](#region-mappings). +This Azure Monitor Workspace is in the region specific in [Region mappings](#region-mappings). ```azurecli az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> ``` **Use an existing Azure Monitor workspace.**<br>-If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data will be available in Grafana. +If the Azure Monitor workspace is linked to one or more Grafana workspaces, then the data is available in Grafana. ```azurecli az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <workspace-name-resource-id> This creates a link between the Azure Monitor workspace and the Grafana workspac az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id> ``` -The output for each command will look similar to the following: +The output for each command looks similar to the following: ```json "azureMonitorProfile": { The output for each command will look similar to the following: #### Optional parameters Following are optional parameters that you can use with the previous commands. -- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotations keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include additional annotations provide a list of resource names in their plural form and Kubernetes annotation keys, you would like to allow for them. A single `*` can be provided per resource instead to allow any annotations, but that has severe performance implications.-- `--ksm-metric-labels-allow-list` is a comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include additional labels provide a list of resource names in their plural form and Kubernetes label keys you would like to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications.+- `--ksm-metric-annotations-allow-list` is a comma-separated list of Kubernetes annotations keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. To include more annotations provide a list of resource names in their plural form and Kubernetes annotation keys, you would like to allow for them. A single `*` can be provided per resource instead to allow any annotations, but that has severe performance implications. +- `--ksm-metric-labels-allow-list` is a comma-separated list of more Kubernetes label keys that will be used in the resource's labels metric. By default the metric contains only name and namespace labels. 
To include more labels provide a list of resource names in their plural form and Kubernetes label keys, you would like to allow for them. A single `*` can be provided per resource instead to allow any labels, but that has severe performance implications. **Use annotations and labels.** Following are optional parameters that you can use with the previous commands. az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --ksm-metric-labels-allow-list "namespaces=[k8s-label-1,k8s-label-n]" --ksm-metric-annotations-allow-list "pods=[k8s-annotation-1,k8s-annotation-n]" ``` -The output will be similar to the following: +The output is similar to the following: ```json "azureMonitorProfile": { The output will be similar to the following: ### Retrieve required values for Grafana resource From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**. -If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace. +If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace. ```json "properties": { If you're using an existing Azure Managed Grafana instance that already has been | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys that will be used in the resource's labels metric. |- | `metricAnnotationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. | + | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys that will be used in the resource's labels metric. | | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. | -4. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will be similar to the following: +4. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This is similar to the following: ```json { Currently in bicep, there is no way to explicitly "scope" the Monitoring Data Re From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**. 
-If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace. +If you're using an existing Azure Managed Grafana instance that already has been linked to an Azure Monitor workspace, then you need the list of Grafana integrations. Copy the value of the `azureMonitorWorkspaceIntegrations` field. If it doesn't exist, then the instance hasn't been linked with any Azure Monitor workspace. ```json "properties": { If you're using an existing Azure Managed Grafana instance that already has been 2. Download the parameter file from [here](https://aka.ms/azureprometheus-enable-bicep-template-parameters) and save it as **FullAzureMonitorMetricsProfileParameters.json** in the same directory as the main bicep template. 3. Download the [nested_azuremonitormetrics_dcra_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_dcra_clusterResourceId) and [nested_azuremonitormetrics_profile_clusterResourceId.bicep](https://aka.ms/nested_azuremonitormetrics_profile_clusterResourceId) files in the same directory as the main bicep template. 4. Edit the values in the parameter file.-5. The main bicep template creates all the required resources and uses 2 modules for creating the dcra and monitormetrics profile resources from the other two bicep files. +5. The main bicep template creates all the required resources and uses two modules for creating the dcra and monitormetrics profile resources from the other two bicep files. | Parameter | Value | |:|:| If you're using an existing Azure Managed Grafana instance that already has been | `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. | | `metricLabelsAllowlist` | Comma-separated list of Kubernetes labels keys that will be used in the resource's labels metric. |- | `metricAnnotationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. | + | `metricAnnotationsAllowList` | Comma-separated list of more Kubernetes label keys that will be used in the resource's labels metric. | | `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. | | `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. | -6. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. This will be similar to the following: +6. Open the template file and update the `grafanaIntegrations` property at the end of the file with the values that you retrieved from the Grafana instance. 
This is similar to the following: ```json { In this json, `full_resource_id_1` and `full_resource_id_2` were already in the The final `azureMonitorWorkspaceResourceId` entry is already in the template and is used to link to the Azure Monitor Workspace resource ID provided in the parameters file. +## [Azure Policy](#tab/azurepolicy) ++### Prerequisites ++- Register the `AKS-PrometheusAddonPreview` feature flag in the Azure Kubernetes clusters subscription with the following command in Azure CLI: `az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview`. +- The Azure Monitor workspace and Azure Managed Grafana workspace must already be created. ++### Download Azure policy rules and parameters and deploy ++1. Download the main Azure policy rules template from [here](https://aka.ms/AddonPolicyMetricsProfile) and save it as **AddonPolicyMetricsProfile.rules.json**. +2. Download the parameter file from [here](https://aka.ms/AddonPolicyMetricsProfile.parameters) and save it as **AddonPolicyMetricsProfile.parameters.json** in the same directory as the rules template. +3. Create the policy definition using a command like : `az policy definition create --name "(Preview) Prometheus Metrics addon" --display-name "(Preview) Prometheus Metrics addon" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules .\AddonPolicyMetricsProfile.rules.json --params .\AddonPolicyMetricsProfile.parameters.json` +4. After creating the policy definition, go to Azure portal -> Policy -> Definitions and select the Policy definition you created. +5. Click on 'Assign' and then go to the 'Parameters' tab and fill in the details. Then click 'Review + Create'. +6. Now that the policy is assigned to the subscription, whenever you create a new cluster, which does not have Prometheus enabled, the policy will run and deploy the resources. If you want to apply the policy to existing AKS cluster, create a 'Remediation task' for that AKS cluster resource after going to the 'Policy Assignment'. +7. Now you should see metrics flowing in the existing linked Grafana resource, which is linked with the corresponding Azure Monitor Workspace. ++In case you create a new Managed Grafana resource from Azure portal, please link it with the corresponding Azure Monitor Workspace from the 'Linked Grafana Workspaces' tab of the relevant Azure Monitor Workspace page. Please assign the role 'Monitoring Data Reader' to the Grafana MSI on the Azure Monitor Workspace resource so that it can read data for displaying the charts, using the instructions below. ++1. From the **Overview** page for the Azure Managed Grafana instance in the Azure portal, select **JSON view**. ++2. Copy the value of the `principalId` field for the `SystemAssigned` identity. ++```json +"identity": { + "principalId": "00000000-0000-0000-0000-000000000000", + "tenantId": "00000000-0000-0000-0000-000000000000", + "type": "SystemAssigned" + }, +``` +3. From the **Access control (IAM)** page for the Azure Managed Grafana instance in the Azure portal, select **Add** and then **Add role assignment**. +4. Select `Monitoring Data Reader`. +5. Select **Managed identity** and then **Select members**. +6. Select the **system-assigned managed identity** with the `principalId` from the Grafana resource. +7. Click **Select** and then **Review+assign**. ### Deploy template |
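For the Azure Policy entry above, the portal steps for assigning the policy and granting the Grafana managed identity access to the Azure Monitor workspace can also be scripted. This is a hedged sketch: names, scopes, and the assignment parameter-values file are placeholders, and the policy name must match the definition you created with `az policy definition create`.

```azurecli
# Assign the policy definition created earlier to a subscription.
az policy assignment create \
  --name "prometheus-metrics-addon" \
  --policy "(Preview) Prometheus Metrics addon" \
  --scope "/subscriptions/<sub-id>" \
  --params ./AddonPolicyMetricsProfile.assignment-parameters.json

# Grant the Grafana system-assigned identity read access to the Azure Monitor workspace.
az role assignment create \
  --assignee "<grafana-principal-id>" \
  --role "Monitoring Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Monitor/accounts/<azure-monitor-workspace-name>"
```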
azure-monitor | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/overview.md | Azure Monitor stores data in data stores for each of the pillars of observabilit |Pillar of Observability/<br>Data Store|Description| |||-|[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus based metrics](/articles/azure-monitor/essentials/prometheus-metrics-overview.md).| +|[Azure Monitor Metrics](essentials/data-platform-metrics.md)|Metrics are numerical values that describe an aspect of a system at a particular point in time. [Azure Monitor Metrics](./essentials/data-platform-metrics.md) is a time-series database, optimized for analyzing time-stamped data. Azure Monitor collects metrics at regular intervals. Metrics are identified with a timestamp, a name, a value, and one or more defining labels. They can be aggregated using algorithms, compared to other metrics, and analyzed for trends over time. It supports native Azure Monitor metrics and [Prometheus based metrics](essentials/prometheus-metrics-overview.md).| |[Azure Monitor Logs](logs/data-platform-logs.md)|Logs are recorded system events. Logs can contain different types of data, be structured or free-form text, and they contain a timestamp. Azure Monitor stores structured and unstructured log data of all types in [Azure Monitor Logs](./logs/data-platform-logs.md). You can route data to [Log Analytics workspaces](./logs/log-analytics-overview.md) for querying and analysis.| |Traces|Distributed traces identify the series of related events that follow a user request through a distributed system. A trace measures the operation and performance of your application across the entire set of components in your system. Traces can be used to determine the behavior of application code and the performance of different transactions. Azure Monitor gets distributed trace data from the Application Insights SDK. The trace data is stored in a separate workspace in Azure Monitor Logs.| |Changes|Changes are a series of events in your application and resources. They're tracked and stored when you use the [Change Analysis](./change/change-analysis.md) service, which uses [Azure Resource Graph](../governance/resource-graph/overview.md) as its store. Change Analysis helps you understand which changes, such as deploying updated code, may have caused issues in your systems.| |
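As a hedged illustration of the Logs data store described in the entry above, data in a Log Analytics workspace can be queried alongside non-metric data with a Kusto query from the Azure CLI. The workspace GUID and query are placeholders, and the `log-analytics` CLI extension is required.

```azurecli
# One-time: add the Log Analytics CLI extension.
az extension add --name log-analytics

# Run a Kusto query against a Log Analytics workspace (use the workspace customer ID/GUID).
az monitor log-analytics query \
  --workspace "<workspace-customer-id-guid>" \
  --analytics-query "Heartbeat | summarize LastSeen = max(TimeGenerated) by Computer | top 10 by LastSeen desc"
```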
azure-monitor | Vminsights Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-maps.md | Last updated 06/08/2022 # Use the Map feature of VM insights to understand application components In VM insights, you can view discovered application components on Windows and Linux virtual machines (VMs) that run in Azure or your environment. You can observe the VMs in two ways. View a map directly from a VM or view a map from Azure Monitor to see the components across groups of VMs. This article will help you understand these two viewing methods and how to use the Map feature. -For information about configuring VM insights, see [Enable VM insights](./vminsights-enable-overview.md). +For information about configuring VM insights, see [Enable VM insights](vminsights-enable-overview.md). ## Prerequisites-To enable the map feature in VM insights, the virtual machine requires one of the following. See [Enable VM insights on unmonitored machine](vminsights-maps.md) for details on each. +To enable the map feature in VM insights, the virtual machine requires one of the following. See [Enable VM insights on unmonitored machine](vminsights-enable-overview.md) for details on each. - Azure Monitor agent with **processes and dependencies** enabled. - Log Analytics agent enabled for VM insights. |
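The prerequisites in the VM insights entry above can be met by installing the Azure Monitor agent and the Dependency agent on the machine. The following is a hedged sketch for a Linux VM; resource names are placeholders, and a VM insights data collection rule with processes and dependencies enabled must also be associated with the VM, which isn't shown here.

```azurecli
# Install the Azure Monitor agent.
az vm extension set --resource-group <rg> --vm-name <vm-name> \
  --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor \
  --enable-auto-upgrade true

# Install the Dependency agent and point it at the Azure Monitor agent.
az vm extension set --resource-group <rg> --vm-name <vm-name> \
  --name DependencyAgentLinux --publisher Microsoft.Azure.Monitoring.DependencyAgent \
  --settings '{"enableAMA": "true"}'
```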
azure-netapp-files | Azure Netapp Files Create Volumes Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md | Before creating an SMB volume, you need to create an Active Directory connection ## Add an SMB volume -1. Click the **Volumes** blade from the Capacity Pools blade. +1. Select the **Volumes** blade from the Capacity Pools blade.  -2. Click **+ Add volume** to create a volume. +2. Select **+ Add volume** to create a volume. The Create a Volume window appears. -3. In the Create a Volume window, click **Create** and provide information for the following fields under the Basics tab: +3. In the Create a Volume window, select **Create** and provide information for the following fields under the Basics tab: * **Volume name** Specify the name for the volume that you are creating. Before creating an SMB volume, you need to create an Active Directory connection The **Available quota** field shows the amount of unused space in the chosen capacity pool that you can use towards creating a new volume. The size of the new volume must not exceed the available quota. + * **Large Volume** + If the quota of your volume is less than 100 TiB, select **No**. If the quota of your volume is greater than 100 TiB, select **Yes**. + [!INCLUDE [Large volumes warning](includes/large-volumes-notice.md)] + * **Throughput (MiB/S)** If the volume is created in a manual QoS capacity pool, specify the throughput you want for the volume. Before creating an SMB volume, you need to create an Active Directory connection Specify the subnet that you want to use for the volume. The subnet you specify must be delegated to Azure NetApp Files. - If you haven't delegated a subnet, you can click **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files. + If you haven't delegated a subnet, you can select **Create new** on the Create a Volume page. Then in the Create Subnet page, specify the subnet information, and select **Microsoft.NetApp/volumes** to delegate the subnet for Azure NetApp Files. In each VNet, only one subnet can be delegated to Azure NetApp Files.  Before creating an SMB volume, you need to create an Active Directory connection * **Availability zone** This option lets you deploy the new volume in the logical availability zone that you specify. Select an availability zone where Azure NetApp Files resources are present. For details, see [Manage availability zone volume placement](manage-availability-zone-volume-placement.md). - * If you want to apply an existing snapshot policy to the volume, click **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. + * If you want to apply an existing snapshot policy to the volume, select **Show advanced section** to expand it, specify whether you want to hide the snapshot path, and select a snapshot policy in the pull-down menu. For information about creating a snapshot policy, see [Manage snapshot policies](snapshots-manage-policy.md).  -4. Click **Protocol** and complete the following information: +4. Select **Protocol** and complete the following information: * Select **SMB** as the protocol type for the volume. * Select your **Active Directory** connection from the drop-down list. 
Before creating an SMB volume, you need to create an Active Directory connection  -5. Click **Review + Create** to review the volume details. Then click **Create** to create the SMB volume. +5. Select **Review + Create** to review the volume details. Then select **Create** to create the SMB volume. The volume you created appears in the Volumes page. You can modify SMB share permissions using Microsoft Management Console (MMC). ## Next steps * [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)+* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md) * [Mount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)-* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) +* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) * [Enable Continuous Availability on existing SMB volumes](enable-continuous-availability-existing-SMB.md) * [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md) |
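The portal flow in the entry above can also be approximated with the Azure CLI once the Active Directory connection exists on the NetApp account. A hedged sketch follows; all names are placeholders, and `--usage-threshold` is assumed to be specified in GiB.

```azurecli
# Create a 4-TiB SMB volume in an existing capacity pool. The SMB share name comes
# from --file-path, and the account's Active Directory connection is used automatically.
az netappfiles volume create \
  --resource-group <rg> \
  --account-name <netapp-account> \
  --pool-name <capacity-pool> \
  --name <volume-name> \
  --location <region> \
  --file-path <smb-share-name> \
  --protocol-types CIFS \
  --usage-threshold 4096 \
  --vnet <vnet-name> \
  --subnet <delegated-subnet-name>
```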
azure-netapp-files | Azure Netapp Files Create Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md | This article shows you how to create an NFS volume. For SMB volumes, see [Create The **Available quota** field shows the amount of unused space in the chosen capacity pool that you can use towards creating a new volume. The size of the new volume must not exceed the available quota. + * **Large Volume** + If the quota of your volume is less than 100 TiB, select **No**. If the quota of your volume is greater than 100 TiB, select **Yes**. + [!INCLUDE [Large volumes warning](includes/large-volumes-notice.md)] + * **Throughput (MiB/S)** If the volume is created in a manual QoS capacity pool, specify the throughput you want for the volume. This article shows you how to create an NFS volume. For SMB volumes, see [Create * [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md) * [Configure access control lists on NFSv4.1 with Azure NetApp Files](configure-access-control-lists.md) * [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)+* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md) |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | The following table describes resource limits for Azure NetApp Files: | Number of volumes per subscription | 500 | Yes | | Number of volumes per capacity pool | 500 | Yes | | Number of snapshots per volume | 255 | No |-| Number of IPs in a VNet (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | +| Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | | Minimum size of a single capacity pool | 2 TiB* | No |-| Maximum size of a single capacity pool | 500 TiB | No | -| Minimum size of a single volume | 100 GiB | No | -| Maximum size of a single volume | 100 TiB | No | +| Maximum size of a single capacity pool | 500 TiB | Yes | +| Minimum size of a single regular volume | 100 GiB | No | +| Maximum size of a single regular volume | 100 TiB | No | +| Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 102,401 GiB | No | +| Maximum size of a single large volume | 500 TiB | No | | Maximum size of a single file | 16 TiB | No | | Maximum size of directory metadata in a single directory | 320 MB | No | | Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No | -| Maximum number of files ([`maxfiles`](#maxfiles)) per volume | 106,255,630 | Yes | +| Maximum number of files [`maxfiles`](#maxfiles) per volume | 106,255,630 | Yes | | Maximum number of export policy rules per volume | 5 | No | +| Maximum number of quota rules per volume | 100 | Yes | | Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No | | Maximum assigned throughput for a manual QoS volume | 4,500 MiB/s | No | | Number of cross-region replication data protection volumes (destination volumes) | 10 | Yes | Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limi The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 21,251,126. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules: +**For volumes up to 100 TiB in size:** + | Volume size (quota) | Automatic readjustment of the `maxfiles` limit | |-|-|-| <= 1 TiB | 21,251,126 | +| <= 1 TiB | 21,251,126 | | > 1 TiB but <= 2 TiB | 42,502,252 | | > 2 TiB but <= 3 TiB | 63,753,378 | | > 3 TiB but <= 4 TiB | 85,004,504 |-| > 4 TiB | 106,255,630 | +| > 4 TiB but <= 100 TiB | 106,255,630 | >[!IMPORTANT] > If your volume has a volume size (quota) of more than 4 TiB and you want to increase the `maxfiles` limit, you must initiate [a support request](#request-limit-increase). 
You can increase the `maxfiles` limit to 531,278,150 if your volume quota is at >[!IMPORTANT] > Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, if you have crossed the 63,753,378 `maxfiles` limit, the volume quota cannot be reduced below its corresponding index of 2 TiB. +**For [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes):** ++| Volume size (quota) | Automatic readjustment of the `maxfiles` limit | +| - | - | +| > 100 TiB | 2,550,135,120 | + +You can increase the `maxfiles` limit beyond 2,550,135,120 using a support request. For every 2,550,135,120 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 120 TiB. For example, if you increase `maxfiles` limit from 2,550,135,120 to 5,100,270,240 files (or any number in between), you need to increase the volume quota to at least 240 TiB. + +The maximum `maxfiles` value for a 500 TiB volume is 10,625,563,000 files. + You cannot set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens to a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](#request-limit-increase) for the volume. ## Request limit increase You can create an Azure support request to increase the adjustable limits from t ## Next steps - [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md)+- [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md) - [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md) - [Regional capacity quota for Azure NetApp Files](regional-capacity-quota.md) - [Request region access for Azure NetApp Files](request-region-access.md) |
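Because the `maxfiles` limit tracks the provisioned volume quota, growing the volume is the usual way to raise it without a support request, within the ranges in the tables above. A hedged CLI sketch follows; names are placeholders, `--usage-threshold` is assumed to be in GiB, and the API reports `usageThreshold` in bytes.

```azurecli
# Check the current provisioned quota of a volume.
az netappfiles volume show \
  --resource-group <rg> --account-name <account> --pool-name <pool> --name <volume> \
  --query "usageThreshold"

# Grow the volume to 4 TiB (4,096 GiB); the maxfiles limit readjusts automatically.
az netappfiles volume update \
  --resource-group <rg> --account-name <account> --pool-name <pool> --name <volume> \
  --usage-threshold 4096
```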
azure-netapp-files | Azure Netapp Files Understand Storage Hierarchy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md | Understanding how capacity pools work helps you select the right capacity pool t ### General rules of capacity pools - A capacity pool is measured by its provisioned capacity. - For more information, see [QoS types](#qos_types). + For more information, see [QoS types](#qos_types). - The capacity is provisioned by the fixed SKUs that you purchased (for example, a 4-TiB capacity). - A capacity pool can have only one service level. - Each capacity pool can belong to only one NetApp account. However, you can have multiple capacity pools within a NetApp account. Understanding how capacity pools work helps you select the right capacity pool t ### <a name="qos_types"></a>Quality of Service (QoS) types for capacity pools -The QoS type is an attribute of a capacity pool. Azure NetApp Files provides two QoS types of capacity pools--*auto (default)* and *manual*. +The QoS type is an attribute of a capacity pool. Azure NetApp Files provides two QoS types of capacity pools: *auto (default)* and *manual*. #### *Automatic (or auto)* QoS type In a manual QoS capacity pool, you can assign the capacity and throughput for a ##### Example of using manual QoS -When you use a manual QoS capacity pool with, for example, an SAP HANA system, an Oracle database, or other workloads requiring multiple volumes, the capacity pool can be used to create these application volumes. Each volume can provide the individual size and throughput to meet the application requirements. See [Throughput limit examples of volumes in a manual QoS capacity pool](azure-netapp-files-service-levels.md#throughput-limit-examples-of-volumes-in-a-manual-qos-capacity-pool) for details about the benefits. +When you use a manual QoS capacity pool with, for example, an SAP HANA system, an Oracle database, or other workloads requiring multiple volumes, the capacity pool can be used to create these application volumes. Each volume can provide the individual size and throughput to meet the application requirements. See [Throughput limit examples of volumes in a manual QoS capacity pool](azure-netapp-files-service-levels.md#throughput-limit-examples-of-volumes-in-a-manual-qos-capacity-pool) for details about the benefits. ## <a name="volumes"></a>Volumes When you use a manual QoS capacity pool with, for example, an SAP HANA system, a - A volume's capacity consumption counts against its pool's provisioned capacity. - A volumeΓÇÖs throughput consumption counts against its poolΓÇÖs available throughput. See [Manual QoS type](#manual-qos-type). - Each volume belongs to only one pool, but a pool can contain multiple volumes. +- Volumes contain a capacity of between 4 TiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 100 TiB and 500 TiB. ++## Large volumes ++Azure NetApp Files allows you to create volumes up to 500 TiB in size, exceeding the previous 100-TiB limit. Large volumes begin at a capacity of 102,401 GiB and scale up to 500 TiB. Regular Azure NetApp Files volumes are offered between 100 GiB and 102,400 GiB. ++For more information, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md). 
## Next steps When you use a manual QoS capacity pool with, for example, an SAP HANA system, a - [Performance considerations for Azure NetApp Files](azure-netapp-files-performance-considerations.md) - [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md) - [Manage a manual QoS capacity pool](manage-manual-qos-capacity-pool.md)+- [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md) |
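To make the QoS discussion in the entry above concrete, here's a hedged CLI sketch of creating a manual QoS capacity pool and a volume with an explicitly assigned throughput. Names are placeholders; `--size` is assumed to be in TiB and `--usage-threshold` in GiB.

```azurecli
# Create a 4-TiB Premium capacity pool with the manual QoS type.
az netappfiles pool create \
  --resource-group <rg> --account-name <account> --pool-name <pool> \
  --location <region> --size 4 --service-level Premium --qos-type Manual

# Create a 1-TiB volume in that pool and assign it 128 MiB/s of the pool's throughput.
az netappfiles volume create \
  --resource-group <rg> --account-name <account> --pool-name <pool> --name <volume> \
  --location <region> --file-path <export-path> --protocol-types NFSv3 \
  --vnet <vnet-name> --subnet <delegated-subnet-name> \
  --usage-threshold 1024 --throughput-mibps 128
```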
azure-netapp-files | Backup Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md | This article describes the requirements and considerations you need to be aware You need to be aware of several requirements and considerations before using Azure NetApp Files backup: * Azure NetApp Files backup is available in the regions associated with your Azure NetApp Files subscription. -Azure NetApp Files backup in a region can only protect an Azure NetApp Files volume that is located in that same region. For example, backups created by the service in West US 2 for a volume located in West US 2 are sent to Azure storage that is located also in West US 2. Azure NetApp Files does not support backups or backup replication to a different region. +Azure NetApp Files backup in a region can only protect an Azure NetApp Files volume located in that same region. For example, backups created by the service in West US 2 for a volume located in West US 2 are sent to Azure storage also located in West US 2. Azure NetApp Files doesn't support backups or backup replication to a different region. * There can be a delay of up to 5 minutes in displaying a backup after the backup is actually completed. -* For large volumes (greater than 10 TB), it can take multiple hours to transfer all the data from the backup media. +* For volumes larger than 10 TB, it can take multiple hours to transfer all the data from the backup media. -* Currently, the Azure NetApp Files backup feature supports backing up the daily, weekly, and monthly local snapshots created by the associated snapshot policy to the Azure storage. Hourly backups are not currently supported. +* Currently, the Azure NetApp Files backup feature supports backing up the daily, weekly, and monthly local snapshots created by the associated snapshot policy to the Azure storage. Hourly backups aren't currently supported. -* Azure NetApp Files backup uses the [Zone-Redundant storage](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) (ZRS) account that replicates the data synchronously across three Azure availability zones in the region, except for the regions listed below where only [Locally Redundant Storage](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) (LRS) storage is supported: +* Azure NetApp Files backup uses the [Zone-Redundant storage](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) (ZRS) account that replicates the data synchronously across three Azure availability zones in the region, except for the regions listed where only [Locally Redundant Storage](../storage/common/storage-redundancy.md#redundancy-in-the-primary-region) (LRS) storage is supported: * West US LRS can recover from server-rack and drive failures. However, if a disaster such as a fire or flooding occurs within the data center, all replicas of a storage account using LRS might be lost or unrecoverable. * Using policy-based (scheduled) Azure NetApp Files backup requires that snapshot policy is configured and enabled. See [Manage snapshots by using Azure NetApp Files](azure-netapp-files-manage-snapshots.md). - The volume that needs to be backed up requires a configured snapshot policy for creating snapshots. The configured number of backups are stored in the Azure storage. + A configured snapshot policy for snapshots is required for the volume needing backup. 
The policy will also set the number of backups stored in Azure storage. -* If an issue occurs (for example, no sufficient space left on the volume) and causes the snapshot policy to stop creating new snapshots, the backup feature will not have any new snapshots to back up. +* If an issue occurs (for example, no sufficient space left on the volume) and causes the snapshot policy to stop creating new snapshots, the backup feature won't have any new snapshots to back up. -* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. It is not supported on a cross-region replication *destination* volume. +* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a cross-region replication *destination* volume. -* [Reverting a volume using snapshot revert](snapshots-revert-volume.md) is not supported on Azure NetApp Files volumes that have backups. +* [Reverting a volume using snapshot revert](snapshots-revert-volume.md) isn't supported on Azure NetApp Files volumes that have backups. -* See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups. +* See [Restore a backup to a new volume](backup-restore-new-volume.md) for other considerations related to restoring backups. * [Disabling backups](backup-disable.md) for a volume will delete all the backups stored in the Azure storage for that volume. If you delete a volume, the backups will remain. If you no longer need the backups, you should [manually delete the backups](backup-delete.md). -* If you need to delete a parent resource group or subscription that contains backups, you should delete any backups first. Deleting the resource group or subscription will not delete the backups. You can remove backups by [disabling backups](backup-disable.md) or [manually deleting the backups](backup-disable.md). -+* If you need to delete a parent resource group or subscription that contains backups, you should delete any backups first. Deleting the resource group or subscription won't delete the backups. You can remove backups by [disabling backups](backup-disable.md) or [manually deleting the backups](backup-disable.md). ## Next steps |
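Since policy-based backup depends on a snapshot policy being configured and enabled, a snapshot policy sketch may help here. This is a hedged example only; the parameter names are assumptions that vary by CLI version, so confirm them with `az netappfiles snapshot policy create --help`.

```azurecli
# Create and enable a snapshot policy that keeps 7 daily snapshots taken at 01:00.
# Parameter names are assumptions based on current CLI builds.
az netappfiles snapshot policy create \
  --resource-group <rg> --account-name <account> \
  --snapshot-policy-name daily-01 --location <region> \
  --daily-snapshots 7 --daily-hour 1 --daily-minute 0 \
  --enabled true
```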
azure-netapp-files | Configure Ldap Over Tls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-ldap-over-tls.md | Title: Configure ADDS LDAP over TLS for Azure NetApp Files | Microsoft Docs -description: Describes how to configure ADDS LDAP over TLS for Azure NetApp Files, including root CA certificate management. + Title: Configure AD DS LDAP over TLS for Azure NetApp Files | Microsoft Docs +description: Describes how to configure AD DS LDAP over TLS for Azure NetApp Files, including root CA certificate management. documentationcenter: '' -# Configure ADDS LDAP over TLS for Azure NetApp Files +# Configure AD DS LDAP over TLS for Azure NetApp Files You can use LDAP over TLS to secure communication between an Azure NetApp Files volume and the Active Directory LDAP server. You can enable LDAP over TLS for NFS, SMB, and dual-protocol volumes of Azure NetApp Files. ## Considerations * DNS PTR records must exist for each AD DS domain controller assigned to the **AD Site Name** specified in the Azure NetApp Files Active Directory connection. -* PTR records must exist for all domain controllers in the site for ADDS LDAP over TLS to function properly. +* PTR records must exist for all domain controllers in the site for AD DS LDAP over TLS to function properly. ## Generate and export root CA certificate If you do not have a root CA certificate, you need to generate one and export it for use with LDAP over TLS authentication. -1. Follow [Install the Certification Authority](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) to install and configure ADDS Certificate Authority. +1. Follow [Install the Certification Authority](/windows-server/networking/core-network-guide/cncg/server-certs/install-the-certification-authority) to install and configure AD DS Certificate Authority. 2. Follow [View certificates with the MMC snap-in](/dotnet/framework/wcf/feature-details/how-to-view-certificates-with-the-mmc-snap-in) to use the MMC snap-in and the Certificate Manager tool. Use the Certificate Manager snap-in to locate the root or issuing certificate for the local device. You should run the Certificate Management snap-in commands from one of the following settings: If you do not have a root CA certificate, you need to generate one and export it ## Enable LDAP over TLS and upload root CA certificate -1. Go to the NetApp account that is used for the volume, and click **Active Directory connections**. Then, click **Join** to create a new AD connection or **Edit** to edit an existing AD connection. +1. Go to the NetApp account used for the volume, and select **Active Directory connections**. Then, select **Join** to create a new AD connection or **Edit** to edit an existing AD connection. -2. In the **Join Active Directory** or **Edit Active Directory** window that appears, select the **LDAP over TLS** checkbox to enable LDAP over TLS for the volume. Then click **Server root CA Certificate** and upload the [generated root CA certificate](#generate-and-export-root-ca-certificate) to use for LDAP over TLS. +2. In the **Join Active Directory** or **Edit Active Directory** window that appears, select the **LDAP over TLS** checkbox to enable LDAP over TLS for the volume. Then select **Server root CA Certificate** and upload the [generated root CA certificate](#generate-and-export-root-ca-certificate) to use for LDAP over TLS.  
To resolve the error condition, upload a valid root CA certificate to your NetAp Disabling LDAP over TLS stops encrypting LDAP queries to Active Directory (LDAP server). There are no other precautions or impact on existing ANF volumes. -1. Go to the NetApp account that is used for the volume and click **Active Directory connections**. Then click **Edit** to edit the existing AD connection. +1. Go to the NetApp account that is used for the volume and select **Active Directory connections**. Then select **Edit** to edit the existing AD connection. -2. In the **Edit Active Directory** window that appears, deselect the **LDAP over TLS** checkbox and click **Save** to disable LDAP over TLS for the volume. +2. In the **Edit Active Directory** window that appears, deselect the **LDAP over TLS** checkbox and select **Save** to disable LDAP over TLS for the volume. ## Next steps |
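The portal steps above have a rough CLI equivalent when joining or editing the Active Directory connection. This is a hedged sketch: the `--ldap-over-tls` and `--server-root-ca-certificate` flag names are assumptions, so verify them with `az netappfiles account ad add --help`, and the certificate value must be the base64-encoded root CA certificate exported earlier.

```azurecli
# Join an AD connection with LDAP over TLS enabled, supplying the root CA certificate.
az netappfiles account ad add \
  --resource-group <rg> --account-name <netapp-account> \
  --domain <domain-name> --dns <dns-server-ip> --smb-server-name <smb-prefix> \
  --username <ad-admin-user> --password <ad-admin-password> \
  --ldap-over-tls true \
  --server-root-ca-certificate "$(base64 -w0 root-ca.cer)"
```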
azure-netapp-files | Create Volumes Dual Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md | To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu The **Available quota** field shows the amount of unused space in the chosen capacity pool that you can use towards creating a new volume. The size of the new volume must not exceed the available quota. + * **Large Volume** + If the quota of your volume is less than 100 TiB, select **No**. If the quota of your volume is greater than 100 TiB, select **Yes**. + [!INCLUDE [Large volumes warning](includes/large-volumes-notice.md)] + * **Throughput (MiB/S)** If the volume is created in a manual QoS capacity pool, specify the throughput you want for the volume. Follow instructions in [Configure an NFS client for Azure NetApp Files](configur ## Next steps * [Manage availability zone volume placement for Azure NetApp Files](manage-availability-zone-volume-placement.md)+* [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md) * [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md) * [Configure an NFS client for Azure NetApp Files](configure-nfs-clients.md) * [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md). |
azure-netapp-files | Cross Region Replication Create Peering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-create-peering.md | To authorize the replication, you need to obtain the resource ID of the replicat * [Manage disaster recovery](cross-region-replication-manage-disaster-recovery.md) * [Delete volume replications or volumes](cross-region-replication-delete.md) * [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md)+* [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md) * [Manage Azure NetApp Files volume replication with the CLI](/cli/azure/netappfiles/volume/replication) |
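The replication workflow in the entry above (authorize from the source volume using the destination volume's resource ID, then monitor the relationship) can also be driven from the CLI, as the linked CLI reference describes. A hedged sketch with placeholder names and IDs:

```azurecli
# Authorize replication on the source volume by passing the destination volume's resource ID.
az netappfiles volume replication approve \
  --resource-group <source-rg> --account-name <source-account> \
  --pool-name <source-pool> --name <source-volume> \
  --remote-volume-resource-id "/subscriptions/<sub-id>/resourceGroups/<dest-rg>/providers/Microsoft.NetApp/netAppAccounts/<dest-account>/capacityPools/<dest-pool>/volumes/<dest-volume>"

# Check the mirror state and transfer progress of the relationship.
az netappfiles volume replication status \
  --resource-group <source-rg> --account-name <source-account> \
  --pool-name <source-pool> --name <source-volume>
```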
azure-netapp-files | Cross Region Replication Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-delete.md | If you want to delete the source or destination volume, you must perform the fol * [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md) * [Display health status of replication relationship](cross-region-replication-display-health-status.md) * [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md)-+* [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md) |
azure-netapp-files | Cross Region Replication Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-requirements-considerations.md | This article describes requirements and considerations about [using the volume c * You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You cannot delete manual snapshots for the destination volume until the replication relationship is broken. * You can't revert a source or destination volume of cross-region replication to a snapshot. The snapshot revert functionality is greyed out for volumes in a replication relationship. -- ## Next steps * [Create volume replication](cross-region-replication-create-peering.md) * [Display health status of replication relationship](cross-region-replication-display-health-status.md) |
azure-netapp-files | Cross Zone Replication Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-requirements-considerations.md | This article describes requirements and considerations about [using the volume c ## Requirements and considerations * The cross-zone replication feature uses the [availability zone volume placement feature](use-availability-zones.md) of Azure NetApp Files.- * You can only use cross-zone replication in regions where the availability zone volume placement is supported. [!INCLUDE [Azure NetApp Files cross-zone-replication supported regions](includes/cross-zone-regions.md)] -* To establish cross-zone replication, the source volume needs to be created in an availability zone. + * You can only use cross-zone replication in regions that support the availability zone volume placement. [!INCLUDE [Azure NetApp Files cross-zone-replication supported regions](includes/cross-zone-regions.md)] +* To establish cross-zone replication, you must create the source volume in an availability zone. * You canΓÇÖt use cross-zone replication and cross-region replication together on the same source volume.-* SMB volumes are supported along with NFS volumes. Replication of SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or AD DS Domain Controllers that are reachable from the delegated subnet in the destination zone. For more information, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). +* You can use cross-zone replication with SMB and NFS volumes. Replication of SMB volumes requires an Active Directory connection in the source and destination NetApp accounts. The destination AD connection must have access to the DNS servers or AD DS Domain Controllers that are reachable from the delegated subnet in the destination zone. For more information, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). * The destination account must be in a different zone from the source volume zone. You can also select an existing NetApp account in a different zone. -* The replication destination volume is read-only until you fail over to the destination zone to enable the destination volume for read and write. For more information about the failover process, refer to [fail over to the destination volume](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume). +* The replication destination volume is read-only until you fail over to the destination zone to enable the destination volume for read and write. For more information about the failover process, see [fail over to the destination volume](cross-region-replication-manage-disaster-recovery.md#fail-over-to-destination-volume). * Azure NetApp Files replication doesn't currently support multiple subscriptions; all replications must be performed under a single subscription. * See [resource limits](azure-netapp-files-resource-limits.md) for the maximum number of cross-zone destination volumes. You can open a support ticket to [request a limit increase](azure-netapp-files-resource-limits.md#request-limit-increase) in the default quota of replication destination volumes (per subscription in a region). 
* There can be a delay up to five minutes for the interface to reflect a newly added snapshot on the source volume. -* Cascading and fan in/out topologies aren't supported. -* Configuring volume replication for source volumes created from snapshot isn't supported at this time. -* After you set up cross-zone replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete SnapMirror snapshots until replication relationship and volume is deleted. +* Cross-zone replication does not support cascading and fan in/out topologies. +* At this time, you can't configure volume replication for source volumes created from a snapshot with cross-zone replication. +* After you set up cross-zone replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete SnapMirror snapshots until you delete the replication relationship and volume. * You cannot mount a dual-protocol volume until you [authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume) and the initial [transfer](cross-region-replication-display-health-status.md#display-replication-status) happens.-* You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after the replication relationship is deleted. You cannot delete manual snapshots for the destination volume until the replication relationship is broken. -* You can't revert a source or destination volume of cross-zone replication to a snapshot. The snapshot revert functionality is greyed out for volumes in a replication relationship. +* You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after you've deleted the replication relationship. You cannot delete manual snapshots for the destination volume until you break the replication relationship. +* You can't revert a source or destination volume of cross-zone replication to a snapshot. The snapshot revert functionality is unavailable for volumes in a replication relationship. +* You can't currently use cross-zone replication with [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) (larger than 100 TiB). ## Next steps * [Understand cross-zone replication](cross-zone-replication-introduction.md) |
azure-netapp-files | Default Individual User Group Quotas Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/default-individual-user-group-quotas-introduction.md | + + Title: Understand default and individual user and group quotas for Azure NetApp Files volumes | Microsoft Docs +description: Helps you understand the use cases of managing default and individual user and group quotas for Azure NetApp Files volumes. ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na + Last updated : 02/23/2023+++# Understand default and individual user and group quotas ++User and group quotas enable you to restrict the logical space that a user or group can consume in a volume. User and group quotas apply to a specific Azure NetApp Files volume. ++## Introduction ++You can restrict user capacity consumption on Azure NetApp Files volumes by setting user and/or group quotas on volumes. User and group quotas differ from volume quotas in the way that they further restrict volume capacity consumption at the user and group level. ++To set a [volume quota](volume-quota-introduction.md), you can use the Azure portal or the Azure NetApp Files API to specify the maximum storage capacity for a volume. Once you set the volume quota, it defines the size of the volume, and there's no restriction on how much capacity any user can consume. ++To restrict usersΓÇÖ capacity consumption, you can set a user and/or group quota. You can set default and/or individual quotas. Once you set user or group quotas, users can't store more data in the volume than the specified user or group quota limit. ++By combining volume and user quotas, you can ensure that storage capacity is distributed efficiently and prevent any single user, or group of users, from consuming excessive amounts of storage. ++To understand considerations and manage user and group quotas for Azure NetApp Files volumes, see [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md). ++## Behavior of default and individual user and group quotas ++This section describes the behavior of user and group quotas. ++The following concepts and behavioral aspects apply to user and group quotas: +* The volume capacity that can be consumed can be restricted at the user and/or group level. + * User quotas are available for SMB, NFS, and dual-protocol volumes. + * Group quotas are **not** supported on SMB and dual-protocol volumes. +* When a user or group consumption reaches the maximum configured quota, further space consumption is prohibited. +* Individual user quota takes precedence over default user quota. +* Individual group quota takes precedence over default group quota. +* If you set group quota and user quota, the most restrictive quota is the effective quota. ++The following subsections describe and depict the behavior of the various quota types. ++### Default user quota ++A default user quota automatically applies a quota limit to *all users* accessing the volume without creating separate quotas for each target user. Each user can only consume the amount of storage as defined by the default user quota setting. No single user can exhaust the volumeΓÇÖs capacity, as long as the default user quota is less than the volume quota. The following diagram depicts this behavior. +++### Individual user quota ++An individual user quota applies a quota to *individual target user* accessing the volume. 
You can specify the target user by a UNIX user ID (UID) or a Windows security identifier (SID), depending on volume protocol (NFS or SMB). You can define multiple individual user quota settings on a volume. Each user can only consume the amount of storage as defined by their individual user quota setting. No single user can exhaust the volume's capacity, as long as the individual user quota is less than the volume quota. Individual user quotas override a default user quota, where applicable. The following diagram depicts this behavior. +++### Combining default and individual user quotas ++By combining default and individual user quota settings, you can create quota exceptions for specific users, allowing those users less or more capacity than the default user quota setting. In the following example, individual user quotas are set for `user1`, `user2`, and `user3`. Any other user is subject to the default user quota setting. The individual quota settings can be smaller or larger than the default user quota setting. The following diagram depicts this behavior. +++### Default group quota ++A default group quota automatically applies a quota limit to *all users within all groups* accessing the volume without creating separate quotas for each target group. The total consumption for all users in any group can't exceed the group quota limit. Group quotas aren't applicable to SMB and dual-protocol volumes. A single user can potentially consume the entire group quota. The following diagram depicts this behavior. +++### Individual group quota ++An individual group quota applies a quota to *all users within an individual target group* accessing the volume. The total consumption for all users *in that group* can't exceed the group quota limit. Group quotas aren't applicable to SMB and dual-protocol volumes. You specify the group by a UNIX group ID (GID). Individual group quotas override default group quotas where applicable. The following diagram depicts this behavior. +++### Combining individual and default group quota ++By combining default and individual group quota settings, you can create quota exceptions for specific groups, allowing those groups less or more capacity than the default group quota setting. Group quotas aren't applicable to SMB and dual-protocol volumes. In the following example, individual group quotas are set for `group1` and `group2`. Any other group is subject to the default group quota setting. The individual group quota settings can be smaller or larger than the default group quota setting. The following diagram depicts this scenario. +++### Combining default and individual user and group quotas ++You can combine the previously described quota options to achieve very specific quota definitions: (optionally) start by defining a default group quota, followed by individual group quotas matching your requirements. Then further tighten individual user consumption by (optionally) defining a default user quota, followed by individual user quotas matching individual user requirements. Group quotas aren't applicable to SMB and dual-protocol volumes. In the following example, a default group quota has been set as well as individual group quotas for `group1` and `group2`. Furthermore, a default user quota has been set as well as individual quotas for `user1`, `user2`, `user3`, `user5`, and `userZ`. The following diagram depicts this scenario.
+++## Observing user quota settings and consumption ++Users can observe user quota settings and consumption from their client systems connected to the NFS, SMB, or dual-protocol volumes respectively. Azure NetApp Files currently doesn't support reporting of group quota settings and consumption explicitly. The following sections describe how users can view their user quota setting and consumption. ++### Windows client ++Windows users can observe their user quota and consumption in Windows Explorer and by running the `dir` command. Assume a scenario where a 2-TiB volume with a 100-MiB default or individual user quota has been configured. On the client, this scenario is represented as follows: ++* Administrator view: ++ :::image type="content" source="../media/azure-netapp-files/user-quota-administrator-view.png" alt-text="Screenshot showing administrator view of user quota and consumption."::: ++* User view: ++ :::image type="content" source="../media/azure-netapp-files/user-quota-user-view.png" alt-text="Screenshot showing user view of user quota and consumption."::: ++### Linux client ++Linux users can observe their *user* quota and consumption by using the [`quota(1)`](https://man7.org/linux/man-pages/man1/quota.1.html) command. Assume a scenario where a 2-TiB volume with a 100-MiB default or individual user quota has been configured. On the client, this scenario is represented as follows: +++Azure NetApp Files currently doesn't support group quota reporting. However, you know you've reached your group's quota limit when you receive a `Disk quota exceeded` error when writing to the volume even though you haven't reached your user quota yet. ++In the following scenario, users `user4` and `user5` are members of `group2`. The group `group2` has a 200-MiB default or individual group quota assigned. The volume is already populated with 150 MiB of data owned by user `user4`. User `user5` appears to have a 100-MiB quota available as reported by the `quota(1)` command, but `user5` can't consume more than 50 MiB due to the remaining group quota for `group2`. User `user5` receives a `Disk quota exceeded` error message after writing 50 MiB, despite not reaching the user quota. +++> [!IMPORTANT] +> For quota reporting to work, the client needs access to port 4049/UDP on the Azure NetApp Files volumes' storage endpoint. When using NSGs with standard network features on the Azure NetApp Files delegated subnet, make sure that access is enabled. ++## Next steps ++* [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md) +* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) +* [Security identifiers](/windows-server/identity/ad-ds/manage/understand-security-identifiers) |
azure-netapp-files | Large Volumes Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md | + + Title: Requirements and considerations for large volumes | Microsoft Docs +description: Describes the requirements and considerations you need to be aware of before using large volumes. ++documentationcenter: '' +++editor: '' ++ms.assetid: ++++ na + Last updated : 02/23/2023+++# Requirements and considerations for large volumes (preview) ++This article describes the requirements and considerations you need to be aware of before using [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) on Azure NetApp Files. ++## Register the feature ++The large volumes feature for Azure NetApp Files is currently in public preview. This preview is offered under the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) and is controlled via Azure Feature Exposure Control (AFEC) settings on a per subscription basis. ++To enroll in the preview for large volumes, use the [large volumes preview sign-up form](https://aka.ms/anflargevolumespreviewsignup). ++## Requirements and considerations ++* Existing regular volumes can't be resized over 100 TiB. You can't convert regular Azure NetApp Files volumes to large volumes. +* You must create a large volume at a size greater than 100 TiB. A single volume can't exceed 500 TiB. +* You can't resize a large volume to less than 100 TiB. You can only resize a large volume up to 30% of its lowest provisioned size. +* Large volumes aren't currently supported with Azure NetApp Files backup. +* Large volumes aren't currently supported with cross-region replication. +* You can't create a large volume with application volume groups. +* Large volumes aren't currently supported with cross-zone replication. +* The SDK for large volumes isn't currently available. +* Throughput ceilings for the three performance tiers (Standard, Premium, and Ultra) of large volumes are based on the existing 100-TiB maximum capacity targets. You'll be able to grow to 500 TiB with the throughput ceiling per the following table. ++| Capacity tier | Volume size (TiB) | Throughput (MiB/s) | +| | | | +| Standard | 100 to 500 | 1,600 | +| Premium | 100 to 500 | 6,400 | +| Ultra | 100 to 500 | 10,240 | ++## Supported regions ++Support for Azure NetApp Files large volumes is available in the following regions: ++* Australia East +* Australia Southeast +* Brazil South +* Canada Central +* Central US +* East US +* East US 2 +* Germany West Central +* Japan East +* North Central US +* North Europe +* South Central US +* Switzerland North +* UAE North +* UK West +* UK South +* West Europe +* West US +* West US 2 +* West US 3 ++## Configure large volumes ++>[!IMPORTANT] +>Before you can use large volumes, you must first request [an increase in regional capacity quota](azure-netapp-files-resource-limits.md#request-limit-increase). ++Once your [regional capacity quota](regional-capacity-quota.md) has increased, you can create volumes that are up to 500 TiB in size. When creating a volume, after you designate the volume quota, you must select **Yes** for the **Large volume** field. Once created, you can manage your large volumes in the same manner as regular volumes. 
++## Next steps ++* [Storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md) +* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) +* [Create an NFS volume](azure-netapp-files-create-volumes.md) +* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md) +* [Create a dual-protocol volume](create-volumes-dual-protocol.md) |
azure-netapp-files | Manage Default Individual User Group Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md | + + Title: Manage default and individual user and group quotas for Azure NetApp Files volumes | Microsoft Docs +description: Describes the considerations and steps for managing user and group quotas for Azure NetApp Files volumes. ++++++ Last updated : 02/23/2023++# Manage default and individual user and group quotas for a volume ++This article explains the considerations and steps for managing user and group quotas on Azure NetApp Files volumes. To understand the use cases for this feature, see [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md). ++## Quotas in cross-region replication relationships ++Quota rules are synced from cross-region replication (CRR) source to destination volumes. Quota rules that you create, delete, or update on a CRR source volume automatically apply to the CRR destination volume. ++Quota rules only come into effect on the CRR destination volume after the replication relationship is deleted because the destination volume is read-only. To learn how to break the replication relationship, see [Delete volume replications](cross-region-replication-delete.md#delete-volume-replications). If source volumes have quota rules and you create the CRR destination volume at the same time as the source volume, all the quota rules are created on the destination volume. ++## Considerations ++* A quota rule is specific to a volume and is applied to an existing volume. +* Deleting a volume results in deleting all the associated quota rules for that volume. +* You can create a maximum of 100 quota rules for a volume. You can [request a limit increase](azure-netapp-files-resource-limits.md#request-limit-increase) through the portal. +* Azure NetApp Files doesn't support individual group quota and default group quota for SMB and dual protocol volumes. +* Group quotas track the consumption of disk space for files owned by a particular group. A file can only be owned by exactly one group. +* Auxiliary groups only help in permission checks. You can't use auxiliary groups to restrict the quota (disk space) for a file. +* In a cross-region replication setting: + * Currently, Azure NetApp Files doesn't support syncing quota rules to the destination (data protection) volume. + * You can’t create quota rules on the destination volume until you [delete the replication](cross-region-replication-delete.md). + * You need to manually create quota rules on the destination volume if you want them for the volume, and you can do so only after you delete the replication. + * If a quota rule is in the error state after you delete the replication relationship, you need to delete and re-create the quota rule on the destination volume. + * During sync or reverse resync operations: + * If you create, update, or delete a rule on a source volume, you must perform the same operation on the destination volume. + * If you create, update, or delete a rule on a destination volume after the deletion of the replication relationship, the rule will be reverted to keep the source and destination volumes in sync. 
+* If you're using [large volumes](large-volumes-requirements-considerations.md) (volumes larger than 100 TiB): + * The space and file usage in a large volume might exceed the configured hard limit by as much as five percent before the quota limit is enforced and traffic is rejected. + * To provide optimal performance, the space consumption may exceed the configured hard limit before the quota is enforced. The additional space consumption won't exceed the lower of 1 GB or five percent of the configured hard limit. + * After reaching the quota limit, if a user or administrator deletes files or directories to reduce quota usage under the limit, subsequent quota-consuming file operations may resume with a delay of up to five seconds. ++## Register the feature ++The feature to manage user and group quotas is currently in preview. Before using this feature for the first time, you need to register it. ++1. Register the feature: ++ ```azurepowershell-interactive + Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEnableVolumeUserGroupQuota + ``` ++2. Check the status of the feature registration: ++ ```azurepowershell-interactive + Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEnableVolumeUserGroupQuota + ``` + > [!NOTE] + > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing. ++You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. ++## Create new quota rules ++1. From the Azure portal, navigate to the volume for which you want to create a quota rule. Select **User and group quotas** in the navigation pane, then click **Add** to create a quota rule for a volume. ++ ++2. In the **New quota** window that appears, provide information for the following fields, then click **Create**. ++ * **Quota rule name**: + The name must be unique within the volume. ++ * **Quota type**: + Select one of the following options. For details, see [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md). + * `Default user quota` + * `Default group quota` + * `Individual user quota` + * `Individual group quota` ++ * **Quota target**: + * NFS volumes: + For individual user quota and individual group quota, specify a value in the range of `0` to `4294967295`. + For default quota, specify the value as `""`. + * SMB volumes: + For individual user quota, specify the range in the `^S-1-[0-59]-\d{2}-\d{8,10}-\d{8,10}-\d{8,10}-[1-9]\d{3}` format. + * Dual-protocol volumes: + For individual user quota using the SMB protocol, specify the range in the `^S-1-[0-59]-\d{2}-\d{8,10}-\d{8,10}-\d{8,10}-[1-9]\d{3}` format. + For individual user quota using the NFS protocol, specify a value in the range of `0` to `4294967295`. ++ * **Quota limit**: + Specify the limit in the range of `4` to `1125899906842620`. + Select `KiB`, `MiB`, `GiB`, or `TiB` from the pulldown. ++## Edit or delete quota rules ++1. On the Azure portal, navigate to the volume whose quota rule you want to edit or delete. Select `…` at the end of the quota rule row, then select **Edit** or **Delete** as appropriate. ++ ++ 1. If you're editing a quota rule, update **Quota Limit** in the Edit User Quota Rule window that appears. + + ++ 1. If you're deleting a quota rule, confirm the deletion by selecting **Yes**. 
+ +  ++## Next steps +* [Understand default and individual user and group quotas](default-individual-user-group-quotas-introduction.md) +* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) |
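The **Create new quota rules** steps in the article above use the Azure portal. If you prefer to script rule creation, the hedged sketch below calls the ARM REST API directly. The `volumeQuotaRules` path segment, the property names, and the `api-version` value are assumptions inferred from the portal fields described above; verify them against the Azure NetApp Files REST API reference before relying on this.

```python
# Hedged sketch: create an individual user quota rule via the ARM REST API.
# Resource path segment, property names, and api-version are assumptions; confirm
# them against the Azure NetApp Files REST API reference.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

subscription = "YOUR-SUBSCRIPTION-ID"      # placeholder
resource_group = "YOUR-RESOURCE-GROUP"     # placeholder
account, pool, volume = "YOUR-ACCOUNT", "YOUR-POOL", "YOUR-VOLUME"  # placeholders
rule_name = "quota-user1"                  # placeholder

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.NetApp/netAppAccounts/{account}/capacityPools/{pool}"
    f"/volumes/{volume}/volumeQuotaRules/{rule_name}"
)
body = {
    "location": "eastus",
    "properties": {
        "quotaType": "IndividualUserQuota",  # or DefaultUserQuota, IndividualGroupQuota, DefaultGroupQuota
        "quotaTarget": "1001",               # UID (NFS) or SID (SMB); empty for default quotas
        "quotaSizeInKiBs": 102400,           # 100 MiB
    },
}

response = requests.put(
    url,
    params={"api-version": "2022-05-01"},    # assumption; use the latest supported version
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
print(response.status_code, response.json())
```

The quota type, target, and limit values mirror the portal fields listed in the steps above; only the transport differs.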
azure-netapp-files | Volume Hard Quota Guidelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/volume-hard-quota-guidelines.md | -From the beginning of the service, Azure NetApp Files has been using a capacity-pool provisioning and automatic growth mechanism. Azure NetApp Files volumes are thinly provisioned on an underlaying, customer-provisioned capacity pool of a selected tier and size. Volume sizes (quotas) are used to provide performance and capacity, and the quotas can be adjusted on-the-fly at any time. This behavior means that, currently, the volume quota is a performance lever used to control bandwidth to the volume. Currently, underlaying capacity pools automatically grow when the capacity fills up. +From the beginning of the service, Azure NetApp Files has been using a capacity-pool provisioning and automatic growth mechanism. Azure NetApp Files volumes are thinly provisioned on an underlying, customer-provisioned capacity pool of a selected tier and size. Volume sizes (quotas) are used to provide performance and capacity, and the quotas can be adjusted on-the-fly at any time. This behavior means that, currently, the volume quota is a performance lever used to control bandwidth to the volume. Currently, underlying capacity pools automatically grow when the capacity fills up. > [!IMPORTANT] > The Azure NetApp Files behavior of volume and capacity pool provisioning will change to a *manual* and *controllable* mechanism. **Starting from April 30, 2021 (updated), volume sizes (quota) will manage bandwidth performance, as well as provisioned capacity, and underlying capacity pools will no longer grow automatically.** Because of the volume hard quota change, you should change your operating model. The volume hard quota change will result in changes in provisioned and available capacity for previously provisioned volumes and pools. As a result, some capacity allocation challenges might happen. To avoid short-term out-of-space situations for customers, the Azure NetApp Files team recommends the following, one-time corrective/preventative measures: * **Provisioned volume sizes**: - Resize every provisioned volume to have appropriate buffer based on change rate and alerting or resize turnaround time (for example, 20% based on typical workload considerations), with a maximum of 100 TiB (which is the [volume size limit](azure-netapp-files-resource-limits.md#resource-limits)). This new volume size, including buffer capacity, should be based on the following factors: + Resize every provisioned volume to have appropriate buffer based on change rate and alerting or resize turnaround time (for example, 20% based on typical workload considerations), with a maximum of 100 TiB (which is the regular [volume size limit](azure-netapp-files-resource-limits.md#resource-limits)). This new volume size, including buffer capacity, should be based on the following factors: * **Provisioned** volume capacity, in case the used capacity is less than the provisioned volume quota. * **Used** volume capacity, in case the used capacity is more than the provisioned volume quota. There is no additional charge for volume-level capacity increase if the underlying capacity pool does not need to be grown. As an effect of this change, you might observe a bandwidth limit *increase* for the volume (in case the [auto QoS capacity pool type](azure-netapp-files-understand-storage-hierarchy.md#qos_types) is used). |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t ## February 2023 +* [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) (Preview) ++ Azure NetApp Files volumes provide flexible, large, and scalable storage shares for applications and users. Storage capacity and consumption by users are limited only by the size of the volume. In some scenarios, you may want to limit the storage consumption of users and groups within the volume. With Azure NetApp Files volume user and group quotas, you can now do so. User and/or group quotas enable you to restrict the storage space that a user or group can use within a specific Azure NetApp Files volume. You can choose to set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can set default (same for all groups) or individual group quotas. ++* [Large volumes](large-volumes-requirements-considerations.md) (Preview) ++ Regular Azure NetApp Files volumes are limited to 100 TiB in size. Azure NetApp Files [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) break this barrier by enabling volumes of 100 TiB to 500 TiB in size. The large volumes capability enables a variety of use cases and workloads that require large volumes with a single directory namespace. + * [Customer-managed keys](configure-customer-managed-keys.md) (Preview) Azure NetApp Files volumes now support encryption with customer-managed keys and Azure Key Vault to enable an extra layer of security for data at rest. |
cognitive-services | Get Started Intent Recognition Clu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-intent-recognition-clu.md | + + Title: "Intent recognition with CLU quickstart - Speech service" ++description: In this quickstart, you recognize intents from audio data with the Speech service and Language service. ++++++ Last updated : 02/22/2023++zone_pivot_groups: programming-languages-set-thirteen +keywords: intent recognition +++# Quickstart: Recognize intents with Conversational Language Understanding +++++## Next steps ++> [!div class="nextstepaction"] +> [Learn more about speech recognition](how-to-recognize-speech.md) |
cognitive-services | Get Started Intent Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md | keywords: intent recognition # Quickstart: Recognize intents with the Speech service and LUIS +> [!IMPORTANT] +> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](/azure/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis) to [conversational language understanding](/azure/cognitive-services/language-service/conversational-language-understanding/overview) to benefit from continued product support and multilingual capabilities. +> +> Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU. + ::: zone pivot="programming-language-csharp" [!INCLUDE [C# include](includes/quickstarts/intent-recognition/csharp.md)] ::: zone-end |
cognitive-services | How To Pronunciation Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md | Pronunciation assessment results for the spoken word "hello" are shown as a JSON } ``` +## Pronunciation assessment in streaming mode ++Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, and `CompletenessScore` will vary over time throughout the recording and evaluation process. +++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#:~:text=PronunciationAssessmentWithStream). ++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#:~:text=PronunciationAssessmentWithStream). ++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/java/android/sdkdemo/app/src/main/java/com/microsoft/cognitiveservices/speech/samples/sdkdemo/MainActivity.java#L548). +++++++For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessment.js). ++++++++ ## Next steps +- Learn about our quality [benchmark](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/speech-service-update-hierarchical-transformer-for-pronunciation/ba-p/3740866) - Try out [pronunciation assessment in Speech Studio](pronunciation-assessment-tool.md)-- Try out the [pronunciation assessment demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment.+- Check out the easy-to-deploy pronunciation assessment [demo](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment/BrowserJS) and watch the [video tutorial](https://www.youtube.com/watch?v=zFlwm7N4Awc) of pronunciation assessment. |
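The streaming-mode samples linked in the article above cover C#, C++, Java, and JavaScript. As a rough companion only, here's a minimal Python sketch of the same idea with the Speech SDK. The key, region, audio file name, and reference text are placeholders, and continuous recognition is used so scores arrive while audio is still streaming.

```python
# Minimal sketch: pronunciation assessment with continuous (streaming) recognition.
# Replace the placeholder key, region, audio file, and reference text with your own values.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YourSpeechKey", region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(filename="reading.wav")

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Hello world",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    enable_miscue=True,
)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pron_config.apply_to(recognizer)

def on_recognized(evt):
    # Scores arrive per recognized chunk while audio keeps streaming.
    result = speechsdk.PronunciationAssessmentResult(evt.result)
    print(f"accuracy={result.accuracy_score} fluency={result.fluency_score} "
          f"completeness={result.completeness_score} pronunciation={result.pronunciation_score}")

recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()
input("Press Enter to stop...\n")
recognizer.stop_continuous_recognition()
```

As the article notes, accuracy and fluency vary over time during the recording; stop recognition when you're done to get the final scores.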
cognitive-services | How To Use Custom Entity Pattern Matching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-custom-entity-pattern-matching.md | In this guide, you use the Speech SDK to develop a console application that deri ## When to use pattern matching -Use this sample code if: -* You're only interested in matching strictly what the user said. These patterns match more aggressively than LUIS. -* You don't have access to a [LUIS](../LUIS/index.yml) app, but still want intents. -* You can't or don't want to create a [LUIS](../LUIS/index.yml) app but you still want some voice-commanding capability. +Use pattern matching if: +* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview). +* You don't have access to a CLU model, but still want intents. For more information, see the [pattern matching overview](./pattern-matching-overview.md). |
cognitive-services | How To Use Simple Language Pattern Matching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-use-simple-language-pattern-matching.md | In this guide, you use the Speech SDK to develop a C++ console application that ## When to use pattern matching -Use this sample code if: -* You're only interested in matching strictly what the user said. These patterns match more aggressively than LUIS. -* You don't have access to a [LUIS](../LUIS/index.yml) app, but still want intents. -* You can't or don't want to create a [LUIS](../LUIS/index.yml) app but you still want some voice-commanding capability. +Use pattern matching if: +* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview). +* You don't have access to a CLU model, but still want intents. For more information, see the [pattern matching overview](./pattern-matching-overview.md). |
cognitive-services | Intent Recognition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/intent-recognition.md | keywords: intent recognition In this overview, you will learn about the benefits and capabilities of intent recognition. The Cognitive Services Speech SDK provides two ways to recognize intents, both described below. An intent is something the user wants to do: book a flight, check the weather, or make a call. Using intent recognition, your applications, tools, and devices can determine what the user wants to initiate or do based on options you define in the Intent Recognizer or LUIS. ## Pattern matching-The SDK provides an embedded pattern matcher that you can use to recognize intents in a very strict way. This is useful for when you need a quick offline solution. This works especially well when the user is going to be trained in some way or can be expected to use specific phrases to trigger intents. For example: "Go to floor seven", or "Turn on the lamp" etc. It is recommended to start here and if it no longer meets your needs, switch to using LUIS or a combination of the two. -## LUIS (Language Understanding Intent Service) -The Microsoft LUIS service is available as a complete AI intent service that works well when your domain of possible intents is large and you are not really sure what the user will say. It supports many complex scenarios, intents, and entities. +The Speech SDK provides an embedded pattern matcher that you can use to recognize intents in a very strict way. This is useful for when you need a quick offline solution. This works especially well when the user is going to be trained in some way or can be expected to use specific phrases to trigger intents. For example: "Go to floor seven", or "Turn on the lamp" etc. It is recommended to start here and if it no longer meets your needs, switch to using LUIS or a combination of the two. -### LUIS key required +Use pattern matching if: +* You're only interested in matching strictly what the user said. These patterns match more aggressively than [conversational language understanding (CLU)](/azure/cognitive-services/language-service/conversational-language-understanding/overview). +* You don't have access to a CLU model, but still want intents. -* LUIS integrates with the Speech service to recognize intents from speech. You don't need a Speech service subscription, just LUIS. -* Speech intent recognition is integrated with the Speech SDK. You can use a LUIS key with the Speech service. -* Intent recognition through the Speech SDK is [offered in a subset of regions supported by LUIS](./regions.md#intent-recognition). +For more information, see the [pattern matching concepts](./pattern-matching-overview.md) and then: +* Start with [simple pattern matching](how-to-use-simple-language-pattern-matching.md). +* Improve your pattern matching by using [custom entities](how-to-use-custom-entity-pattern-matching.md). -## Get started -See this [how-to](how-to-use-simple-language-pattern-matching.md) to get started with pattern matching. +## Conversational Language Understanding -See this [quickstart](get-started-intent-recognition.md) to get started with LUIS intent recognition. +Conversational language understanding (CLU) enables users to build custom natural language understanding models to predict the overall intention of an incoming utterance and extract important information from it. 
-## Sample code +Both a Speech resource and Language resource are required to use CLU with the Speech SDK. The Speech resource is used to transcribe the user's speech into text, and the Language resource is used to recognize the intent of the utterance. To get started, see the [quickstart](get-started-intent-recognition-clu.md). -Sample code for intent recognition: +> [!IMPORTANT] +> When you use conversational language understanding with the Speech SDK, you are charged both for the Speech-to-text recognition request and the Language service request for CLU. For more information about pricing for conversational language understanding, see [Language service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/language-service/). -* [Quickstart: Use prebuilt Home automation app](../luis/luis-get-started-create-app.md) -* [Recognize intents from speech using the Speech SDK for C#](./how-to-recognize-intents-from-speech-csharp.md) -* [Intent recognition and other Speech services using Unity in C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/csharp/unity/speechrecognizer) -* [Recognize intents using Speech SDK for Python](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/python/console) -* [Intent recognition and other Speech services using the Speech SDK for C++ on Windows](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/cpp/windows/console) -* [Intent recognition and other Speech services using the Speech SDK for Java on Windows or Linux](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/java/jre/console) -* [Intent recognition and other Speech services using the Speech SDK for JavaScript on a web browser](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser) +For information about how to use conversational language understanding without the Speech SDK and without speech recognition, see the [Language service documentation](/azure/cognitive-services/language-service/conversational-language-understanding/overview). -## Reference docs --* [Speech SDK](./speech-sdk.md) +> [!IMPORTANT] +> LUIS will be retired on October 1st 2025 and starting April 1st 2023 you will not be able to create new LUIS resources. We recommend [migrating your LUIS applications](/azure/cognitive-services/language-service/conversational-language-understanding/how-to/migrate-from-luis) to [conversational language understanding](/azure/cognitive-services/language-service/conversational-language-understanding/overview) to benefit from continued product support and multilingual capabilities. +> +> Conversational Language Understanding (CLU) is available for C# and C++ with the [Speech SDK](speech-sdk.md) version 1.25 or later. See the [quickstart](get-started-intent-recognition-clu.md) to recognize intents with the Speech SDK and CLU. ## Next steps -* [Intent recognition quickstart](get-started-intent-recognition.md) -* [Get the Speech SDK](speech-sdk.md) +* [Intent recognition with simple pattern matching](how-to-use-simple-language-pattern-matching.md) +* [Intent recognition with CLU quickstart](get-started-intent-recognition-clu.md) |
cognitive-services | Pronunciation Assessment Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md | Pronunciation assessment uses the Speech-to-Text capability to provide subjectiv Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input. - At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many words are pronounced in the speech to the reference text input. An overall score aggregated from Accuracy, Fluency and Completeness is then given to indicate the overall pronunciation quality of the given speech. - At the word-level, pronunciation assessment can automatically detect miscues and provide accuracy score simultaneously, which provides more detailed information on omission, repetition, insertions, and mispronunciation in the given speech.-- Syllable-level accuracy scores are currently only available via the [JSON file](?tabs=json#scores-within-words) or [Speech SDK](how-to-pronunciation-assessment.md).+- Syllable-level accuracy scores are currently available via the [JSON file](?tabs=json#pronunciation-assessment-results) or [Speech SDK](how-to-pronunciation-assessment.md). - At the phoneme level, pronunciation assessment provides accuracy scores of each phoneme, helping learners to better understand the pronunciation details of their speech. This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md). Follow these steps to assess your pronunciation of the reference text: :::image type="content" source="media/pronunciation-assessment/pa-upload.png" alt-text="Screenshot of uploading recorded audio to be assessed."::: - ## Pronunciation assessment results Once you've recorded the reference text or uploaded the recorded audio, the **Assessment result** will be output. The result includes your spoken audio and the feedback on the accuracy and fluency of spoken audio, by comparing a machine generated transcript of the input audio with the reference text. You can listen to your spoken audio, and download it if necessary. You can also check the pronunciation assessment result in JSON. The word-level, syllable-level, and phoneme-level accuracy scores are included in the JSON file. -### Overall scores --Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score**. The **Accuracy score** and the **Fluency score** will vary over time throughout the recording process. The **Completeness score** is only calculated at the end of the evaluation. The **Pronunciation score** is overall score indicating the pronunciation quality of the given speech. During recording, the **Pronunciation score** is aggregated from **Accuracy score** and **Fluency score** with weight. 
Once completing recording, this overall score is aggregated from **Accuracy score**, **Fluency score**, and **Completeness score** with weight. --**During recording** ---**Completing recording** ---### Scores within words - ### [Display](#tab/display) The complete transcription is shown in the **Display** window. If a word is omitted, inserted, or mispronounced compared to the reference text, the word will be highlighted according to the error type. While hovering over each word, you can see accuracy scores for the whole word or specific phonemes. The complete transcription is shown in the `text` attribute. You can see accurac +### Assessment scores in streaming mode ++Pronunciation Assessment supports uninterrupted streaming mode. The Speech Studio demo supports up to 60 minutes of recording in streaming mode for evaluation. As long as you don't press the stop recording button, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. ++Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see the **Pronunciation score** as an aggregated overall score that includes three sub-aspects: **Accuracy score**, **Fluency score**, and **Completeness score**. In streaming mode, the **Accuracy score**, **Fluency score**, and **Completeness score** vary over time throughout the recording process, so Speech Studio displays an approximate overall score incrementally before the end of the evaluation, weighted only by the Accuracy score and Fluency score. The **Completeness score** is only calculated at the end of the evaluation after you press the stop button, so the final overall score is aggregated from **Accuracy score**, **Fluency score**, and **Completeness score** with weight. +Refer to the following demo examples for the whole process of evaluating pronunciation in streaming mode. ++**Start recording** ++As you start recording, the scores at the bottom begin to update from 0. +++**During recording** ++While recording a long paragraph, you can pause recording at any time. You can continue to evaluate your recording as long as you don't press the stop button. +++**Finish recording** ++After you press the stop button, you can see the **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score** at the bottom. + ## Next steps |
cognitive-services | Record Custom Voice Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md | A highly-natural custom neural voice depends on several factors, like the qualit The quality of your training data is a primary factor. For example, in the same training set, consistent volume, speaking rate, speaking pitch, and speaking style are essential to create a high-quality custom neural voice. You should also avoid background noise in the recording and make sure the script and recording match. To ensure the quality of your data, you need to follow [script selection criteria](#script-selection-criteria) and [recording requirements](#recording-your-script). -Regarding the size of the training data, in most cases you can build a reasonable custom neural voice with 500 utterances. According to our tests, adding more training data in most languages does not necessarily improve naturalness of the voice itself (tested using the MOS score), however, with more training data that covers more word instances, you have higher possibility to reduce the DSAT (dis-satisfied part of the speech, for example, the glitches) ratio for the voice. +Regarding the size of the training data, in most cases you can build a reasonable custom neural voice with 500 utterances. According to our tests, adding more training data in most languages does not necessarily improve the naturalness of the voice itself (tested using the MOS score). However, with more training data that covers more word instances, you have a better chance of reducing the ratio of dissatisfactory parts of speech for the voice, such as glitches. To hear what dissatisfactory parts of speech sound like, refer to [the GitHub examples](https://github.com/Azure-Samples/Cognitive-Speech-TTS/blob/master/CustomVoice/DSAT-examples.md). In some cases, you may want a voice persona with unique characteristics. For example, a cartoon persona needs a voice with a special speaking style, or a voice that is very dynamic in intonation. For such cases, we recommend that you prepare at least 1000 (preferably 2000) utterances, and record them at a professional recording studio. To learn more about how to improve the quality of your voice model, see [characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context). |
cognitive-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md | +For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). + ## Quotas and limits reference -The following sections provide you with a quick guide to the quotas and limits that apply to Speech service. +The following sections provide you with a quick guide to the quotas and limits that apply to the Speech service. ++For information about adjustable quotas for Standard (S0) Speech resources, see [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). The quotas and limits for Free (F0) Speech resources aren't adjustable. ### Speech-to-text quotas and limits per resource -In the following tables, the parameters without the **Adjustable** row aren't adjustable for all price tiers. +This section describes speech-to-text quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable. -#### Online transcription +#### Online transcription and speech translation You can use online transcription with the [Speech SDK](speech-sdk.md) or the [speech-to-text REST API for short audio](rest-speech-to-text-short.md). -| Quota | Free (F0)<sup>1</sup> | Standard (S0) | +> [!IMPORTANT] +> These limits apply to concurrent speech-to-text online transcription requests and speech translation requests combined. For example, if you have 60 concurrent speech-to-text requests and 40 concurrent speech translation requests, you'll reach the limit of 100 concurrent requests. ++| Quota | Free (F0) | Standard (S0) | |--|--|--|-| Concurrent request limit - base model endpoint | 1 | 100 (default value) | -| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> | -| Concurrent request limit - custom endpoint | 1 | 100 (default value) | -| Adjustable | No<sup>2</sup> | Yes<sup>2</sup> | +| Concurrent request limit - base model endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). | +| Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit). 
| #### Batch transcription -| Quota | Free (F0)<sup>1</sup> | Standard (S0) | +| Quota | Free (F0) | Standard (S0) | |--|--|--| | [Speech-to-text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute | | Max audio input file size | N/A | 1 GB | You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp #### Model customization -| Quota | Free (F0)<sup>1</sup> | Standard (S0) | +The limits in this table apply per Speech resource when you create a Custom Speech model. ++| Quota | Free (F0) | Standard (S0) | |--|--|--| | REST API limit | 300 requests per minute | 300 requests per minute | | Max number of speech datasets | 2 | 500 | You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp | Max pronunciation dataset file size for data import | 1 KB | 1 MB | | Max text size when you're using the `text` parameter in the [Models_Create](https://westcentralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Models_Create/) API request | 200 KB | 500 KB | -<sup>1</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/> -<sup>2</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-online-transcription-concurrent-request-limit).<br/> +### Text-to-speech quotas and limits per resource -### Text-to-speech quotas and limits per Speech resource +This section describes text-to-speech quotas and limits per Speech resource. Unless otherwise specified, the limits aren't adjustable. -In the following tables, the parameters without the **Adjustable** row aren't adjustable for all price tiers. +#### Common text-to-speech quotas and limits -#### General --| Quota | Free (F0)<sup>3</sup> | Standard (S0) | +| Quota | Free (F0) | Standard (S0) | |--|--|--|-| **Max number of transactions per certain time period** | | | -| Real-time API. Prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds | 200 transactions per second (TPS) (default value) | -| Adjustable | No<sup>4</sup> | Yes<sup>5</sup>, up to 1000 TPS | -| **HTTP-specific quotas** | | | +| Maximum number of transactions per time period for prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds<br/><br/>This limit isn't adjustable. | 200 transactions per second (TPS) (default value)<br/><br/>The rate is adjustable up to 1000 TPS for Standard (S0) resources. See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit). 
| | Max audio length produced per request | 10 min | 10 min | | Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |-| **Websocket specific quotas** | | | -| Max audio length produced per turn | 10 min | 10 min | -| Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 | -| Max SSML message size per turn | 64 KB | 64 KB | +| Max SSML message size per turn for websocket | 64 KB | 64 KB | #### Custom Neural Voice -| Quota | Free (F0)<sup>3</sup> | Standard (S0) | +| Quota | Free (F0)| Standard (S0) | |--|--|--|-| Max number of transactions per second (TPS) | Not available for F0 | See [General](#general) | +| Max number of transactions per second (TPS) | Not available for F0 | 200 transactions per second (TPS) (default value) | | Max number of datasets | N/A | 500 | | Max number of simultaneous dataset uploads | N/A | 5 | | Max data file size for data import per dataset | N/A | 2 GB | In the following tables, the parameters without the **Adjustable** row aren't ad | File size | 3,000 characters per file | 20,000 characters per file | | Export to audio library | 1 concurrent task | N/A | -<sup>3</sup> For the free (F0) pricing tier, see also the monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/> -<sup>4</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices) and [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling).<br/> -<sup>5</sup> See [additional explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#text-to-speech-increase-concurrent-request-limit).<br/> +### Speaker recognition quotas and limits per resource ++Speaker recognition is limited to 20 transactions per second (TPS). ## Detailed description, quota adjustment, and best practices +Some of the Speech service quotas are adjustable. This section provides additional explanations, best practices, and adjustment instructions. ++The following quotas are adjustable for Standard (S0) resources. The Free (F0) request limits aren't adjustable. ++- Speech-to-text [concurrent request limit](#online-transcription-and-speech-translation) for base model endpoint and custom endpoint +- Text-to-speech [maximum number of transactions per time period](#text-to-speech-quotas-and-limits-per-resource) for prebuilt neural voices and custom neural voices +- Speech translation [concurrent request limit](#online-transcription-and-speech-translation) + Before requesting a quota increase (where applicable), ensure that it's necessary. Speech service uses autoscaling technologies to bring the required computational resources in on-demand mode. At the same time, Speech service tries to keep your costs low by not maintaining an excessive amount of hardware capacity. Let's look at an example. Suppose that your application receives response code 429, which indicates that there are too many requests. Your application receives this response even though your workload is within the limits defined by the [Quotas and limits reference](#quotas-and-limits-reference). The most likely explanation is that Speech service is scaling up to your demand and didn't reach the required scale yet. Therefore the service doesn't immediately have enough resources to serve the request. In most cases, this throttled state is transient. 
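When a transient 429 does occur, a short client-side backoff usually rides out the scale-up. Here's a minimal sketch; the URL, headers, and payload are placeholders rather than a specific Speech API, and you'd tune the retry cap and delays to your workload.

```python
# Minimal retry-with-backoff sketch for transient 429 (throttling) responses.
# The url, headers, and payload are placeholders; adapt them to the REST call you make.
import time
import requests

def call_with_backoff(url, headers, payload, max_retries=5):
    delay = 1.0
    response = None
    for attempt in range(max_retries + 1):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response  # success, or a non-throttling error to handle elsewhere
        # Honor Retry-After if the service sends it; otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        wait = float(retry_after) if retry_after else delay
        time.sleep(wait)
        delay = min(delay * 2, 30.0)
    return response  # still throttled after max_retries attempts
```
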
The next sections describe specific cases of adjusting quotas. ### Speech-to-text: increase online transcription concurrent request limit -By default, the number of concurrent requests is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling. +By default, the number of concurrent speech-to-text [online transcription requests and speech translation requests](#online-transcription-and-speech-translation) combined is limited to 100 per resource in the base model, and 100 per custom endpoint in the custom model. For the standard pricing tier, you can increase this amount. Before submitting the request, ensure that you're familiar with the material discussed earlier in this article, such as the best practices to mitigate throttling. >[!NOTE]-> If you use custom models, be aware that one Speech service resource might be associated with many custom endpoints hosting many custom model deployments. Each custom endpoint has the default limit of concurrent requests (100) set by creation. If you need to adjust it, you need to make the adjustment of each custom endpoint *separately*. Note also that the value of the limit of concurrent requests for the base model of a resource has *no* effect to the custom endpoints associated with this resource. --Increasing the limit of concurrent requests doesn't directly affect your costs. Speech service uses a payment model that requires that you pay only for what you use. The limit defines how high the service can scale before it starts throttle your requests. +> Concurrent request limits for base and custom models need to be adjusted separately. You can have a Speech service resource that's associated with many custom endpoints hosting many custom model deployments. As needed, the limit adjustments per custom endpoint must be requested separately. -Concurrent request limits for base and custom models need to be adjusted separately. +Increasing the limit of concurrent requests doesn't directly affect your costs. The Speech service uses a payment model that requires that you pay only for what you use. The limit defines how high the service can scale before it starts to throttle your requests. You aren't able to see the existing value of the concurrent request limit parameter in the Azure portal, the command-line tools, or API requests. To verify the existing value, create an Azure support request. |
cognitive-services | Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md | Below is a sample command to set file/directory ownership. ```bash sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ... ```+ ## Usage records When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST endpoint to generate a report about service usage. |
cognitive-services | Content Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md | keywords: # Content filtering -Azure OpenAI Service includes a content management system that works alongside core models to filter content. This system works by running both the input prompt and generated content through an ensemble of classification models aimed at detecting misuse. If the system identifies harmful content, you'll receive either an error on the API call if the prompt was deemed inappropriate or the finish_reason on the response will be `content_filter` to signify that some of the generation was filtered. -->[!NOTE] ->This content filtering system is temporarily turned off while we work on some improvements. The internal system is still annotating harmful content but the models will not block. Content filtering will be reactivated with the release of upcoming updates. If you would like to enable the content filters at any point before that, please open an [Azure support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). --You can generate content with the completions API using many different configurations that will alter the filtering behavior you should expect. The following section aims to enumerate all of these scenarios for you to appropriately design your solution. +Azure OpenAI Service includes a content management system that works alongside core models to filter content. This system works by running both the input prompt and generated content through an ensemble of classification models aimed at detecting misuse. If the system identifies harmful content, you'll either receive an error on the API call (if the prompt was deemed inappropriate), or the `finish_reason` on the response will be `content_filter` to signify that some of the generation was filtered. You can generate content with the completions API using many different configurations that will alter the filtering behavior you should expect. The following section aims to enumerate all of these scenarios for you to appropriately design your solution. To ensure you have properly mitigated risks in your application, you should evaluate all potential harms carefully, follow guidance in the [Transparency Note](https://go.microsoft.com/fwlink/?linkid=2200003) and add scenario-specific mitigation as needed. |
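As a practical companion to the article above, the following sketch shows how a client can detect that a generation was filtered by checking `finish_reason` on a completions response. The resource name, deployment name, and `api-version` value are placeholders and assumptions; take the exact values from your own resource and the current API reference.

```python
# Hedged sketch: check finish_reason on an Azure OpenAI completions response.
# Resource, deployment, key, and api-version are placeholders/assumptions.
import requests

resource = "YOUR-RESOURCE-NAME"        # placeholder
deployment = "YOUR-DEPLOYMENT-NAME"    # placeholder
api_version = "2022-12-01"             # assumption; check the API reference for the current version

url = f"https://{resource}.openai.azure.com/openai/deployments/{deployment}/completions"
headers = {"api-key": "YOUR-API-KEY", "Content-Type": "application/json"}
body = {"prompt": "Write a short product description.", "max_tokens": 100}

response = requests.post(url, params={"api-version": api_version}, headers=headers, json=body)

if not response.ok:
    # An inappropriate prompt surfaces as an error on the API call itself.
    print("Request failed:", response.status_code, response.text)
else:
    choice = response.json()["choices"][0]
    if choice["finish_reason"] == "content_filter":
        print("Some of the generated content was filtered.")
    else:
        print(choice["text"])
```

A `finish_reason` of `content_filter` means part of the generation was suppressed, so the application should handle that case explicitly rather than showing truncated output as-is.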
cognitive-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md | Title: Azure OpenAI Service models -description: Learn about the different models that are available in Azure OpenAI. +description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 06/24/2022 Last updated : 02/13/2023 keywords: # Azure OpenAI Service models -The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Please refer to the capability table at the bottom for a full breakdown. +Azure OpenAI provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Refer to the [model capability table](#model-capabilities) in this article for a full breakdown. | Model family | Description | |--|--| The service provides access to many different models, grouped by family and capa ## Model capabilities -Each model family has a series of models that are further distinguished by capability. These capabilities are typically identified by names, and the alphabetical order of these names generally signifies the relative capability and cost of that model within a given model family. For example, GPT-3 models use names such as Ada, Babbage, Curie, and Davinci to indicate relative capability and cost. Davinci is more capable (at a higher cost) than Curie, which in turn is more capable (at a higher cost) than Babbage, and so on. +Each model family has a series of models that are further distinguished by capability. These capabilities are typically identified by names, and the alphabetical order of these names generally signifies the relative capability and cost of that model within a given model family. For example, GPT-3 models use names such as Ada, Babbage, Curie, and Davinci to indicate relative capability and cost. Davinci is more capable and more expensive than Curie, which in turn is more capable and more expensive than Babbage, and so on. > [!NOTE] > Any task that can be performed by a less capable model like Ada can be performed by a more capable model like Curie or Davinci. ## Naming convention -Azure OpenAI's model names typically correspond to the following standard naming convention: +Azure OpenAI model names typically correspond to the following standard naming convention: `{family}-{capability}[-{input-type}]-{identifier}` Azure OpenAI's model names typically correspond to the following standard naming For example, our most powerful GPT-3 model is called `text-davinci-003`, while our most powerful Codex model is called `code-davinci-002`. -> Older versions of the GPT-3 models are available, named `ada`, `babbage`, `curie`, and `davinci`. These older models do not follow the standard naming conventions, and they are primarily intended for fine tuning. For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md). +> The older versions of GPT-3 models named `ada`, `babbage`, `curie`, and `davinci` that don't follow the standard naming convention are primarily intended for fine tuning. 
For more information, see [Learn how to customize a model for your application](../how-to/fine-tuning.md). ## Finding what models are available -You can easily see the models you have available for both inference and fine-tuning in your resource by using the [Models API](/rest/api/cognitiveservices/azureopenaistable/models/list). +You can get a list of models that are available for both inference and fine-tuning by your Azure OpenAI resource by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list). ## Finding the right model -We recommend starting with the most capable model in a model family because it's the best way to understand what the service is capable of. After you have an idea of what you want to accomplish, you can either stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities. +We recommend starting with the most capable model in a model family to confirm whether the model capabilities meet your requirements. Then you can stay with that model or move to a model with lower capability and cost, optimizing around that model's capabilities. ## GPT-3 models -The GPT-3 models can understand and generate natural language. The service offers four model capabilities, each with different levels of power and speed suitable for different tasks. Davinci is the most capable model, while Ada is the fastest. The following list represents the latest versions of GPT-3 models, ordered by increasing capability. +The GPT-3 models can understand and generate natural language. The service offers four model capabilities, each with different levels of power and speed suitable for different tasks. Davinci is the most capable model, while Ada is the fastest. In the order of greater to lesser capability, the models are: -- `text-ada-001`-- `text-babbage-001`-- `text-curie-001` - `text-davinci-003`+- `text-curie-001` +- `text-babbage-001` +- `text-ada-001` -While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, because it will produce the best results and validate the value our service can provide. Once you have a prototype working, you can then optimize your model choice with the best latency/performance balance for your application. +While Davinci is the most capable, the other models provide significant speed advantages. Our recommendation is for users to start with Davinci while experimenting, because it produces the best results and validates the value that Azure OpenAI can provide. Once you have a prototype working, you can then optimize your model choice with the best latency/performance balance for your application. ### <a id="gpt-3-davinci"></a>Davinci Ada is usually the fastest model and can perform tasks like parsing text, addres The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub. -They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. The following list represents the latest versions of Codex models, ordered by increasing capability. +They're most capable in Python and proficient in over a dozen languages, including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and Shell. 
In the order of greater to lesser capability, the Codex models are: -- `code-cushman-001` - `code-davinci-002`+- `code-cushman-001` ### <a id="codex-davinci"></a>Davinci -Similar to GPT-3, Davinci is the most capable Codex model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, Davinci produces the best results. These increased capabilities require more compute resources, so Davinci costs more and isn't as fast as other models. +Similar to GPT-3, Davinci is the most capable Codex model and can perform any task the other models can perform, often with less instruction. For applications requiring deep understanding of the content, Davinci produces the best results. Greater capabilities require more compute resources, so Davinci costs more and isn't as fast as other models. ### Cushman Similar to text search embedding models, there are two input types supported by ||| | Code search and relevance | `code-search-ada-code-001` <br> `code-search-ada-text-001` <br> `code-search-babbage-code-001` <br> `code-search-babbage-text-001` | -When using our Embeddings models, keep in mind their limitations and risks. +When using our embeddings models, keep in mind their limitations and risks. ## Model Summary table and region availability ### GPT-3 Models-| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | +| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | | | | | | |-| Ada | Yes | No | N/A | East US, South Central US, West Europe | -| Text-Ada-001 | Yes | No | East US, South Central US, West Europe | N/A | -| Babbage | Yes | No | N/A | East US, South Central US, West Europe | -| Text-Babbage-001 | Yes | No | East US, South Central US, West Europe | N/A | -| Curie | Yes | No | N/A | East US, South Central US, West Europe | -| Text-curie-001 | Yes | No | East US, South Central US, West Europe | N/A | -| Davinci* | Yes | No | N/A | East US, South Central US, West Europe | -| Text-davinci-001 | Yes | No | South Central US, West Europe | N/A | -| Text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A | -| Text-davinci-003 | Yes | No | East US | N/A | -| Text-davinci-fine-tune-002* | Yes | No | N/A | East US, West Europe | --\*Models available by request only. We are currently unable to onboard new customers at this time. +| ada | Yes | No | N/A | East US, South Central US, West Europe | +| text-ada-001 | Yes | No | East US, South Central US, West Europe | N/A | +| babbage | Yes | No | N/A | East US, South Central US, West Europe | +| text-babbage-001 | Yes | No | East US, South Central US, West Europe | N/A | +| curie | Yes | No | N/A | East US, South Central US, West Europe | +| text-curie-001 | Yes | No | East US, South Central US, West Europe | N/A | +| davinci<sup>1</sup> | Yes | No | N/A | East US, South Central US, West Europe | +| text-davinci-001 | Yes | No | South Central US, West Europe | N/A | +| text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A | +| text-davinci-003 | Yes | No | East US | N/A | +| text-davinci-fine-tune-002<sup>1</sup> | Yes | No | N/A | East US, West Europe | ++<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model. 
### Codex Models-| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | +| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | | | | | | |-| Code-Cushman-001* | Yes | No | South Central US, West Europe | East US, South Central US, West Europe | -| Code-Davinci-002 | Yes | No | East US, West Europe | N/A | -| Code-Davinci-Fine-tune-002* | Yes | No | N/A | East US, West Europe | --\*Models available for Fine-tuning by request only. We are currently unable to enable new cusetomers at this time. -+| code-cushman-001<sup>2</sup> | Yes | No | South Central US, West Europe | East US, South Central US, West Europe | +| code-davinci-002 | Yes | No | East US, West Europe | N/A | +| code-davinci-fine-tune-002<sup>2</sup> | Yes | No | N/A | East US, West Europe | +<sup>2</sup> The model is available for fine-tuning by request only. Currently we aren't accepting new requests to fine-tune the model. ### Embeddings Models-| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | +| Model ID | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions | | | | | | | | text-ada-embeddings-002 | No | Yes | East US, South Central US, West Europe | N/A | | text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A | When using our Embeddings models, keep in mind their limitations and risks. | code-search-babbage-code-001 | No | Yes | South Central US, West Europe | N/A | | code-search-babbage-text-001 | No | Yes | South Central US, West Europe | N/A | - ## Next steps -[Learn more about Azure OpenAI](../overview.md). +[Learn more about Azure OpenAI](../overview.md) |
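As a hedged companion to the Models article changes above, the sketch below calls the Models List REST endpoint to enumerate which model IDs a resource exposes. The resource name, key variable, and `api-version` value are assumptions; check the API reference linked in the article for the version your resource supports.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListModels {
    public static void main(String[] args) throws Exception {
        // Placeholder resource endpoint, key source, and API version.
        String endpoint = "https://YOUR-RESOURCE.openai.azure.com";
        String apiKey = System.getenv("AZURE_OPENAI_KEY");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint + "/openai/models?api-version=2022-12-01"))
                .header("api-key", apiKey)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response is a JSON list of model IDs (for example, text-davinci-003)
        // together with their inference and fine-tune capabilities for this resource.
        System.out.println(response.body());
    }
}
```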
communication-services | Teams User Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md | |
communication-services | Meeting Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md | |
communication-services | Phone Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md | |
communication-services | Teams Interop Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing/teams-interop-pricing.md | |
communication-services | Manage Calls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/cte-calling-sdk/manage-calls.md | description: Use Azure Communication Services SDKs to manage calls for Teams use -+ Last updated 12/01/2021 |
communication-services | Meeting Interop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/meeting-interop.md | |
communication-services | Access Token Teams External Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/identity/access-token-teams-external-users.md | |
communication-services | Manage Teams Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md | |
communication-services | Get Started Teams Interop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-interop.md | |
communication-services | Get Started With Voice Video Calling Custom Teams Client | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md | |
communication-services | Virtual Visits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md | |
communications-gateway | Interoperability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability.md | Azure Communications Gateway provides all the features of a traditional session - Defending against Denial of Service attacks and other malicious traffic - Ensuring Quality of Service -Azure Communications Gateway also offers dashboards that you can use to monitor key metrics of your deployment. +Azure Communications Gateway also offers metrics for monitoring your deployment. You must provide the networking connection between Azure Communications Gateway and your core networks. For Teams Phone Mobile, you must also provide a network element that can route calls into the Microsoft Phone System for call anchoring. For full details of the media interworking features available in Azure Communica ## Compatibility with monitoring requirements -The Azure Communications Gateway service includes continuous monitoring for potential faults in your deployment. The metrics we monitor cover all metrics required to be monitored by Operators as part of the Operator Connect program and include: +The Azure Communications Gateway service includes continuous monitoring for potential faults in your deployment. The metrics we monitor cover all metrics that operators must monitor as part of the Operator Connect program and include: - Call quality - Call errors and unusual behavior (for example, call setup failures, short calls, or unusual disconnections) |
cosmos-db | Analytical Store Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md | WITH (num varchar(100)) AS [IntToFloat] * Spark pools in Azure Synapse will represent these columns as `undefined`. * SQL serverless pools in Azure Synapse will represent these columns as `NULL`. -##### Representation challenges Workaround +##### Representation challenges workarounds -Currently the base schema can't be reset and It is possible that an old document, with an incorrect schema, was used to create that base schema. To delete or update the problematic documents won't help. The possible solutions are: +It is possible that an old document, with an incorrect schema, was used to create your container's analytical store base schema. Based on all the rules presented above, you may be receiving `NULL` for certain properties when querying your analytical store using Azure Synapse Link. Deleting or updating the problematic documents won't help because base schema reset isn't currently supported. The possible solutions are: * To migrate the data to a new container, making sure that all documents have the correct schema.- * To abandon the property with the wrong schema and add a new one, with another name, that has the correct datatypes. Example: You have billions of documents in the **Orders** container where the **status** property is a string. But the first document in that container has **status** defined with integer. So, one document will have **status** correctly represented and all other documents will have **NULL**. You can add the **status2** property to all documents and start to use it, instead of the original property. + * To abandon the property with the wrong schema and add a new one, with another name, that has the correct schema in all documents. Example: You have billions of documents in the **Orders** container where the **status** property is a string. But the first document in that container has **status** defined as an integer. So, one document will have **status** correctly represented and all other documents will have `NULL`. You can add the **status2** property to all documents and start to use it, instead of the original property. #### Full fidelity schema representation |
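The **status2** workaround described in the analytical store changes above can be scripted against the transactional store. The following is a rough sketch only: the account endpoint, `COSMOS_KEY` variable, `SalesDb` database, and `Orders` container are assumed names, and it presumes the corrected property should carry the string form of the old value.

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class BackfillStatus2 {
    public static void main(String[] args) {
        // Placeholder endpoint, key source, database, and container names.
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://YOUR-ACCOUNT.documents.azure.com:443/")
                .key(System.getenv("COSMOS_KEY"))
                .buildClient();
        CosmosContainer orders = client.getDatabase("SalesDb").getContainer("Orders");

        // Copy the old property into a new one with a consistent (string) type so the
        // analytical store infers a clean schema for status2 from these writes.
        orders.queryItems("SELECT * FROM c", new CosmosQueryRequestOptions(), ObjectNode.class)
                .forEach(doc -> {
                    if (doc.has("status")) {
                        doc.put("status2", doc.get("status").asText());
                        orders.upsertItem(doc);
                    }
                });

        client.close();
    }
}
```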
cosmos-db | High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md | Given the internal Azure Cosmos DB architecture, using multiple write regions do When an Azure Cosmos DB account is configured with multi-region writes, one of the regions will act as an arbiter in case of write conflicts. When such conflicts happen, they're routed to this region for consistent resolution. +#### Best practices for multi-region writes ++Here are some best practices to consider when writing to multiple regions. ++#### Keep local traffic local ++When you use multi-region writes, the application should issue read and write traffic originating in the local region, strictly to the local Cosmos DB region. You must avoid cross-region calls for optimal performance. ++It's important for the application to minimize conflicts by avoiding the following anti-patterns: +* Sending the same write operation to all regions to hedge bets on response times from the fastest region. ++* Randomly determining the target region for a read or write operation on a per request basis. ++* Using a Round Robin policy to determine the target region for a read or write operation on a per request basis. ++#### Avoid dependency on replication lag +Multi-region write accounts can't be configured for Strong Consistency. Thus, the region being written to responds immediately after replicating the data locally while asynchronously replicating the data globally. ++While infrequent, a replication lag may occur on one or a few partitions when geo-replicating data. Replication lag can occur due to rare blips in network traffic or higher than usual rates of conflict resolution. ++For instance, an architecture in which the application writes to Region A but reads from Region B introduces a dependency on replication lag between the two regions. However, if the application reads and writes to the same region, performance remains constant even in the presence of replication lag. ++#### Session Consistency Usage for Write operations +In Session Consistency, the session token is used for both read and write operations. ++For read operations, the cached session token is sent to the server with a guarantee of receiving data corresponding to the specified (or a more recent) session token. ++For write operations, the session token is sent to the database with a guarantee of persisting the data only if the server has caught up to the session token provided. In single-region write accounts, the write region is always guaranteed to have caught up to the session token. However, in multi-region write accounts, the region you write to may not have caught up to writes issued to another region. If the client writes to Region A with a session token from Region B, Region A won't be able to persist the data until it has caught up to changes made in Region B. ++It's best to use session tokens only for read operations and not for write operations when passing session tokens between client instances. ++#### Rapid updates to the same document +The server's updates to resolve or confirm the absence of conflicts can collide with writes triggered by the application when the same document is repeatedly updated. Repeated updates in rapid succession to the same document experience higher latencies during conflict resolution. 
While occasional bursts in repeated updates to the same document are inevitable, it would be worth exploring an architecture where new documents are created instead if steady state traffic sees rapid updates to the same document over an extended period. + ### What to expect during a region outage Client of single-region accounts will experience loss of read and write availability until service is restored. -Multi-region accounts will experience different behaviors depending on the following table. +Multi-region accounts experience different behaviors depending on the following table. | Configuration | Outage | Availability impact | Durability impact| What to do | | -- | -- | -- | -- | -- |-| Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except 2 regions with strong consistency which loses write availability until the service is restored or, if **service-managed failover** is enabled, the region is marked as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. | -| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-managed failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover** clients will experience write availability loss until the services manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using API for NoSQLs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). | -| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Azure Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for API for NoSQL accounts, and Last Write Wins for accounts using other APIs. 
| +| Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except 2 regions with strong consistency, which loses write availability until restoration of the service or, if you enable **service-managed failover**, the service marks the region as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> When the outage is over, readjust provisioned RUs as appropriate. | +| Single write region | Write region outage | Clients will redirect reads to other regions. <br/> **Without service-managed failover**, clients experience write availability loss, until restoration of write availability occurs automatically when the outage ends. <br/> **With service-managed failover**, clients experience write availability loss until the service manages a failover to a new write region selected according to your preferences. | If you haven't selected the strong consistency level, the service may not replicate some data to the remaining active regions. This replication depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, you could lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> *Don't* trigger a manual failover during the outage, as it can't succeed. <br/> When the outage is over, readjust provisioned RUs as appropriate. Accounts using the API for NoSQL may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). | +| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, you may lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/> When the outage is over, you may readjust provisioned RUs as appropriate. If possible, Azure Cosmos DB automatically recovers non-replicated data in the failed region. This automatic recovery uses the configured conflict resolution method for API for NoSQL accounts. For accounts using other APIs, this automatic recovery uses *Last Write Wins*. | ### Additional information on read region outages Multi-region accounts will experience different behaviors depending on the follo * If none of the regions in the preferred region list is available, calls automatically fall back to the current write region. -* No changes are required in your application code to handle read region outage. When the impacted read region is back online it will automatically sync with the current write region and will be available again to serve read requests. +* No changes are required in your application code to handle read region outage. 
When the impacted read region is back online, it will automatically sync with the current write region and will be available again to serve read requests. * Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, read consistency guarantees continue to be honored by Azure Cosmos DB. Multi-region accounts will experience different behaviors depending on the follo ### Additional information on write region outages -* During a write region outage, the Azure Cosmos DB account will automatically promote a secondary region to be the new primary write region when **automatic (service-managed) failover** is configured on the Azure Cosmos DB account. The failover will occur to another region in the order of region priority you've specified. +* During a write region outage, the Azure Cosmos DB account will automatically promote a secondary region to be the new primary write region when **automatic (service-managed) failover** is configured on the Azure Cosmos DB account. The failover occurs to another region in the order of region priority you've specified. -* Note that manual failover shouldn't be triggered and will not succeed in presence of an outage of the source or destination region. This is because of a consistency check required by the failover procedure which requires connectivity between the regions. +* Manual failover shouldn't be triggered and will not succeed in the presence of an outage of the source or destination region. This is because of a consistency check required by the failover procedure, which requires connectivity between the regions. * When the previously impacted region is back online, any write data that wasn't replicated when the region failed, is made available through the [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflicts feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos DB container as appropriate. The following table summarizes the high availability capability of various accou |Read availability SLA | 99.99% | 99.995% | 99.999% | 99.999% | 99.999% | |Zone failures – data loss | Data loss | No data loss | No data loss | No data loss | No data loss | |Zone failures – availability | Availability loss | No availability loss | No availability loss | No availability loss | No availability loss |-|Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. | Dependent on consistency level. See [Consistency, availability, and performance tradeoffs](./consistency-levels.md) for more information. +|Regional outage – data loss | Data loss | Data loss | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md). | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md). | Dependent on consistency level. For more information, see [Consistency, availability, and performance tradeoffs](./consistency-levels.md). 
|Regional outage – availability | Availability loss | Availability loss | No availability loss for read region failure, temporary for write region failure | No availability loss for read region failure, temporary for write region failure | No read availability loss, temporary write availability loss in the affected region | |Price (***1***) | N/A | Provisioned RU/s x 1.25 rate | Provisioned RU/s x n regions | Provisioned RU/s x 1.25 rate x n regions (***2***) | Multi-region write rate x n regions | Multi-region accounts will experience different behaviors depending on the follo | Write regions | Service-Managed failover | What to expect | What to do | | -- | -- | -- | -- |-| Single write region | Not enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Azure Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. | -| Single write region | Enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss until Azure Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using API for NoSQLs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). | -| Multiple write regions | Not applicable | Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. 
| During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Azure Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for API for NoSQL accounts, and Last Write Wins for accounts using other APIs. | +| Single write region | Not enabled | If there was an outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can affect write availability if fewer than two read regions remain.<br/> If there was an outage in the write region, clients experience write availability loss. If you haven't selected the strong consistency level, the service may not replicate some data to the remaining active regions. This replication depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, you may lose unreplicated data. <br/> Azure Cosmos DB restores write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> *Don't* trigger a manual failover during the outage, as it can't succeed. <br/> When the outage is over, readjust provisioned RUs as appropriate. | +| Single write region | Enabled | If there was an outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can affect write availability if fewer than two read regions remain.<br/> If there was an outage in the write region, clients experience write availability loss until Azure Cosmos DB automatically elects a new region as the new write region according to your preferences. If you haven't selected the strong consistency level, the service may not replicate some data to the remaining active regions. This replication depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, you may lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <br/> *Don't* trigger a manual failover during the outage, as it can't succeed. <br/> When the outage is over, you may move the write region back to the original region, and readjust provisioned RUs as appropriate. Accounts using the API for NoSQL may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). | +| Multiple write regions | Not applicable | Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15 mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, you may lose unreplicated data. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support more traffic. <br/> When the outage is over, you may readjust provisioned RUs as appropriate. 
If possible, Azure Cosmos DB automatically recovers non-replicated data in the failed region. This automatic recovery uses the configured conflict resolution method for API for NoSQL accounts. For accounts using other APIs, this automatic recovery uses *Last Write Wins*. | ## Next steps |
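To make the "keep local traffic local" guidance in the high-availability changes above concrete, here is a minimal sketch of client configuration with the Azure Cosmos DB Java SDK v4. The endpoint, key variable, and region name are assumptions; each application instance would list its own deployment region first in the preferred regions.

```java
import com.azure.cosmos.ConsistencyLevel;
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosClientBuilder;
import java.util.Arrays;

public class LocalRegionClient {
    public static CosmosAsyncClient create() {
        // Placeholder endpoint and key. Listing the local region first keeps reads
        // and writes in the region where this instance runs, avoiding cross-region calls.
        return new CosmosClientBuilder()
                .endpoint("https://YOUR-ACCOUNT.documents.azure.com:443/")
                .key(System.getenv("COSMOS_KEY"))
                .preferredRegions(Arrays.asList("West US 2"))
                .multipleWriteRegionsEnabled(true)
                .consistencyLevel(ConsistencyLevel.SESSION)
                .buildAsyncClient();
    }
}
```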
cosmos-db | Performance Tips Query Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md | To execute a query, a query plan needs to be built. This in general represents a ### Use Query Plan caching -The query plan, for a query scoped to a single partition, is cached on the client. This eliminates the need to make a call to the gateway to retrieve the query plan after the first call. The key for the cached query plan is the SQL query string. You need to **make sure the query is [parametrized](query/parameterized-queries.md)**. If not, the query plan cache lookup will often be a cache miss as the query string is unlikely to be identical across calls. Query plan caching is **enabled by default for Java SDK version 4.20.0 and above** and **for Spring Datan Azure Cosmos DB SDK version 3.13.0 and above**. +The query plan, for a query scoped to a single partition, is cached on the client. This eliminates the need to make a call to the gateway to retrieve the query plan after the first call. The key for the cached query plan is the SQL query string. You need to **make sure the query is [parametrized](query/parameterized-queries.md)**. If not, the query plan cache lookup will often be a cache miss as the query string is unlikely to be identical across calls. Query plan caching is **enabled by default for Java SDK version 4.20.0 and above** and **for Spring Data Azure Cosmos DB SDK version 3.13.0 and above**. ### Use parametrized single partition queries |
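Query plan caching described above only pays off when the query text stays constant and the query targets a single partition. The sketch below shows one way to express such a query with the Java SDK v4; the container shape, property names, and the choice of `category` as the partition key are assumptions, not details from the article.

```java
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import com.azure.cosmos.models.SqlParameter;
import com.azure.cosmos.models.SqlQuerySpec;
import com.azure.cosmos.util.CosmosPagedFlux;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.util.Collections;

public class ParameterizedQuery {
    public static CosmosPagedFlux<ObjectNode> findByCategory(
            CosmosAsyncContainer container, String category) {
        // Keeping the SQL text constant and passing values as parameters lets the
        // client reuse the cached query plan across calls.
        SqlQuerySpec spec = new SqlQuerySpec(
                "SELECT * FROM c WHERE c.category = @category",
                Collections.singletonList(new SqlParameter("@category", category)));

        // Scoping the query to a single partition avoids a gateway call for the plan.
        CosmosQueryRequestOptions options = new CosmosQueryRequestOptions()
                .setPartitionKey(new PartitionKey(category));

        return container.queryItems(spec, options, ObjectNode.class);
    }
}
```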
cosmos-db | Quickstart Java Spring Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java-spring-data.md | Title: Quickstart - Use Spring Datan Azure Cosmos DB v3 to create a document database using Azure Cosmos DB -description: This quickstart presents a Spring Datan Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB for NoSQL + Title: Quickstart - Use Spring Data Azure Cosmos DB v3 to create a document database using Azure Cosmos DB +description: This quickstart presents a Spring Data Azure Cosmos DB v3 code sample you can use to connect to and query the Azure Cosmos DB for NoSQL ms.devlang: java Previously updated : 08/26/2021 Last updated : 02/22/2023 -# Quickstart: Build a Spring Datan Azure Cosmos DB v3 app to manage Azure Cosmos DB for NoSQL data +# Quickstart: Build a Spring Data Azure Cosmos DB v3 app to manage Azure Cosmos DB for NoSQL data + [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!div class="op_single_selector"]-> -In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and by using a Spring Datan Azure Cosmos DB v3 app cloned from GitHub. First, you create an Azure Cosmos DB for NoSQL account using the Azure portal or without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb), then create a Spring Boot app using the Spring Datan Azure Cosmos DB v3 connector, and then add resources to your Azure Cosmos DB account by using the Spring Boot application. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. +In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account from the Azure portal, and by using a Spring Data Azure Cosmos DB v3 app cloned from GitHub. ++First, you create an Azure Cosmos DB for NoSQL account using the Azure portal. Alternately, without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). You can then create a Spring Boot app using the Spring Data Azure Cosmos DB v3 connector, and then add resources to your Azure Cosmos DB account by using the Spring Boot application. ++Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities. -> [!IMPORTANT] -> These release notes are for version 3 of Spring Datan Azure Cosmos DB. You can find [release notes for version 2 here](sdk-java-spring-data-v2.md). +> [!IMPORTANT] +> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find release notes for version 2 at [Spring Data Azure Cosmos DB v2 for API for NoSQL (legacy): Release notes and resources](sdk-java-spring-data-v2.md). >-> Spring Datan Azure Cosmos DB supports only the API for NoSQL. +> Spring Data Azure Cosmos DB supports only the API for NoSQL. 
+> +> See the following articles for information about Spring Data on other Azure Cosmos DB APIs: >-> See these articles for information about Spring Data on other Azure Cosmos DB APIs: > * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db) > * [Spring Data MongoDB with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-mongodb-with-cosmos-db)-> ## Prerequisites -- An Azure account with an active subscription.- - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. -- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.-- A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven.-- [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git.+* An Azure account with an active subscription. + * No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required. +* [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Set the `JAVA_HOME` environment variable to the JDK install folder. +* A [Maven binary archive](https://maven.apache.org/download.cgi). On Ubuntu, run `apt-get install maven` to install Maven. +* [Git](https://www.git-scm.com/downloads). On Ubuntu, run `sudo apt-get install git` to install Git. ## Introductory notes -*The structure of an Azure Cosmos DB account.* Irrespective of API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the diagram below: +*The structure of an Azure Cosmos DB account.* Irrespective of API or programming language, an Azure Cosmos DB *account* contains zero or more *databases*, a *database* (DB) contains zero or more *containers*, and a *container* contains zero or more items, as shown in the following diagram: :::image type="content" source="../media/account-databases-containers-items/cosmos-entities.png" alt-text="Azure Cosmos DB account entities" border="false"::: -You may read more about databases, containers and items [here.](../account-databases-containers-items.md) A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*. +For more information about databases, containers, and items, see [Azure Cosmos DB resource model](../account-databases-containers-items.md). A few important properties are defined at the level of the container, among them *provisioned throughput* and *partition key*. ++The provisioned throughput is measured in Request Units (*RUs*) which have a monetary price and are a substantial determining factor in the operating cost of the account. You can select provisioned throughput at per-container granularity or per-database granularity. However, you should prefer container-level throughput specification. For more information, see [Introduction to provisioned throughput in Azure Cosmos DB](../set-throughput.md). 
-The provisioned throughput is measured in Request Units (*RUs*) which have a monetary price and are a substantial determining factor in the operating cost of the account. Provisioned throughput can be selected at per-container granularity or per-database granularity, however container-level throughput specification is typically preferred. You may read more about throughput provisioning [here.](../set-throughput.md) +As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*. You must choose one field in your documents to be the partition key, which maps each document to a partition. -As items are inserted into an Azure Cosmos DB container, the database grows horizontally by adding more storage and compute to handle requests. Storage and compute capacity are added in discrete units known as *partitions*, and you must choose one field in your documents to be the partition key which maps each document to a partition. The way partitions are managed is that each partition is assigned a roughly equal slice out of the range of partition key values; therefore you are advised to choose a partition key which is relatively random or evenly-distributed. Otherwise, some partitions will see substantially more requests (*hot partition*) while other partitions see substantially fewer requests (*cold partition*), and this is to be avoided. You may learn more about partitioning [here](../partitioning-overview.md). +The way partitions are managed is that each partition is assigned a roughly equal slice out of the range of partition key values. For this reason, you should choose a partition key that's relatively random or evenly distributed. Otherwise, you get *hot partitions* and *cold partitions*, which see substantially more or fewer requests. For information on avoiding this condition, see [Partitioning and horizontal scaling in Azure Cosmos DB](../partitioning-overview.md). ## Create a database account -Before you can create a document database, you need to create a API for NoSQL account with Azure Cosmos DB. +Before you can create a document database, you need to create an API for NoSQL account with Azure Cosmos DB. [!INCLUDE [cosmos-db-create-dbaccount](../includes/cosmos-db-create-dbaccount.md)] Before you can create a document database, you need to create a API for NoSQL ac ## Clone the sample application -Now let's switch to working with code. Let's clone a API for NoSQL app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically. +Now let's switch to working with code. Let's clone an API for NoSQL app from GitHub, set the connection string, and run it. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer. ```bash-git clone https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started.git +git clone https://github.com/Azure-Samples/azure-spring-boot-samples.git ``` ## Review the code -This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Run the app -](#run-the-app). +This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. 
Otherwise, you can skip ahead to [Run the app](#run-the-app). ++### [Passwordless (Recommended)](#tab/passwordless) ++In this section, the configurations and the code don't have any authentication operations. However, connecting to Azure services requires authentication. To complete the authentication, you need to use Azure Identity. Spring Cloud Azure uses `DefaultAzureCredential`, which Azure Identity provides to help you get credentials without any code changes. ++`DefaultAzureCredential` supports multiple authentication methods and determines which method to use at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. For more information, see the [Default Azure credential](/azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential) section of [Authenticate Azure-hosted Java applications](/azure/developer/java/sdk/identity-azure-hosted-auth). +++### Authenticate using DefaultAzureCredential +++You can authenticate to Cosmos DB for NoSQL using `DefaultAzureCredential` by adding the `azure-identity` [dependency](https://mvnrepository.com/artifact/com.azure/azure-identity) to your application. `DefaultAzureCredential` automatically discovers and uses the account you signed in with in the previous step. ### Application configuration file -Here we showcase how Spring Boot and Spring Data enhance user experience - the process of establishing an Azure Cosmos DB client and connecting to Azure Cosmos DB resources is now config rather than code. At application startup Spring Boot handles all of this boilerplate using the settings in **application.properties**: +Configure the Azure Cosmos DB credentials in the *application.yml* configuration file in the *cosmos/spring-cloud-azure-starter-data-cosmos/spring-cloud-azure-data-cosmos-sample* directory. Replace the values of `${AZURE_COSMOS_ENDPOINT}` and `${COSMOS_DATABASE}`. ++```yaml +spring: + cloud: + azure: + cosmos: + endpoint: ${AZURE_COSMOS_ENDPOINT} + database: ${COSMOS_DATABASE} +``` ++After Spring Boot and Spring Data create the Azure Cosmos DB account, database, and container, they connect to the database and container for `delete`, `add`, and `find` operations. ++### [Password](#tab/password) ++### Application configuration file -```xml -cosmos.uri=${ACCOUNT_HOST} -cosmos.key=${ACCOUNT_KEY} -cosmos.secondaryKey=${SECONDARY_ACCOUNT_KEY} +The following section shows how Spring Boot and Spring Data use configuration instead of code to establish an Azure Cosmos DB client and connect to Azure Cosmos DB resources. At application startup Spring Boot handles all of this boilerplate using the following settings in *application.yml*: -dynamic.collection.name=spel-property-collection -# Populate query metrics -cosmos.queryMetricsEnabled=true +```yaml +spring: + cloud: + azure: + cosmos: + key: ${AZURE_COSMOS_KEY} + endpoint: ${AZURE_COSMOS_ENDPOINT} + database: ${COSMOS_DATABASE} ``` -Once you create an Azure Cosmos DB account, database, and container, just fill-in-the-blanks in the config file and Spring Boot/Spring Data will automatically do the following: (1) create an underlying Java SDK `CosmosClient` instance with the URI and key, and (2) connect to the database and container. 
You're all set - **no more resource management code!** +Once you create an Azure Cosmos DB account, database, and container, just fill-in-the-blanks in the config file and Spring Boot/Spring Data does the following: (1) creates an underlying Java SDK `CosmosClient` instance with the URI and key, and (2) connects to the database and container. You're all set - no more resource management code! ++ ### Java source -The Spring Data value-add also comes from its simple, clean, standardized and platform-independent interface for operating on datastores. Building on the Spring Data GitHub sample linked above, below are CRUD and query samples for manipulating Azure Cosmos DB documents with Spring Datan Azure Cosmos DB. +Spring Data provides a simple, clean, standardized, and platform-independent interface for operating on datastores, as shown in the following examples. These CRUD and query examples enable you to manipulate Azure Cosmos DB documents by using Spring Data Azure Cosmos DB. These examples build on the Spring Data GitHub sample linked to earlier in this article. * Item creation and updates by using the `save` method. - [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Create)] - -* Point-reads using the derived query method defined in the repository. The `findByIdAndLastName` performs point-reads for `UserRepository`. The fields mentioned in the method name cause Spring Data to execute a point-read defined by the `id` and `lastName` fields: + ```java + // Save the User class to Azure Cosmos DB database. + final Mono<User> saveUserMono = repository.save(testUser); + ``` - [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Read)] +* Point-reads using the derived query method defined in the repository. The `findById` performs point-reads for `repository`. The fields mentioned in the method name cause Spring Data to execute a point-read defined by the `id` field: ++ ```java + // Nothing happens until we subscribe to these Monos. + // findById will not return the user as user is not present. + final Mono<User> findByIdMono = repository.findById(testUser.getId()); + final User findByIdUser = findByIdMono.block(); + Assert.isNull(findByIdUser, "User must be null"); + ``` * Item deletes using `deleteAll`: - [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Delete)] + ```java + repository.deleteAll().block(); + LOGGER.info("Deleted all data in container."); + ``` -* Derived query based on repository method name. Spring Data implements the `UserRepository` `findByFirstName` method as a Java SDK SQL query on the `firstName` field (this query could not be implemented as a point-read): +* Derived query based on repository method name. Spring Data implements the `repository` `findByFirstName` method as a Java SDK SQL query on the `firstName` field. You can't implement this query as a point-read. 
- [!code-java[](~/spring-data-azure-cosmos-db-sql-tutorial/azure-spring-data-cosmos-java-getting-started/src/main/java/com/azure/spring/data/cosmostutorial/SampleApplication.java?name=Query)] + ```java + final Flux<User> firstNameUserFlux = repository.findByFirstName("testFirstName"); + ``` ## Run the app -Now go back to the Azure portal to get your connection string information and launch the app with your endpoint information. This enables your app to communicate with your hosted database. +Now go back to the Azure portal to get your connection string information. Then, use the following steps to launch the app with your endpoint information so your app can communicate with your hosted database. ++1. In the Git terminal window, `cd` to the sample code folder. -1. In the git terminal window, `cd` to the sample code folder. + ```bash + cd azure-spring-boot-samples/cosmos/spring-cloud-azure-starter-data-cosmos/spring-cloud-azure-data-cosmos-sample + ``` - ```bash - cd azure-spring-data-cosmos-java-sql-api-getting-started/azure-spring-data-cosmos-java-getting-started/ - ``` +1. In the Git terminal window, use the following command to install the required Spring Data Azure Cosmos DB packages. -2. In the git terminal window, use the following command to install the required Spring Datan Azure Cosmos DB packages. + ```bash + mvn clean package + ``` - ```bash - mvn clean package - ``` +1. In the Git terminal window, use the following command to start the Spring Data Azure Cosmos DB application: -3. In the git terminal window, use the following command to start the Spring Datan Azure Cosmos DB application: + ```bash + mvn spring-boot:run + ``` - ```bash - mvn spring-boot:run - ``` - -4. The app loads **application.properties** and connects the resources in your Azure Cosmos DB account. -5. The app will perform point CRUD operations described above. -6. The app will perform a derived query. -7. The app doesn't delete your resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account if you want to avoid incurring charges. +1. The app loads *application.yml* and connects the resources in your Azure Cosmos DB account. +1. The app performs point CRUD operations described previously. +1. The app performs a derived query. +1. The app doesn't delete your resources. Switch back to the portal to [clean up the resources](#clean-up-resources) from your account if you want to avoid incurring charges. ## Review SLAs in the Azure portal Now go back to the Azure portal to get your connection string information and la ## Next steps -In this quickstart, you've learned how to create an Azure Cosmos DB for NoSQL account, create a document database and container using the Data Explorer, and run a Spring Data app to do the same thing programmatically. You can now import additional data into your Azure Cosmos DB account. +In this quickstart, you learned how to create an Azure Cosmos DB for NoSQL account and create a document database and container using the Data Explorer. You then ran a Spring Data app to do the same thing programmatically. You can now import more data into your Azure Cosmos DB account. Trying to do capacity planning for a migration to Azure Cosmos DB? 
You can use information about your existing database cluster for capacity planning.-* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) ++* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) |
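The CRUD and derived-query snippets quoted above call `findById` and `findByFirstName`, but the repository interface and entity they are declared on aren't shown in the change details. The following is a minimal, illustrative sketch of how such a reactive repository might be declared with the azure-spring-data-cosmos library; the `User` fields, container name, and partition key are assumptions, not taken from the sample.

```java
import com.azure.spring.data.cosmos.core.mapping.Container;
import com.azure.spring.data.cosmos.core.mapping.PartitionKey;
import com.azure.spring.data.cosmos.repository.ReactiveCosmosRepository;
import org.springframework.data.annotation.Id;
import reactor.core.publisher.Flux;

// Entity mapped to an Azure Cosmos DB container (the container name is illustrative).
@Container(containerName = "users")
class User {
    @Id
    private String id;           // used by findById point-reads
    private String firstName;    // queried by the derived findByFirstName method
    @PartitionKey
    private String lastName;     // assumed partition key for this sketch

    public User() { }

    public String getId() {
        return id;
    }
}

// Spring Data derives the query from the method name, so findByFirstName
// is executed as a SQL query on the firstName field rather than a point-read.
interface UserRepository extends ReactiveCosmosRepository<User, String> {
    Flux<User> findByFirstName(String firstName);
}
```

Because the queries are derived from the method names, no SQL has to be written by hand for either method.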
cosmos-db | Samples Java Spring Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-java-spring-data.md | -# Azure Cosmos DB for NoSQL: Spring Datan Azure Cosmos DB v3 examples +# Azure Cosmos DB for NoSQL: Spring Data Azure Cosmos DB v3 examples [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] > [!div class="op_single_selector"]-> These release notes are for version 3 of Spring Datan Azure Cosmos DB. You can find [release notes for version 2 here](sdk-java-spring-data-v2.md). +> These release notes are for version 3 of Spring Data Azure Cosmos DB. You can find [release notes for version 2 here](sdk-java-spring-data-v2.md). >-> Spring Datan Azure Cosmos DB supports only the API for NoSQL. +> Spring Data Azure Cosmos DB supports only the API for NoSQL. > > See these articles for information about Spring Data on other Azure Cosmos DB APIs: > * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db)-* Links to the tasks in each of the example Spring Datan Azure Cosmos DB project files. +* Links to the tasks in each of the example Spring Data Azure Cosmos DB project files. * Links to the related API reference content. **Prerequisites** The latest sample applications that perform CRUD operations and other common ope You need the following to run this sample application: * Java Development Kit 8-* Spring Datan Azure Cosmos DB v3 +* Spring Data Azure Cosmos DB v3 -You can optionally use Maven to get the latest Spring Datan Azure Cosmos DB v3 binaries for use in your project. Maven automatically adds any necessary dependencies. Otherwise, you can directly download the dependencies listed in the **pom.xml** file and add them to your build path. +You can optionally use Maven to get the latest Spring Data Azure Cosmos DB v3 binaries for use in your project. Maven automatically adds any necessary dependencies. Otherwise, you can directly download the dependencies listed in the **pom.xml** file and add them to your build path. ```bash <dependency> |
cosmos-db | Sdk Java Spring Data V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v2.md | Title: 'Spring Datan Azure Cosmos DB v2 for API for NoSQL release notes and resources' -description: Learn about the Spring Datan Azure Cosmos DB v2 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK. + Title: 'Spring Data Azure Cosmos DB v2 for API for NoSQL release notes and resources' +description: Learn about the Spring Data Azure Cosmos DB v2 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK. -# Spring Datan Azure Cosmos DB v2 for API for NoSQL (legacy): Release notes and resources +# Spring Data Azure Cosmos DB v2 for API for NoSQL (legacy): Release notes and resources [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] [!INCLUDE[SDK selector](../includes/cosmos-db-sdk-list.md)] - Spring Datan Azure Cosmos DB version 2 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Datan Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact. + Spring Data Azure Cosmos DB version 2 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact. > [!WARNING]-> This version of Spring Datan Azure Cosmos DB SDK depends on a retired version of Azure Cosmos DB Java SDK. This Spring Datan Azure Cosmos DB SDK will be announced as retiring in the near future! This is *not* the latest Azure Spring Datan Azure Cosmos DB SDK for Azure Cosmos DB and is outdated. Because of performance issues and instability in Azure Spring Datan Azure Cosmos DB SDK V2, we highly recommend to use [Azure Spring Datan Azure Cosmos DB v3](sdk-java-spring-data-v3.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide to understand the difference in the underlying Java SDK V4. +> This version of Spring Data Azure Cosmos DB SDK depends on a retired version of Azure Cosmos DB Java SDK. This Spring Data Azure Cosmos DB SDK will be announced as retiring in the near future! This is *not* the latest Azure Spring Data Azure Cosmos DB SDK for Azure Cosmos DB and is outdated. Because of performance issues and instability in Azure Spring Data Azure Cosmos DB SDK V2, we highly recommend to use [Azure Spring Data Azure Cosmos DB v3](sdk-java-spring-data-v3.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide to understand the difference in the underlying Java SDK V4. > The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. 
[Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application. -You can use Spring Datan Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/). +You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/). > [!IMPORTANT] -> These release notes are for version 2 of Spring Datan Azure Cosmos DB. You can find [release notes for version 3 here](sdk-java-spring-data-v3.md). +> These release notes are for version 2 of Spring Data Azure Cosmos DB. You can find [release notes for version 3 here](sdk-java-spring-data-v3.md). >-> Spring Datan Azure Cosmos DB supports only the API for NoSQL. +> Spring Data Azure Cosmos DB supports only the API for NoSQL. > > See the following articles for information about Spring Data on other Azure Cosmos DB APIs: > * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db) You can use Spring Datan Azure Cosmos DB in your applications hosted in [Azure S > > Want to get going fast? > 1. Install the [minimum supported Java runtime, JDK 8](/java/azure/jdk/), so you can use the SDK.-> 2. Create a Spring Datan Azure Cosmos DB app by using the [starter](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). It's easy! -> 3. Work through the [Spring Datan Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb), which walks through basic Azure Cosmos DB requests. +> 2. Create a Spring Data Azure Cosmos DB app by using the [starter](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). It's easy! +> 3. Work through the [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb), which walks through basic Azure Cosmos DB requests. > > You can spin up Spring Boot Starter apps fast by using [Spring Initializr](https://start.spring.io/)! 
> You can use Spring Datan Azure Cosmos DB in your applications hosted in [Azure S | Resource | Link | ||| | **SDK download** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure/spring-data-cosmosdb) |-|**API documentation** | [Spring Datan Azure Cosmos DB reference documentation]() | -|**Contribute to the SDK** | [Spring Datan Azure Cosmos DB repo on GitHub](https://github.com/microsoft/spring-data-cosmosdb) | +|**API documentation** | [Spring Data Azure Cosmos DB reference documentation]() | +|**Contribute to the SDK** | [Spring Data Azure Cosmos DB repo on GitHub](https://github.com/microsoft/spring-data-cosmosdb) | |**Spring Boot Starter**| [Azure Cosmos DB Spring Boot Starter client library for Java](https://github.com/MicrosoftDocs/azure-dev-docs/blob/master/articles/jav) | |**Spring TODO app sample with Azure Cosmos DB**| [End-to-end Java Experience in App Service Linux (Part 2)](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2) |-|**Developer's guide** | [Spring Datan Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb) | +|**Developer's guide** | [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb) | |**Using Starter** | [How to use Spring Boot Starter with the Azure Cosmos DB for NoSQL](/azure/developer/jav) | |**Sample with Azure App Service** | [How to use Spring and Azure Cosmos DB with App Service on Linux](/azure/developer/java/spring-framework/configure-spring-app-with-cosmos-db-on-app-service-linux) <br> [TODO app sample](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2.git) | |
cosmos-db | Sdk Java Spring Data V3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v3.md | Title: 'Spring Datan Azure Cosmos DB v3 for API for NoSQL release notes and resources' -description: Learn about the Spring Datan Azure Cosmos DB v3 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK. + Title: 'Spring Data Azure Cosmos DB v3 for API for NoSQL release notes and resources' +description: Learn about the Spring Data Azure Cosmos DB v3 for API for NoSQL, including release dates, retirement dates, and changes made between each version of the Azure Cosmos DB SQL Async Java SDK. -# Spring Datan Azure Cosmos DB v3 for API for NoSQL: Release notes and resources +# Spring Data Azure Cosmos DB v3 for API for NoSQL: Release notes and resources [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)] [!INCLUDE[SDK selector](../includes/cosmos-db-sdk-list.md)] -The Spring Datan Azure Cosmos DB version 3 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Datan Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact. +The Spring Data Azure Cosmos DB version 3 for NoSQL allows developers to use Azure Cosmos DB in Spring applications. Spring Data Azure Cosmos DB exposes the Spring Data interface for manipulating databases and collections, working with documents, and issuing queries. Both Sync and Async (Reactive) APIs are supported in the same Maven artifact. The [Spring Framework](https://spring.io/projects/spring-framework) is a programming and configuration model that streamlines Java application development. Spring streamlines the "plumbing" of applications by using dependency injection. Many developers like Spring because it makes building and testing applications more straightforward. [Spring Boot](https://spring.io/projects/spring-boot) extends this handling of the plumbing with an eye toward web application and microservices development. [Spring Data](https://spring.io/projects/spring-data) is a programming model and framework for accessing datastores like Azure Cosmos DB from the context of a Spring or Spring Boot application. -You can use Spring Datan Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/). +You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Spring Apps](https://azure.microsoft.com/services/spring-apps/). ## Version Support Policy This project supports multiple Spring Boot Versions. Visit [spring boot support This project supports different spring-data-commons versions. Visit [spring data version support](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-spring-data-cosmos#spring-data-version-support) for more information. -### Which Version of Azure Spring Datan Azure Cosmos DB Should I Use +### Which Version of Azure Spring Data Azure Cosmos DB Should I Use -Azure Spring Datan Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. 
Refer to [azure spring datan Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Datan Azure Cosmos DB to use with Spring Boot / Spring Cloud version. +Azure Spring Data Azure Cosmos DB library supports multiple versions of Spring Boot / Spring Cloud. Refer to [azure Spring Data Azure Cosmos DB version mapping](https://github.com/Azure/azure-sdk-for-jav#which-version-of-azure-spring-data-cosmos-should-i-use) for detailed information on which version of Azure Spring Data Azure Cosmos DB to use with Spring Boot / Spring Cloud version. > [!IMPORTANT] -> These release notes are for version 3 of Spring Datan Azure Cosmos DB. +> These release notes are for version 3 of Spring Data Azure Cosmos DB. >-> Azure Spring Datan Azure Cosmos DB SDK has dependency on the Spring Data framework, and supports only the API for NoSQL. +> Azure Spring Data Azure Cosmos DB SDK has dependency on the Spring Data framework, and supports only the API for NoSQL. > > See these articles for information about Spring Data on other Azure Cosmos DB APIs: > * [Spring Data for Apache Cassandra with Azure Cosmos DB](/azure/developer/java/spring-framework/configure-spring-data-apache-cassandra-with-cosmos-db) Azure Spring Datan Azure Cosmos DB library supports multiple versions of Spring ## Get started fast - Get up and running with Spring Datan Azure Cosmos DB by following our [Spring Boot Starter guide](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Datan Azure Cosmos DB connector. + Get up and running with Spring Data Azure Cosmos DB by following our [Spring Boot Starter guide](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db). The Spring Boot Starter approach is the recommended way to get started using the Spring Data Azure Cosmos DB connector. 
- Alternatively, you can add the Spring Datan Azure Cosmos DB dependency to your `pom.xml` file as shown below: + Alternatively, you can add the Spring Data Azure Cosmos DB dependency to your `pom.xml` file as shown below: ```xml <dependency> Azure Spring Datan Azure Cosmos DB library supports multiple versions of Spring | Content | Link | |||-| **Release notes** | [Release notes for Spring Datan Azure Cosmos DB SDK v3](https://github.com/Azure/azure-sdk-for-jav) | -| **SDK Documentation** | [Azure Spring Datan Azure Cosmos DB SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) | +| **Release notes** | [Release notes for Spring Data Azure Cosmos DB SDK v3](https://github.com/Azure/azure-sdk-for-jav) | +| **SDK Documentation** | [Azure Spring Data Azure Cosmos DB SDK v3 documentation](https://github.com/Azure/azure-sdk-for-jav) | | **SDK download** | [Maven](https://mvnrepository.com/artifact/com.azure/azure-spring-data-cosmos) | | **API documentation** | [Java API reference documentation](/java/api/overview/azure/spring-data-cosmos-readme?view=azure-java-stable&preserve-view=true) | | **Contribute to SDK** | [Azure SDK for Java Central Repo on GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/cosmos/azure-spring-data-cosmos) | -| **Get started** | [Quickstart: Build a Spring Datan Azure Cosmos DB app to manage Azure Cosmos DB for NoSQL data](./quickstart-java-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) | -| **Basic code samples** | [Azure Cosmos DB: Spring Datan Azure Cosmos DB examples for the API for NoSQL](samples-java-spring-data.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)| +| **Get started** | [Quickstart: Build a Spring Data Azure Cosmos DB app to manage Azure Cosmos DB for NoSQL data](./quickstart-java-spring-data.md) <br> [GitHub repo with quickstart code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-getting-started) | +| **Basic code samples** | [Azure Cosmos DB: Spring Data Azure Cosmos DB examples for the API for NoSQL](samples-java-spring-data.md) <br> [GitHub repo with sample code](https://github.com/Azure-Samples/azure-spring-data-cosmos-java-sql-api-samples)| | **Performance tips**| [Performance tips for Java SDK v4 (applicable to Spring Data)](performance-tips-java-sdk-v4.md)| | **Troubleshooting** | [Troubleshoot Java SDK v4 (applicable to Spring Data)](troubleshoot-java-sdk-v4.md) | | **Azure Cosmos DB workshops and labs** |[Azure Cosmos DB workshops home page](https://aka.ms/cosmosworkshop) It's strongly recommended to use version 3.28.1 and above. ## Additional notes -* Spring Datan Azure Cosmos DB supports Java JDK 8 and Java JDK 11. +* Spring Data Azure Cosmos DB supports Java JDK 8 and Java JDK 11. ## FAQ |
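The release notes above recommend the Spring Boot Starter, while the truncated `pom.xml` snippet adds the library directly instead. When the library is added without the starter, a configuration class along the following lines is typically required; this is a sketch only, and the endpoint, key, and database name are placeholders rather than values from the article.

```java
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.spring.data.cosmos.config.AbstractCosmosConfiguration;
import com.azure.spring.data.cosmos.repository.config.EnableReactiveCosmosRepositories;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Minimal Spring Data Azure Cosmos DB v3 configuration; all values are placeholders.
@Configuration
@EnableReactiveCosmosRepositories
public class CosmosConfiguration extends AbstractCosmosConfiguration {

    @Bean
    public CosmosClientBuilder cosmosClientBuilder() {
        return new CosmosClientBuilder()
                .endpoint("<your-cosmos-account-uri>")  // placeholder
                .key("<your-cosmos-account-key>");      // placeholder
    }

    @Override
    protected String getDatabaseName() {
        return "<your-database-name>";                  // placeholder
    }
}
```

With the starter, the same wiring is instead driven from `application.properties` or `application.yml`, which is the path the quickstart row earlier in this digest describes.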
cosmos-db | Tutorial Springboot Azure Kubernetes Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-springboot-azure-kubernetes-service.md | Here are some of the key points related to the Kubernetes resources for this app In this tutorial, you've learned how to deploy a Spring Boot application to Azure Kubernetes Service and use it to perform operations on data in an Azure Cosmos DB for NoSQL account. > [!div class="nextstepaction"]-> [Spring Datan Azure Cosmos DB v3 for API for NoSQL](sdk-java-spring-data-v3.md) +> [Spring Data Azure Cosmos DB v3 for API for NoSQL](sdk-java-spring-data-v3.md) |
data-factory | Managed Virtual Network Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md | The column **Using private endpoint** is always shown as blank even if you creat :::image type="content" source="./media/managed-vnet/akv-pe.png" alt-text="Screenshot that shows a private endpoint for Key Vault."::: +### Fully Qualified Domain Name (FQDN) of Azure HDInsight ++If you created a custom private link service, the FQDN should end with **azurehdinsight.net**, without a leading *privatelink* in the domain name, when you create a private endpoint. If you do use *privatelink* in the domain name, make sure it's valid and that you can resolve it. + ### Access constraints in managed virtual network with private endpoints You're unable to access each PaaS resource when both sides are exposed to Private Link and a private endpoint. This issue is a known limitation of Private Link and private endpoints. |
defender-for-cloud | Devops Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-faq.md | Title: Defender for DevOps FAQ description: If you're having issues with Defender for DevOps perhaps, you can solve it with these frequently asked questions. Previously updated : 01/26/2023 Last updated : 02/23/2023 # Defender for DevOps frequently asked questions (FAQ) If you're having issues with Defender for DevOps these frequently asked question - [Is Exemptions capability available and tracked for app sec vulnerability management](#is-exemptions-capability-available-and-tracked-for-app-sec-vulnerability-management) - [Is continuous, automatic scanning available?](#is-continuous-automatic-scanning-available) - [Is it possible to block the developers committing code with exposed secrets](#is-it-possible-to-block-the-developers-committing-code-with-exposed-secrets)-- [I am not able to configure Pull Request Annotations](#i-am-not-able-to-configure-pull-request-annotations)-- [What are the programing languages that are supported by Defender for DevOps?](#what-are-the-programing-languages-that-are-supported-by-defender-for-devops) -- [I'm getting the There's no CLI tool error in Azure DevOps](#im-getting-the-theres-no-cli-tool-error-in-azure-devops)-+- [I'm not able to configure Pull Request Annotations](#im-not-able-to-configure-pull-request-annotations) +- [What programming languages are supported by Defender for DevOps?](#what-programming-languages-are-supported-by-defender-for-devops) +- [I'm getting an error that informs me that there's no CLI tool](#im-getting-an-error-that-informs-me-that-theres-no-cli-tool) ### I'm getting an error while trying to connect -When selecting the *Authorize* button, the presently signed-in account is used, which could be the same email but different tenant. Make sure you have the right account/tenant combination selected in the popup consent screen and Visual Studio. +When you select the *Authorize* button, the account that you're logged in with is used. That account can have the same email but may have a different tenant. Make sure you have the right account/tenant combination selected in the popup consent screen and Visual Studio. -The presently signed-in account can be checked [here](https://app.vssps.visualstudio.com/profile/view). +You can [check which account is signed in](https://app.vssps.visualstudio.com/profile/view). ### Why can't I find my repository -Only TfsGit is supported on Azure DevOps service. +The Azure DevOps service only supports `TfsGit`. -Ensure that you've [onboarded your repositories](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Microsoft Defender for Cloud. If you still can't see your repository, ensure that you're signed in with the correct Azure DevOps organization user account. Your Azure subscription and Azure DevOps Organization need to be in the same tenant. If the user for the connector is wrong, you need to delete the connector that was created, sign in with the correct user account and re-create the connector. +Ensure that you've [onboarded your repositories](/azure/defender-for-cloud/quickstart-onboard-devops?branch=main) to Microsoft Defender for Cloud. If you still can't see your repository, ensure that you're signed in with the correct Azure DevOps organization user account. Your Azure subscription and Azure DevOps Organization need to be in the same tenant. 
If the user for the connector is wrong, you need to delete the previously created connector, sign in with the correct user account and re-create the connector. ### Secret scan didn't run on my code In addition to onboarding resources, you must have the [Microsoft Security DevOp If no secrets are identified through scans, the total exposed secret for the resource shows `Healthy` in Defender for Cloud. -If secret scan isn't enabled (meaning MSDO isn't configured for your pipeline) or a scan isn't performed for at least 14 days, the resource will show as `N/A` in Defender for Cloud. +If secret scan isn't enabled (meaning MSDO isn't configured for your pipeline) or a scan isn't performed for at least 14 days, the resource shows as `N/A` in Defender for Cloud. ### I donΓÇÖt see generated SARIF file in the path I chose to drop it Azure DevOps repositories only have the total exposed secrets available and will For a previously unhealthy scan result to be healthy again, updated healthy scan results need to be from the same build definition as the one that generated the findings in the first place. A common scenario where this issue occurs is when testing with different pipelines. For results to refresh appropriately, scan results need to be for the same pipeline(s) and branch(es). -If no scanning is performed for 14 days, the scan results would be revert to ΓÇ£N/AΓÇ¥. +If no scan is performed for 14 days, the scan results revert to `N/A`. ### I donΓÇÖt see Recommendations for findings Learn more about [Microsoft Privacy Statement](https://go.microsoft.com/fwLink/? ### Is Exemptions capability available and tracked for app sec vulnerability management? -Exemptions are not available for Defender for DevOps within Microsoft Defender for Cloud. +Exemptions aren't available for Defender for DevOps within Microsoft Defender for Cloud. ### Is continuous, automatic scanning available? Currently scanning occurs at build time. ### Is it possible to block the developers committing code with exposed secrets? -The ability to block developers from committing code with exposed secrets is not currently available. +The ability to block developers from committing code with exposed secrets isn't currently available. -### I am not able to configure Pull Request Annotations +### I'm not able to configure Pull Request Annotations Make sure you have write (owner/contributor) access to the subscription. -### What are the programing languages that are supported by Defender for DevOps? +### What programming languages are supported by Defender for DevOps? The following languages are supported by Defender for DevOps: - Python-- Java Script-- Type Script+- JavaScript +- TypeScript ++### I'm getting an error that informs me that there's no CLI tool ++When you run the pipeline in Azure DevOps, you receive the following error: +`no such file or directory, scandir 'D:\a\_msdo\versions\microsoft.security.devops.cli'`. + -### I'm getting the There's no CLI tool error in Azure DevOps +This error can be seen in the extensions job as well. -If when running the pipeline in Azure DevOps, you receive the following error: -"no such file or directory, scandir 'D:\a\_msdo\versions\microsoft.security.devops.cli'". -This error occurs if you are missing the dependency of `dotnet6` in the pipeline's YAML file. DotNet6 is required to allow the Microsoft Security DevOps extension to run. Include this as a task in your YAML file to eliminate the error. +This error occurs if you're missing the dependency of `dotnet6` in the pipeline's YAML file. 
DotNet6 is required to allow the Microsoft Security DevOps extension to run. Include this as a task in your YAML file to eliminate the error. You can learn more about [Microsoft Security DevOps](https://marketplace.visualstudio.com/items?itemName=ms-securitydevops.microsoft-security-devops-azdevops). |
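As an illustrative companion to that answer (not part of the quoted article), a pipeline that hits the missing-CLI error would typically add a .NET 6 installation step ahead of the scan. The task names below follow the standard `UseDotNet` and Microsoft Security DevOps tasks, but treat the exact identifiers and versions as assumptions to verify against the extension's documentation.

```yaml
steps:
  # Install the .NET 6 SDK that the Microsoft Security DevOps extension depends on.
  - task: UseDotNet@2
    displayName: Install .NET 6 SDK
    inputs:
      packageType: sdk
      version: '6.0.x'

  # Run the Microsoft Security DevOps scan after the SDK is available.
  - task: MicrosoftSecurityDevOps@1
    displayName: Run Microsoft Security DevOps
```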
defender-for-iot | Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md | For more information, see [Securing IoT devices in the enterprise](concept-enter ## Managing OT alerts in a hybrid environment -Users working in hybrid environments may be managing OT alerts in Defender for IoT on the Azure portal, the OT sensor, and an on-premises management console. +Users working in hybrid environments may be managing OT alerts in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console. Alert statuses are fully synchronized between the Azure portal and the OT sensor, and between the sensor and the on-premises management console. This means that regardless of where you manage the alert in Defender for IoT, the alert is updated in other locations as well. |
defender-for-iot | Architecture Connections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture-connections.md | Title: OT sensor cloud connection methods - Microsoft Defender for IoT description: Learn about the architecture models available for connecting your sensors to Microsoft Defender for IoT. Previously updated : 09/11/2022 Last updated : 02/23/2023 # OT sensor cloud connection methods For more information, see [Connect via proxy chaining](connect-sensors.md#connec ## Direct connections -The following image shows how you can connect your sensors to the Defender for IoT portal in Azure directly over the internet from remote sites, without transversing the enterprise network. +The following image shows how you can connect your sensors to the Defender for IoT portal in Azure directly over the internet from remote sites, without traversing the enterprise network. With direct connections |
defender-for-iot | Concept Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md | While the number of IoT devices continues to grow, they often lack the security ## IoT security across Microsoft 365 Defender and Azure -Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and Azure portals using the following methods: +Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and [Azure portals](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) using the following methods: |Method |Description and requirements | Configure in ... | |||| |
defender-for-iot | Configure Sensor Settings Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md | Define a new setting whenever you want to define a specific configuration for on **To define a new setting**: -1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)**. 1. On the **Sensor settings (Preview)** page, select **+ Add**, and then use the wizard to define the following values for your setting. Select **Next** when you're done with each tab in the wizard to move to the next step. Your new setting is now listed on the **Sensor settings (Preview)** page under i **To view the current settings already defined for your subscription**: -1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)** +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor settings (Preview)** The **Sensor settings (Preview)** page shows any settings already defined for your subscriptions, listed by setting type. Expand or collapse each type to view detailed configurations. For example: |
defender-for-iot | Faqs Eiot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-eiot.md | Enterprise IoT is designed to help customers secure un-managed devices throughou For more information, see [Onboard with Microsoft Defender for IoT](eiot-defender-for-endpoint.md). -- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in Defender for IoT in the Azure portal. Register an Enterprise IoT network sensor, currently in **Public preview** to gain visibility to additional devices that aren't covered by Defender for Endpoint.+- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. Register an Enterprise IoT network sensor, currently in **Public preview** to gain visibility to additional devices that aren't covered by Defender for Endpoint. For more information, see [Enhance device discovery with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md). To make any changes to an existing plan, you'll need to cancel your existing pla To remove only Enterprise IoT from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see [Cancel your Defender for IoT plan](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan). -To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in Defender for IoT in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan). +To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan). ## What happens when the 30-day trial ends? |
defender-for-iot | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md | This procedure describes how to add a trial Defender for IoT plan for OT network **To add your plan**: -1. In the Azure portal, go to **Defender for IoT** and select **Plans and pricing** > **Add plan**. +1. In the Azure portal, go to [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) and select **Plans and pricing** > **Add plan**. 1. In the **Plan settings** pane, define the following settings: |
defender-for-iot | How To Manage Cloud Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md | For more information, see [Azure user roles and permissions for Defender for IoT ## View alerts on the Azure portal -1. In Defender for IoT on the Azure portal, select the **Alerts** page on the left. By default, the following details are shown in the grid: +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left. By default, the following details are shown in the grid: | Column | Description |--|--| Supported grouping options include *Engine*, *Name*, *Sensor*, *Severity*, and * ## Manage alert severity and status -We recommend that you update alert severity as soon as you've triaged an alert so that you can prioritize the riskiest alerts as soon as possible. Make sure to update your alert status once you've taken remediation steps so that the progress is recorded. +We recommend that you update alert severity In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal as soon as you've triaged an alert so that you can prioritize the riskiest alerts as soon as possible. Make sure to update your alert status once you've taken remediation steps so that the progress is recorded. You can update both severity and status for a single alert or for a selection of alerts in bulk. Downloading the PCAP file can take several minutes, depending on the quality of You may want to export a selection of alerts to a CSV file for offline sharing and reporting. -1. In Defender for IoT on the Azure portal, select the **Alerts** page on the left. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left. 1. Use the search box and filter options to show only the alerts you want to export. |
defender-for-iot | How To Manage Device Inventory For Organizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md | -Use the **Device inventory** page in the Azure portal to manage all network devices detected by cloud-connected sensors, including OT, IoT, and IT. Identify new devices detected, devices that might need troubleshooting, and more. +Use the **Device inventory** page in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal to manage all network devices detected by cloud-connected sensors, including OT, IoT, and IT. Identify new devices detected, devices that might need troubleshooting, and more. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device). |
defender-for-iot | How To Manage Individual Sensors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md | Select the link in each widget to drill down for more information in your sensor ### Validate connectivity status -Verify that your sensor is successfully connected to the Azure portal directly from the sensor's **Overview** page. +Verify that your sensor is successfully connected to the Azure portal directly from the sensor's **Overview** page. If there are any connection issues, a disconnection message is shown in the **General Settings** area on the **Overview** page, and a **Service connection error** warning appears at the top of the page in the :::image type="icon" source="media/how-to-manage-individual-sensors/bell-icon.png" border="false"::: **System Messages** area. For example: If there are any connection issues, a disconnection message is shown in the **Ge :::image type="content" source="media/how-to-manage-individual-sensors/system-messages.png" alt-text="Screenshot of the system messages pane." lightbox="media/how-to-manage-individual-sensors/system-messages.png"::: - ## Download software for OT sensors You may need to download software for your OT sensor if you're [installing Defender for IoT software](ot-deploy/install-software-ot-sensor.md) on your own appliances, or [updating software versions](update-ot-software.md). -In Defender for IoT in the Azure portal, use one of the following options: +In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, use one of the following options: - For a new installation, select **Getting started** > **Sensor**. Select a version in the **Purchase an appliance and install software** area, and then select **Download**. You'll need an SMTP mail server configured to enable email alerts about disconne **Prerequisites**: -Make sure you can reach the SMTP server from the [sensor's management port](./best-practices/understand-network-architecture.md). +Make sure you can reach the SMTP server from the [sensor's management port](./best-practices/understand-network-architecture.md). **To configure an SMTP server on your sensor**: |
defender-for-iot | How To Manage The On Premises Management Console | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-the-on-premises-management-console.md | This article covers on-premises management console options like backup and resto You may need to download software for your on-premises management console if you're [installing Defender for IoT software](ot-deploy/install-software-on-premises-management-console.md) on your own appliances, or [updating software versions](update-ot-software.md). -In Defender for IoT in the Azure portal, use one of the following options: +In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, use one of the following options: -- For a new installation or standalone update, select **Getting started** > **On-premises management console**. +- For a new installation or standalone update, select **Getting started** > **On-premises management console**. - - For a new installation, select a version in the **Purchase an appliance and install software** area, and then select **Download**. + - For a new installation, select a version in the **Purchase an appliance and install software** area, and then select **Download**. - For an update, select your update scenario in the **On-premises management console** area and then select **Download**. - If you're updating your on-premises management console together with connected OT sensors, use the options in the **Sites and sensors** page > **Sensor update (Preview)** menu. In Defender for IoT in the Azure portal, use one of the following options: [!INCLUDE [root-of-trust](includes/root-of-trust.md)] For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md#update-an-on-premises-management-console).+ ## Upload an activation file When you first sign in, an activation file for the on-premises management console is downloaded. This file contains the aggregate committed devices that are defined during the onboarding process. The list includes sensors associated with multiple subscriptions. |
defender-for-iot | How To Work With Threat Intelligence Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md | To perform the procedures in this article, make sure that you have: - Relevant permissions on the Azure portal and any OT network sensors or on-premises management console you want to update. - - **To download threat intelligence packages from the Azure portal**, you need access to the Azure portal as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role. + - **To download threat intelligence packages from the Azure portal**, you need access to the Azure portal as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role. - - **To push threat intelligence updates to cloud-connected OT sensors from the Azure portal**, you need access to Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role. + - **To push threat intelligence updates to cloud-connected OT sensors from the Azure portal**, you need access to Azure portal as a [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role. - - **To manually upload threat intelligence packages to OT sensors or on-premises management consoles**, you need access to the OT sensor or on-premises management console as an **Admin** user. + - **To manually upload threat intelligence packages to OT sensors or on-premises management consoles**, you need access to the OT sensor or on-premises management console as an **Admin** user. For more information, see [Azure user roles and permissions for Defender for IoT](roles-azure.md) and [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). - ## View the most recent threat intelligence package To view the most recent package delivered, in the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**. Update threat intelligence packages on your OT sensors using any of the followin ### Automatically push updates to cloud-connected sensors -Threat intelligence packages can be automatically updated to *cloud connected* sensors as they're released by Defender for IoT. +Threat intelligence packages can be automatically updated to *cloud connected* sensors as they're released by Defender for IoT. Ensure automatic package update by onboarding your cloud connected sensor with the **Automatic Threat Intelligence Updates** option enabled. For more information, see [Onboard a sensor](tutorial-onboarding.md#onboard-and-activate-the-virtual-sensor). **To change the update mode after you've onboarded your OT sensor**: -1. 
In Defender for IoT on the Azure portal, select **Sites and sensors**, and then locate the sensor you want to change. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**, and then locate the sensor you want to change. 1. Select the options (**...**) menu for the selected OT sensor > **Edit**. 1. Toggle on or toggle off the **Automatic Threat Intelligence Updates** option as needed. Your *cloud connected* sensors can be automatically updated with threat intellig **To manually push updates to a single OT sensor**: -1. In Defender for IoT on the Azure portal, select **Sites and sensors**, and locate the OT sensor you want to update. -1. Select the options (**...**) menu for the selected sensor and then select **Push Threat Intelligence update**. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**, and locate the OT sensor you want to update. +1. Select the options (**...**) menu for the selected sensor and then select **Push Threat Intelligence update**. The **Threat Intelligence update status** field displays the update progress. **To manually push updates to multiple OT sensors**: -1. In Defender for IoT on the Azure portal, select **Sites and sensors**. Locate and select the OT sensors you want to update. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors**. Locate and select the OT sensors you want to update. 1. Select **Threat intelligence updates (Preview)** > **Remote update**. The **Threat Intelligence update status** field displays the update progress for each selected sensor. If you're also working with an on-premises management console, we recommend that **To download threat intelligence packages**: -1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Threat intelligence update (Preview)** > **Local update**. 1. In the **Sensor TI update** pane, select **Download** to download the latest threat intelligence file. For example: On each OT sensor, the threat intelligence update status and version information For cloud-connected OT sensors, threat intelligence data is also shown in the **Sites and sensors** page. To view threat intelligence statues from the Azure portal: -1. In Defender for IoT on the Azure portal, select **Site and sensors**. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Site and sensors**. 1. Locate the OT sensors where you want to check the threat intelligence statues. For cloud-connected OT sensors, threat intelligence data is also shown in the ** > [!TIP] > If a cloud-connected OT sensor shows that a threat intelligence update has failed, we recommend that your check your sensor connection details. On the **Sites and sensors** page, check the **Sensor status** and **Last connected UTC** columns. - ## Next steps For more information, see: |
defender-for-iot | Manage Subscriptions Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md | This procedure describes how to add an Enterprise IoT plan to your Azure subscri :::image type="content" source="media/enterprise-iot/defender-for-endpoint-onboard.png" alt-text="Screenshot of the Enterprise IoT tab in Defender for Endpoint." lightbox="media/enterprise-iot/defender-for-endpoint-onboard.png"::: -After you've onboarded your plan, you'll see it listed in Defender for IoT in the Azure portal. Go to the Defender for IoT **Plans and pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example: +After you've onboarded your plan, you'll see it listed in [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. Go to the Defender for IoT **Plans and pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example: :::image type="content" source="media/enterprise-iot/eiot-plan-in-azure.png" alt-text="Screenshot of an Enterprise IoT plan showing in the Defender for IoT Plans and pricing page."::: |
defender-for-iot | Respond Ot Alert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/respond-ot-alert.md | Triage alerts on a regular basis to prevent alert fatigue in your network and en **To triage alerts**: -1. In Defender for IoT in the Azure portal, go to the **Alerts** page. By default, alerts are sorted by the **Last detection** column, from most recent to oldest alert, so that you can first see the latest alerts in your network. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, go to the **Alerts** page. By default, alerts are sorted by the **Last detection** column, from most recent to oldest alert, so that you can first see the latest alerts in your network. 1. Use other filters, such as **Sensor** or **Severity** to find specific alerts. |
defender-for-iot | Update Ot Software | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md | This procedure describes how to send a software version update to one or more OT ### Send the software update to your OT sensor -1. In Defender for IoT in the Azure portal, select **Sites and sensors** and then locate the OT sensors with legacy, but [supported versions](#prerequisites) installed. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal, select **Sites and sensors** and then locate the OT sensors with legacy, but [supported versions](#prerequisites) installed. If you know your site and sensor name, you can browse or search for it directly. Alternately, filter the sensors listed to show only cloud-connected, OT sensors that have *Remote updates supported*, and have legacy software version installed. For example: This procedure describes how to manually download the new sensor software versio ### Download the update package from the Azure portal -1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**. 1. In the **Local update** pane, select the software version that's currently installed on your sensors. The software version on your on-premises management console must be equal to tha > ### Download the update packages from the Azure portal -1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**. 1. In the **Local update** pane, select the software version that's currently installed on your sensors. This procedure describes how to update OT sensor software via the CLI, directly ### Download the update package from the Azure portal -1. In Defender for IoT on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** > **Sensor update (Preview)**. 1. In the **Local update** pane, select the software version that's currently installed on your sensors. Updating an on-premises management console takes about 30 minutes. This procedure describes how to download an update package for a standalone update. If you're updating your on-premises management console together with connected sensors, we recommend using the **[Update sensors (Preview)](#update-ot-sensors)** menu from on the **Sites and sensors** page instead. -1. In Defender for IoT on the Azure portal, select **Getting started** > **On-premises management console**. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Getting started** > **On-premises management console**. 1. In the **On-premises management console** area, select the download scenario that best describes your update, and then select **Download**. 
For more information, see [Versioning and support for on-premises software versi **To update a legacy OT sensor version** -1. In Defender for IoT on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update. +1. In [Defender for IoT](https://ms.portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select **Sites and sensors** and then select the legacy OT sensor you want to update. 1. Select the **Prepare to update to 22.X** option from the toolbar or from the options (**...**) from the sensor row. |
event-grid | Event Schema Data Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-data-box.md | + + Title: Azure Data Box as Event Grid source +description: Describes the properties that are provided for Data Box events with Azure Event Grid. + Last updated : 02/09/2023+++# Azure Data Box as an Event Grid source ++This article provides the properties and schema for Azure Data Box events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md). ++## Data Box events ++ |Event name |Description| + |-|--| + | Microsoft.DataBox.CopyStarted |Triggered when the copy has started from the device and the first byte of data copy is copied. | + |Microsoft.DataBox.CopyCompleted |Triggered when the copy has completed from device.| + | Microsoft.DataBox.OrderCompleted |Triggered when the order has completed copying and copy logs are available. | ++### Example events ++# [Event Grid event schema](#tab/event-grid-event-schema) ++### Microsoft.DataBox.CopyStarted event ++```json +[{ + "topic": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}", + "subject": "/jobs/{your-resource}", + "eventType": "Microsoft.DataBox.CopyStarted", + "id": "049ec3f6-5b7d-4052-858e-6f4ce6a46570", + "data": { + "serialNumber": "SampleSerialNumber", + "stageName": "CopyStarted", + "stageTime": "2022-10-12T19:38:08.0218897Z" + }, + "dataVersion": "1", + "metadataVersion": "1", + "eventTime": "2022-10-16T02:51:26.4248221Z" +}] +``` ++### Microsoft.DataBox.CopyCompleted event ++```json +[{ + "topic": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}", + "subject": "/jobs/{your-resource}", + "eventType": "Microsoft.DataBox.CopyCompleted", + "id": "759c892a-a628-4e48-a116-2e1d54c555ce", + "data": { + "serialNumber": "SampleSerialNumber", + "stageName": "CopyCompleted", + "stageTime": "2022-10-12T19:38:08.0218897Z" + }, + "dataVersion": "1", + "metadataVersion": "1", + "eventTime": "2022-10-16T02:58:18.503829Z" +}] +``` ++### Microsoft.DataBox.OrderCompleted event ++```json +{ + "topic": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}", + "subject": "/jobs/{your-resource}", + "eventType": "Microsoft.DataBox.OrderCompleted", + "id": "5eb07c79-39a8-439c-bb4b-bde1f6267c37", + "data": { + "serialNumber": "SampleSerialNumber", + "stageName": "OrderCompleted", + "stageTime": "2022-10-12T19:38:08.0218897Z" + }, + "dataVersion": "1", + "metadataVersion": "1", + "eventTime": "2022-10-16T02:51:26.4248221Z" +} +``` ++# [Cloud event schema](#tab/cloud-event-schema) ++### Microsoft.DataBox.CopyStarted event ++```json +[{ + "source": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}", + "subject": "/jobs/{your-resource}", + "type": "Microsoft.DataBox.CopyStarted", + "time": "2022-10-16T02:51:26.4248221Z", + "id": "049ec3f6-5b7d-4052-858e-6f4ce6a46570", + "data": { + "serialNumber": "SampleSerialNumber", + "stageName": "CopyStarted", + "stageTime": "2022-10-12T19:38:08.0218897Z" + }, + "specVersion": "1.0" +}] +``` ++### Microsoft.DataBox.CopyCompleted event ++```json +{ + "source": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}", + "subject": "/jobs/{your-resource}", + "type": "Microsoft.DataBox.CopyCompleted", + "time": "2022-10-16T02:51:26.4248221Z", + "id": 
"759c892a-a628-4e48-a116-2e1d54c555ce", + "data": { + "serialNumber": "SampleSerialNumber", + "stageName": "CopyCompleted", + "stageTime": "2022-10-12T19:38:08.0218897Z" + }, + "specVersion": "1.0" +} +``` ++### Microsoft.DataBox.OrderCompleted event ++```json +[{ + "source": "/subscriptions/{subscription-id}/resourceGroups/{your-rg}/providers/Microsoft.DataBox/jobs/{your-resource}", + "subject": "/jobs/{your-resource}", + "type": "Microsoft.DataBox.OrderCompleted", + "time": "2022-10-16T02:51:26.4248221Z", + "id": "5eb07c79-39a8-439c-bb4b-bde1f6267c37", + "data": { + "serialNumber": "SampleSerialNumber", + "stageName": "OrderCompleted", + "stageTime": "2022-10-12T19:38:08.0218897Z" + }, + "specVersion": "1.0" +}] +``` ++++## Next steps ++* For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md) +* For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md). |
frontdoor | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/best-practices.md | Front Door's features work best when traffic only flows through Front Door. You When you work with Front Door by using APIs, ARM templates, Bicep, or Azure SDKs, it's important to use the latest available API or SDK version. API and SDK updates occur when new functionality is available, and also contain important security patches and bug fixes. +### Configure logs ++Front Door tracks extensive telemetry about every request. When you enable caching, your origin servers might not receive every request, so it's important that you use the Front Door logs to understand how your solution is running and responding to your clients. For more information about the metrics and logs that Azure Front Door records, see [Monitor metrics and logs in Azure Front Door](front-door-diagnostics.md) and [WAF logs](../web-application-firewall/afds/waf-front-door-monitor.md#waf-logs). ++To configure logging for your own application, see [Configure Azure Front Door logs](./standard-premium/how-to-logs.md) + ## TLS best practices ### Use end-to-end TLS You can configure Front Door to automatically redirect HTTP requests to use the ### Use managed TLS certificates -When Front Door manages your TLS certificates, it reduces your operational costs, and helps you to avoid costly outages caused by forgetting to renew a certificate. Front Door automatically issues and rotates managed TLS certificates. +When Front Door manages your TLS certificates, it reduces your operational costs, and helps you to avoid costly outages caused by forgetting to renew a certificate. Front Door automatically issues and rotates the managed TLS certificates. For more information, see [Configure HTTPS on an Azure Front Door custom domain using the Azure portal](standard-premium/how-to-configure-https-custom-domain.md). For more information, see [Select the certificate for Azure Front Door to deploy ### Use the same domain name on Front Door and your origin -Front Door can rewrite the `Host` header of incoming requests. This feature can be helpful when you manage a set of customer-facing custom domain names that route to a single origin. The feature can also help when you want to avoid configuring custom domain names in Front Door and at your origin. However, when you rewrite the `Host` header, request cookies and URL redirections might break. In particular, when you use platforms like Azure App Service, features like [session affinity](../app-service/configure-common.md#configure-general-settings) and [authentication and authorization](../app-service/overview-authentication-authorization.md) might not work correctly. +Front Door can rewrite the `Host` header of incoming requests. This feature can be helpful when you manage a set of customer-facing custom domain names that route to a single origin. This feature can also help when you want to avoid configuring custom domain names in Front Door and at your origin. However, when you rewrite the `Host` header, request cookies and URL redirections might break. In particular, when you use platforms like Azure App Service, features like [session affinity](../app-service/configure-common.md#configure-general-settings) and [authentication and authorization](../app-service/overview-authentication-authorization.md) might not work correctly. Before you rewrite the `Host` header of your requests, carefully consider whether your application is going to work correctly. 
For more information, see [Preserve the original HTTP host name between a revers ### Enable the WAF -For internet-facing applications, we recommend you enable the Front Door web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a range of attacks. +For internet-facing applications, we recommend you enable the Front Door web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a wide range of attacks. For more information, see [Web Application Firewall (WAF) on Azure Front Door](web-application-firewall.md). |
frontdoor | Front Door Caching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md | If there's more than one key-value pair in a query string of a request then thei When you configure caching, you specify how the cache should handle query strings. The following behaviors are supported: -* **Ignore query strings**: In this mode, Azure Front Door passes the query strings from the client to the origin on the first request and caches the asset. All ensuing requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires. +* **Ignore Query String**: In this mode, Azure Front Door passes the query strings from the client to the origin on the first request and caches the asset. Future requests for the asset that are served from the Front Door environment ignore the query strings until the cached asset expires. -* **Cache every unique URL**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the origin for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing caches with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting. +* **Use Query String**: In this mode, each request with a unique URL, including the query string, is treated as a unique asset with its own cache. For example, the response from the origin for a request for `www.example.ashx?q=test1` is cached at the Front Door environment and returned for ensuing caches with the same query string. A request for `www.example.ashx?q=test2` is cached as a separate asset with its own time-to-live setting. ++ The order of the query string parameters doesn't matter. For example, if the Azure Front Door environment includes a cached response for the URL `www.example.ashx?q=test1&r=test2`, then a request for `www.example.ashx?r=test2&q=test1` is also served from the cache. ::: zone pivot="front-door-standard-premium" -* **Specify cache key query string** behavior to include or exclude specified parameters when the cache key is generated. +* **Ignore Specified Query Strings** and **Include Specified Query Strings**: In this mode, you can configure Azure Front Door to include or exclude specified parameters when the cache key is generated. - For example, suppose that the default cache key is `/foo/image/asset.html`, and a request is made to the URL `https://contoso.com//foo/image/asset.html?language=EN&userid=100&sessionid=200`. If there's a rules engine rule to exclude the `userid` query string parameter, then the query string cache key would be `/foo/image/asset.html?language=EN&sessionid=200`. + For example, suppose that the default cache key is `/foo/image/asset.html`, and a request is made to the URL `https://contoso.com/foo/image/asset.html?language=EN&userid=100&sessionid=200`. If there's a rules engine rule to exclude the `userid` query string parameter, then the query string cache key would be `/foo/image/asset.html?language=EN&sessionid=200`. Configure the query string behavior on the Front Door route. In addition, Front Door attaches the `X-Cache` header to all responses. The `X-C - `PRIVATE_NOSTORE`: Request can't be cached because the *Cache-Control* response header is set to either *private* or *no-store*. - `CONFIG_NOCACHE`: Request is configured to not cache in the Front Door profile. 
+## Logs and reports + ::: zone pivot="front-door-standard-premium" -## Logs and reports +The [access log](front-door-diagnostics.md#access-log) includes the cache status for each request. Also, [reports](standard-premium/how-to-reports.md#caching-report) include information about how Azure Front Door's cache is used in your application. ++ -The [Front Door Access Log](standard-premium/how-to-logs.md#access-log) includes the cache status for each request. Also, [reports](standard-premium/how-to-reports.md#caching) include information about how Front Door's cache is used in your application. +The [access log](front-door-diagnostics.md#access-log) includes the cache status for each request. ::: zone-end Cache behavior and duration can be configured in Rules Engine. Rules Engine cach * **When caching is disabled**, Azure Front Door doesn't cache the response contents, irrespective of the origin response directives. -* **When caching is enabled**, the cache behavior differs based on the cache behavior value applied by the Rules Engine: +* **When caching is enabled**, the cache behavior is different depending on the cache behavior value applied by the Rules Engine: * **Honor origin**: Azure Front Door will always honor origin response header directive. If the origin directive is missing, Azure Front Door will cache contents anywhere from one to three days. * **Override always**: Azure Front Door will always override with the cache duration, meaning that it will cache the contents for the cache duration ignoring the values from origin response directives. This behavior will only be applied if the response is cacheable. |
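To make the query string behaviors described above concrete, here's a small, hypothetical Python sketch. It is not Azure Front Door's actual cache-key implementation; it only illustrates how *Ignore Query String*, *Use Query String*, and the include/exclude variants affect which parts of the URL identify a cached asset, reusing the `contoso.com` example from the article:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit

# Illustration only -- not Front Door's actual cache-key algorithm.
def cache_key(url, behavior, query_parameters=None):
    parts = urlsplit(url)
    params = sorted(parse_qsl(parts.query))  # parameter order doesn't affect the key
    query_parameters = query_parameters or set()

    if behavior == "Ignore Query String":
        kept = []
    elif behavior == "Use Query String":
        kept = params
    elif behavior == "Ignore Specified Query Strings":
        kept = [(k, v) for k, v in params if k not in query_parameters]
    elif behavior == "Include Specified Query Strings":
        kept = [(k, v) for k, v in params if k in query_parameters]
    else:
        raise ValueError(f"Unknown behavior: {behavior}")

    return parts.path + ("?" + urlencode(kept) if kept else "")

# Excluding `userid` keeps `language` and `sessionid` in the key, matching the article's example.
print(cache_key(
    "https://contoso.com/foo/image/asset.html?language=EN&userid=100&sessionid=200",
    "Ignore Specified Query Strings",
    {"userid"},
))  # -> /foo/image/asset.html?language=EN&sessionid=200
```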
frontdoor | Front Door Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-diagnostics.md | Title: Monitor metrics and logs in Azure Front Door (classic) -description: This article describes the different metrics and access logs that Azure Front Door (classic) supports + Title: Monitor metrics and logs - Azure Front Door +description: This article describes the different metrics and logs that Azure Front Door records. Previously updated : 03/22/2022 Last updated : 02/23/2023 +zone_pivot_groups: front-door-tiers -# Monitor metrics and logs in Azure Front Door (classic) +# Monitor metrics and logs in Azure Front Door ++Azure Front Door provides several features to help you monitor your application, track requests, and debug your Front Door configuration. ++Logs and metrics are stored and managed by [Azure Monitor](../azure-monitor/overview.md). +++[Reports](standard-premium/how-to-reports.md) provide insight into how your traffic is flowing through Azure Front Door, the web application firewall (WAF), and to your application. ++## Metrics ++Azure Front Door measures and sends its metrics in 60-second intervals. The metrics can take up to 3 minutes to be processed by Azure Monitor, and they might not appear until processing is completed. Metrics can also be displayed in charts or grids, and are accessible through the Azure portal, Azure PowerShell, the Azure CLI, and the Azure Monitor APIs. For more information, see [Azure Monitor metrics](../azure-monitor/essentials/data-platform-metrics.md). ++The metrics listed in the following table are recorded and stored free of charge for a limited period of time. For an extra cost, you can store for a longer period of time. ++| Metrics | Description | Dimensions | +| - | - | - | +| Byte Hit Ratio | The percentage of traffic that was served from the Azure Front Door cache, computed against the total egress traffic. The byte hit ratio is low if most of the traffic is forwarded to the origin rather than served from the cache. <br/><br/> **Byte Hit Ratio** = (egress from edge - egress from origin)/egress from edge. <br/><br/> Scenarios excluded from bytes hit ratio calculations:<ul><li>You explicitly disable caching, either through the Rules Engine or query string caching behavior.</li><li>You explicitly configure a `Cache-Control` directive with the `no-store` or `private` cache directives.</li></ul> | Endpoint | +| Origin Health Percentage | The percentage of successful health probes sent from Azure Front Door to origins.| Origin, Origin Group | +| Origin Latency | The time calculated from when the request was sent by the Azure Front Door edge to the origin until Azure Front Door received the last response byte from the origin. | Endpoint, Origin | +| Origin Request Count | The number of requests sent from Azure Front Door to origins. | Endpoint, Origin, HTTP Status, HTTP Status Group | +| Percentage of 4XX | The percentage of all the client requests for which the response status code is 4XX. | Endpoint, Client Country, Client Region | +| Percentage of 5XX | The percentage of all the client requests for which the response status code is 5XX. | Endpoint, Client Country, Client Region | +| Request Count | The number of client requests served through Azure Front Door, including requests served entirely from the cache. | Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group | +| Request Size | The number of bytes sent in requests from clients to Azure Front Door. 
| Endpoint, Client Country, client Region, HTTP Status, HTTP Status Group | +| Response Size | The number of bytes sent as responses from Front Door to clients. |Endpoint, client Country, client Region, HTTP Status, HTTP Status Group | +| Total Latency | The total time taken from when the client request was received by Azure Front Door until the last response byte was sent from Azure Front Door to the client. |Endpoint, Client Country, Client Region, HTTP Status, HTTP Status Group | +| Web Application Firewall Request Count | The number of requests processed by the Azure Front Door web application firewall. | Action, Policy Name, Rule Name | ++> [!NOTE] +> If a request to the origin times out, the value of the *Http Status* dimension is **0**. ++## Logs ++Logs track all requests that pass through Azure Front Door. It can take a few minutes for logs to be processed and stored. ++There are multiple Front Door logs, which you can use for different purposes: ++- [Access logs](#access-log) can be used to identify slow requests, determine error rates, and understand how Front Door's caching behavior is working for your solution. +- Web application firewall (WAF) logs can be used to detect potential attacks, and false positive detections that might indicate legitimate requests that the WAF blocked. For more information on the WAF logs, see [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md). +- [Health probe logs](#health-probe-log) can be used to identify origins that are unhealthy or that don't respond to requests from some of Front Door's geographically distributed PoPs. +- [Activity logs](#activity-logs) provide visibility into the operations performed on your Azure resources, such as configuration changes to your Azure Front Door profile. ++The activity log and web application firewall log includes a *tracking reference*, which is also propagated in requests to origins and to client responses by using the `X-Azure-Ref` header. You can use the tracking reference to gain an end-to-end view of your application request processing. ++Access logs, health probe logs, and WAF logs aren't enabled by default. To enable and store your diagnostic logs, see [Configure Azure Front Door logs](./standard-premium/how-to-logs.md). Activity log entries are collected by default, and you can view them in the Azure portal. ++## <a name="access-log"></a>Access log ++Information about every request is logged into the access log. Each access log entry contains the information listed in the following table. ++| Property | Description | +|-|-| +| TrackingReference | The unique reference string that identifies a request served by Azure Front Door. The tracking reference is sent to the client and to the origin by using the `X-Azure-Ref` headers. Use the tracking reference when searching for a specific request in the access or WAF logs. | +| Time | The date and time when the Azure Front Door edge delivered requested contents to client (in UTC). | +| HttpMethod | HTTP method used by the request: DELETE, GET, HEAD, OPTIONS, PATCH, POST, or PUT. | +| HttpVersion | The HTTP version that the client specified in the request. | +| RequestUri | The URI of the received request. This field contains the full scheme, port, domain, path, and query string. | +| HostName | The host name in the request from client. If you enable custom domains and have wildcard domain (`*.contoso.com`), the HostName log field's value is `subdomain-from-client-request.contoso.com`. 
If you use the Azure Front Door domain (`contoso-123.z01.azurefd.net`), the HostName log field's value is `contoso-123.z01.azurefd.net`. | +| RequestBytes | The size of the HTTP request message in bytes, including the request headers and the request body. | +| ResponseBytes | The size of the HTTP response message in bytes. | +| UserAgent | The user agent that the client used. Typically, the user agent identifies the browser type. | +| ClientIp | The IP address of the client that made the original request. If there was an `X-Forwarded-For` header in the request, then the client IP address is taken from the header. | +| SocketIp | The IP address of the direct connection to the Azure Front Door edge. If the client used an HTTP proxy or a load balancer to send the request, the value of SocketIp is the IP address of the proxy or load balancer. | +| timeTaken | The length of time from when the Azure Front Door edge received the client's request to the time that Azure Front Door sent the last byte of the response to the client, in seconds. This field doesn't take into account network latency and TCP buffering. | +| RequestProtocol | The protocol that the client specified in the request. Possible values include: **HTTP**, **HTTPS**. | +| SecurityProtocol | The TLS/SSL protocol version used by the request, or null if the request didn't use encryption. Possible values include: **SSLv3**, **TLSv1**, **TLSv1.1**, **TLSv1.2**. | +| SecurityCipher | When the value for the request protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and Azure Front Door. | +| Endpoint | The domain name of the Azure Front Door endpoint, such as `contoso-123.z01.azurefd.net`. | +| HttpStatusCode | The HTTP status code returned from Azure Front Door. If the request to the origin timed out, the value for the HttpStatusCode field is **0**. If the client closed the connection, the value for the HttpStatusCode field is **499**. | +| Pop | The Azure Front Door edge point of presence (PoP) that responded to the user request. | +| Cache Status | How the request was handled by the Azure Front Door cache. Possible values are: <ul><li>**HIT** and **REMOTE_HIT**: The HTTP request was served from the Azure Front Door cache.</li><li>**MISS**: The HTTP request was served from origin. </li><li> **PARTIAL_HIT**: Some of the bytes were served from the Front Door edge PoP cache, and other bytes were served from the origin. This status indicates an [object chunking](./front-door-caching.md#delivery-of-large-files) scenario. </li><li> **CACHE_NOCONFIG**: The request was forwarded without caching settings, including bypass scenarios. </li><li> **PRIVATE_NOSTORE**: There was no cache configured in the caching settings by the customer. </li><li> **N/A**: The request was denied by a signed URL or the Rules Engine.</li></ul> | +| MatchedRulesSetName | The names of the Rules Engine rules that were processed. | +| RouteName | The name of the route that the request matched. | +| ClientPort | The IP port of the client that made the request. | +| Referrer | The URL of the site that originated the request. | +| TimetoFirstByte | The length of time, in seconds, from when the Azure Front Door edge received the request to the time the first byte was sent to client, as measured by Azure Front Door. This property doesn't measure the client data. | +| ErrorInfo | If an error occurred during the processing of the request, this field provides detailed information about the error. 
Possible values are: <ul><li> **NoError**: Indicates no error was found. </li><li> **CertificateError**: Generic SSL certificate error. </li><li> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match the requested URL. </li><li> **ClientDisconnected**: The request failed because of a client network connection issue. </li><li> **ClientGeoBlocked**: The client was blocked due to the geographical location of the IP address. </li><li> **UnspecifiedClientError**: Generic client error. </li><li> **InvalidRequest**: Invalid request. This response indicates a malformed header, body, or URL. </li><li> **DNSFailure**: A failure occurred during DNS resolution. </li><li> **DNSTimeout**: The DNS query to resolve the origin IP address timed out. </li><li> **DNSNameNotResolved**: The server name or address couldn't be resolved. </li><li> **OriginConnectionAborted**: The connection with the origin was disconnected abnormally. </li><li> **OriginConnectionError**: Generic origin connection error. </li><li> **OriginConnectionRefused**: The connection with the origin wasn't established. </li><li> **OriginError**: Generic origin error. </li><li> **OriginInvalidRequest**: An invalid request was sent to the origin. </li><li> **ResponseHeaderTooBig**: The origin returned a response header that was too large. </li><li> **OriginInvalidResponse**: The origin returned an invalid or unrecognized response. </li><li> **OriginTimeout**: The timeout period for the origin request expired. </li><li> **RestrictedIP**: The request was blocked because of a restricted IP address. </li><li> **SSLHandshakeError**: Azure Front Door was unable to establish a connection with the origin because of an SSL handshake failure. </li><li> **SSLInvalidRootCA**: The root certification authority's certificate was invalid. </li><li> **SSLInvalidCipher**: The HTTPS connection was established using an invalid cipher. </li><li> **UnspecifiedError**: An error occurred that didn’t fit in any of the errors in the table. </li></ul> | +| OriginURL | The full URL of the origin where the request was sent. The URL is composed of the scheme, host header, port, path, and query string. <br> **URL rewrite**: If the request URL was rewritten by the Rules Engine, the path refers to the rewritten path. <br> **Cache on edge PoP**: If the request was served from the Azure Front Door cache, the origin is **N/A**. <br> **Large request**: If the requested content is large and there are multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Object Chunking](./front-door-caching.md#delivery-of-large-files). | +| OriginIP | The IP address of the origin that served the request. <br> **Cache on edge PoP**: If the request was served from the Azure Front Door cache, the origin is **N/A**. <br> **Large request**: If the requested content is large and there are multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Object Chunking](./front-door-caching.md#delivery-of-large-files). | +| OriginName| The full hostname (DNS name) of the origin.
<br> **Cache on edge PoP**: If the request was served from the Azure Front Door cache, the origin is **N/A**. <br> **Large request**: If the requested content is large and there are multiple chunked requests going back to the origin, this field corresponds to the first request to the origin. For more information, see [Object Chunking](./front-door-caching.md#delivery-of-large-files). | ++## Health probe log ++Azure Front Door logs every failed health probe request. These logs can help you to diagnose problems with an origin. The logs provide you with information that you can use to investigate the failure reason and then bring the origin back to a healthy status. ++Some scenarios this log can be useful for are: ++- You noticed Azure Front Door traffic was sent to a subset of the origins. For example, you might have noticed that only three out of four origins receive traffic. You want to know if the origins are receiving and responding to health probes so you know whether the origins are healthy. +- You noticed the origin health percentage metric is lower than you expected. You want to know which origins are recorded as unhealthy and the reason for the health probe failures. ++Each health probe log entry has the following schema: ++| Property | Description | +| | | +| HealthProbeId | A unique ID to identify the health probe request. | +| Time | The date and time when the health probe was sent (in UTC). | +| HttpMethod | The HTTP method used by the health probe request. Values include **GET** and **HEAD**, based on the health probe's configuration. | +| Result | The status of the health probe. The value is either **success** or a description of the error the probe received. | +| HttpStatusCode | The HTTP status code returned by the origin. | +| ProbeURL | The full target URL to where the probe request was sent. The URL is composed of the scheme, host header, path, and query string. | +| OriginName | The name of the origin that the health probe was sent to. This field helps you to locate origins of interest if the origin is configured to use an FQDN. | +| POP | The edge PoP that sent the probe request. | +| Origin IP | The IP address of the origin that the health probe was sent to. | +| TotalLatency | The time from when the Azure Front Door edge sent the health probe request to the origin to when the origin sent the last response to Azure Front Door. | +| ConnectionLatency| The time spent setting up the TCP connection to send the HTTP probe request to the origin. | +| DNSResolution Latency | The time spent on DNS resolution. This field only has a value if the origin is configured to be an FQDN instead of an IP address. If the origin is configured to use an IP address, the value is **N/A**. | ++The following example JSON snippet shows a health probe log entry for a failed health probe request.
++```json +{ + "records": [ + { + "time": "2021-02-02T07:15:37.3640748Z", + "resourceId": "/SUBSCRIPTIONS/27CAFCA8-B9A4-4264-B399-45D0C9CCA1AB/RESOURCEGROUPS/AFDXPRIVATEPREVIEW/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDXPRIVATEPREVIEW-JESSIE", + "category": "FrontDoorHealthProbeLog", + "operationName": "Microsoft.Cdn/Profiles/FrontDoorHealthProbeLog/Write", + "properties": { + "healthProbeId": "9642AEA07BA64675A0A7AD214ACF746E", + "POP": "MAA", + "httpVerb": "HEAD", + "result": "OriginError", + "httpStatusCode": "400", + "probeURL": "http://afdxprivatepreview.blob.core.windows.net:80/", + "originName": "afdxprivatepreview.blob.core.windows.net", + "originIP": "52.239.224.228:80", + "totalLatencyMilliseconds": "141", + "connectionLatencyMilliseconds": "68", + "DNSLatencyMicroseconds": "1814" + } + } + ] +} +``` ++## Web application firewall log ++For more information on the Front Door web application firewall (WAF) logs, see [Azure Web Application Firewall monitoring and logging](../web-application-firewall/afds/waf-front-door-monitor.md). ++## Activity logs ++Activity logs provide information about the management operations on your Azure Front Door resources. The logs include details about each write operation that was performed on an Azure Front Door resource, including when the operation occurred, who performed it, and what the operation was. ++> [!NOTE] +> Activity logs don't include read operations. They also might not include all operations that you perform by using either the Azure portal or classic management APIs. ++For more information, see [View your activity logs](./standard-premium/how-to-logs.md#view-your-activity-logs). ++## Next steps ++To enable and store your diagnostic logs, see [Configure Azure Front Door logs](./standard-premium/how-to-logs.md). ++ When using Azure Front Door (classic), you can monitor resources in the following ways: Metrics are a feature for certain Azure resources that allow you to view perform | BackendHealthPercentage | Backend Health Percentage | Percent | Backend</br>BackendPool | The percentage of successful health probes from Front Door to backends. | | WebApplicationFirewallRequestCount | Web Application Firewall Request Count | Count | PolicyName</br>RuleName</br>Action | The number of client requests processed by the application layer security of Front Door. | -> [!NOTE] -> Activity log doesn't include any GET operations or operations that you perform by using either the Azure portal or the original Management API. -> - ## <a name="activity-log"></a>Activity logs Activity logs provide information about the operations done on an Azure Front Door (classic) profile. They also determine the what, who, and when for any write operations (put, post, or delete) done against an Azure Front Door (classic) profile. >[!NOTE]->If a request to the the origin timeout, the value for HttpStatusCode is set to **0**. +>If a request to the the origin times out, the value for HttpStatusCode is set to **0**. Access activity logs in your Front Door or all the logs of your Azure resources in Azure Monitor. To view activity logs: 1. Select your Front Door instance.-2. Select **Activity log**. ++1. Select **Activity log**. :::image type="content" source="./media/front-door-diagnostics/activity-log.png" alt-text="Activity log"::: -3. Choose a filtering scope, and then select **Apply**. +1. Choose a filtering scope, and then select **Apply**. 
++> [!NOTE] +> Activity log doesn't include any GET operations or operations that you perform by using either the Azure portal or the original Management API. +> ## <a name="diagnostic-logging"></a>Diagnostic logs+ Diagnostic logs provide rich information about operations and errors that are important for auditing and troubleshooting. Diagnostic logs differ from activity logs. Activity logs provide insights into the operations done on Azure resources. Diagnostic logs provide insight into operations that your resource has done. For more information, see [Azure Monitor diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md). To configure diagnostic logs for your Azure Front Door (classic): 1. Select your Azure Front Door (classic) profile. -2. Choose **Diagnostic settings**. +1. Choose **Diagnostic settings**. -3. Select **Turn on diagnostics**. Archive diagnostic logs along with metrics to a storage account, stream them to an event hub, or send them to Azure Monitor logs. +1. Select **Turn on diagnostics**. Archive diagnostic logs along with metrics to a storage account, stream them to an event hub, or send them to Azure Monitor logs. Front Door currently provides diagnostic logs. Diagnostic logs provide individual API requests with each entry having the following schema: Front Door currently provides diagnostic logs. Diagnostic logs provide individua | ClientIp | The IP address of the client that made the request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the same. | | ClientPort | The IP port of the client that made the request. | | HttpMethod | HTTP method used by the request. |-| HttpStatusCode | The HTTP status code returned from the proxy. If a request to the the origin timeout, the value for HttpStatusCode is set to **0**.| +| HttpStatusCode | The HTTP status code returned from the proxy. If a request to the origin times out, the value for HttpStatusCode is set to **0**.| | HttpStatusDetails | Resulting status on the request. Meaning of this string value can be found at a Status reference table. | | HttpVersion | Type of the request or connection. | | POP | Short name of the edge where the request landed. | Front Door currently provides diagnostic logs. Diagnostic logs provide individua | TimeTaken | The length of time from first byte of request into Front Door to last byte of response out, in seconds. | | TrackingReference | The unique reference string that identifies a request served by Front Door, also sent as X-Azure-Ref header to the client. Required for searching details in the access logs for a specific request. | | UserAgent | The browser type that the client used. |-| ErrorInfo | This field contains the specific type of error for further troubleshooting. </br> Possible values include: </br> **NoError**: Indicates no error was found. </br> **CertificateError**: Generic SSL certificate error.</br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. </br> **ClientDisconnected**: Request failure because of client network connection. </br> **UnspecifiedClientError**: Generic client error. </br> **InvalidRequest**: Invalid request. It might occur because of malformed header, body, and URL. </br> **DNSFailure**: DNS Failure. </br> **DNSNameNotResolved**: The server name or address couldn't be resolved. </br> **OriginConnectionAborted**: The connection with the origin was stopped abruptly. </br> **OriginConnectionError**: Generic origin connection error. 
</br> **OriginConnectionRefused**: The connection with the origin wasn't able to established. </br> **OriginError**: Generic origin error. </br> **OriginInvalidResponse**: Origin returned an invalid or unrecognized response. </br> **OriginTimeout**: The timeout period for origin request expired. </br> **ResponseHeaderTooBig**: The origin returned too large of a response header. </br> **RestrictedIP**: The request was blocked because of restricted IP. </br> **SSLHandshakeError**: Unable to establish connection with origin because of SSL hand shake failure. </br> **UnspecifiedError**: An error occurred that didn’t fit in any of the errors in the table. </br> **SSLMismatchedSNI**:The request was invalid because the HTTP message header did not match the value presented in the TLS SNI extension during SSL/TLS connection setup.| +| ErrorInfo | This field contains the specific type of error for further troubleshooting. </br> Possible values include: </br> **NoError**: Indicates no error was found. </br> **CertificateError**: Generic SSL certificate error.</br> **CertificateNameCheckFailed**: The host name in the SSL certificate is invalid or doesn't match. </br> **ClientDisconnected**: Request failure because of client network connection. </br> **UnspecifiedClientError**: Generic client error. </br> **InvalidRequest**: Invalid request. It might occur because of malformed header, body, and URL. </br> **DNSFailure**: DNS Failure. </br> **DNSNameNotResolved**: The server name or address couldn't be resolved. </br> **OriginConnectionAborted**: The connection with the origin was stopped abruptly. </br> **OriginConnectionError**: Generic origin connection error. </br> **OriginConnectionRefused**: The connection with the origin wasn't able to established. </br> **OriginError**: Generic origin error. </br> **OriginInvalidResponse**: Origin returned an invalid or unrecognized response. </br> **OriginTimeout**: The timeout period for origin request expired. </br> **ResponseHeaderTooBig**: The origin returned too large of a response header. </br> **RestrictedIP**: The request was blocked because of restricted IP. </br> **SSLHandshakeError**: Unable to establish connection with origin because of SSL hand shake failure. </br> **UnspecifiedError**: An error occurred that didn’t fit in any of the errors in the table. </br> **SSLMismatchedSNI**:The request was invalid because the HTTP message header didn't match the value presented in the TLS SNI extension during SSL/TLS connection setup.| ### Sent to origin shield deprecation+ The raw log property **isSentToOriginShield** has been deprecated and replaced by a new field **isReceivedFromClient**. Use the new field if you're already using the deprecated field. Raw logs include logs generated from both CDN edge (child POP) and origin shield. Origin shield refers to parent nodes that are strategically located across the globe. These nodes communicate with origin servers and reduce the traffic load on origin. -For every request that goes to origin shield, there are 2-log entries: +For every request that goes to an origin shield, there are two log entries: * One for edge nodes * One for origin shield. If the value is false, then it means the request is responded from origin shield | where Category == "FrontdoorAccessLog" and isReceivedFromClient_b == true` > [!NOTE]-> For various routing configurations and traffic behaviors, some of the fields like backendHostname, cacheStatus, isReceivedFromClient, and POP field may respond with different values. 
The below table explains the different values these fields will have for various scenarios: +> For various routing configurations and traffic behaviors, some of the fields like backendHostname, cacheStatus, isReceivedFromClient, and POP field may respond with different values. The following table explains the different values these fields will have for various scenarios: | Scenarios | Count of log entries | POP | BackendHostname | isReceivedFromClient | CacheStatus | | - | - | - | - | - | - | After the chunk arrives at the Azure Front Door edge, it's cached and immediatel - Learn how to [create an Azure Front Door (classic) profile](quickstart-create-front-door.md) - Learn [how Azure Front Door (classic) works](front-door-routing-architecture.md)+ |
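The Byte Hit Ratio formula given in the metrics table above translates directly into code. A minimal sketch, assuming you've already obtained the edge and origin egress totals (for example, from your own reporting pipeline):

```python
def byte_hit_ratio_percent(egress_from_edge_bytes, egress_from_origin_bytes):
    """Byte Hit Ratio = (egress from edge - egress from origin) / egress from edge."""
    if egress_from_edge_bytes <= 0:
        return 0.0
    return 100.0 * (egress_from_edge_bytes - egress_from_origin_bytes) / egress_from_edge_bytes

# Example: 800 GB served from the edge while 200 GB was pulled from the origin -> 75%.
print(byte_hit_ratio_percent(800e9, 200e9))
```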
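Building on the `FrontDoorHealthProbeLog` sample above, the following sketch is an assumption about how you might consume an exported log file with a top-level `records` array (not an official tool). It groups failed probes by origin and result so you can quickly see why an origin is reported unhealthy:

```python
import json
from collections import Counter

# Assumed input: an exported log file shaped like the sample entry above,
# with a top-level "records" array of FrontDoorHealthProbeLog entries.
def summarize_probe_failures(path):
    failures = Counter()
    with open(path) as f:
        for record in json.load(f)["records"]:
            props = record["properties"]
            failures[(props["originName"], props["result"], props["httpStatusCode"])] += 1
    return failures

for (origin, result, status), count in summarize_probe_failures("health-probe-log.json").most_common():
    print(f"{origin}: {result} (HTTP {status}) x{count}")
```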
frontdoor | Front Door Rules Engine Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-rules-engine-actions.md | Use these settings to control how files get cached for requests that contain que | Cache behavior | Description | | -- | |-| Ignore query strings | Once the asset is cached, all ensuing requests ignore the query strings until the cached asset expires. | -| Cache every unique URL | Each request with a unique URL, including the query string, is treated as a unique asset with its own cache. | -| Ignore specified query strings | Request URL query strings listed in "Query parameters" setting are ignored for caching. | -| Include specified query strings | Request URL query strings listed in "Query parameters" setting are used for caching. | +| Ignore Query String | Once the asset is cached, all ensuing requests ignore the query strings until the cached asset expires. | +| Use Query String | Each request with a unique URL, including the query string, is treated as a unique asset with its own cache. | +| Ignore Specified Query Strings | Request URL query strings listed in "Query parameters" setting are ignored for caching. | +| Include Specified Query Strings | Request URL query strings listed in "Query parameters" setting are used for caching. | | Additional fields | Description | |
frontdoor | Scenario Storage Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/scenario-storage-blobs.md | As a content delivery network (CDN), Front Door caches the content at its global #### Authentication -Front Door is designed to be internet-facing, and this scenario is optimized for publicly available blobs. If you need to authenticate access to blobs, consider using [shared access signatures](../storage/common/storage-sas-overview.md), and ensure that you enable the [*Cache every unique URL* query string behavior](front-door-caching.md#query-string-behavior) to avoid Front Door from serving requests to unauthenticated clients. However, this approach might not make effective use of the Front Door cache, because each request with a different shared access signature must be sent to the origin separately. +Front Door is designed to be internet-facing, and this scenario is optimized for publicly available blobs. If you need to authenticate access to blobs, consider using [shared access signatures](../storage/common/storage-sas-overview.md), and ensure that you enable the [*Use Query String* query string behavior](front-door-caching.md#query-string-behavior) to prevent Front Door from serving requests to unauthenticated clients. However, this approach might not make effective use of the Front Door cache, because each request with a different shared access signature must be sent to the origin separately. #### Origin security |
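As a companion to the shared access signature guidance above, here's a hedged sketch that assumes the `azure-storage-blob` (v12) Python package; the account, key, container, blob, and Front Door endpoint names are placeholders. Because every freshly issued SAS token changes the query string, each such URL becomes its own cache entry under *Use Query String*, which is why the cache is less effective in this scenario:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholder values -- substitute your own storage account, key, and Front Door endpoint.
sas_token = generate_blob_sas(
    account_name="mystorageaccount",
    container_name="media",
    blob_name="video/intro.mp4",
    account_key="<storage-account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

# Each newly issued token produces a different query string, so under *Use Query String*
# each URL is cached as a separate asset and typically goes back to the origin.
url = f"https://myfrontdoor.z01.azurefd.net/media/video/intro.mp4?{sas_token}"
print(url)
```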
frontdoor | How To Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-logs.md | Title: 'Logs - Azure Front Door' -description: This article explains how Azure Front Door tracks and monitor your environment with logs. +description: This article explains how to configure Azure Front Door logs. Previously updated : 01/16/2023 Last updated : 02/23/2023 -# Azure Front Door logs +# Configure Azure Front Door logs -Azure Front Door provides different logging to help you track, monitor, and debug your Front Door. +Azure Front Door captures several types of logs. Logs can help you monitor your application, track requests, and debug your Front Door configuration. For more information about Azure Front Door's logs, see [Monitor metrics and logs in Azure Front Door](../front-door-diagnostics.md). -* Access logs have detailed information about every request that AFD receives and help you analyze and monitor access patterns, and debug issues. -* Activity logs provide visibility into the operations done on Azure resources. -* Health probe logs provide the logs for every failed probe to your origin. -* Web Application Firewall (WAF) logs provide detailed information of requests that gets logged through either detection or prevention mode of an Azure Front Door endpoint. A custom domain that gets configured with WAF can also be viewed through these logs. For more information on WAF logs, see [Azure Web Application Firewall monitoring and logging](../../web-application-firewall/afds/waf-front-door-monitor.md#waf-logs). --Access logs, health probe logs and WAF logs aren't enabled by default. Use the steps below to enable logging. Activity log entries are collected by default, and you can view them in the Azure portal. Logs can have delays up to a few minutes. --You have three options for storing your logs: --* **Storage account:** Storage accounts are best used for scenarios when logs are stored for a longer duration and reviewed when needed. -* **Event hubs:** Event hubs are a great option for integrating with other security information and event management (SIEM) tools or external data stores. For example: Splunk/DataDog/Sumo. -* **Azure Log Analytics:** Azure Log Analytics in Azure Monitor is best used for general real-time monitoring and analysis of Azure Front Door performance. +Access logs, health probe logs, and WAF logs aren't enabled by default. In this article, you'll learn how to enable diagnostic logs for your Azure Front Door profile. ## Configure logs You have three options for storing your logs: 1. Select the **Destination details**. Destination options are: * **Send to Log Analytics**- * Select the *Subscription* and *Log Analytics workspace*. + * Azure Log Analytics in Azure Monitor is best used for general real-time monitoring and analysis of Azure Front Door performance. + * Select the *Subscription* and *Log Analytics workspace*. * **Archive to a storage account**- * Select the *Subscription* and the *Storage Account*. and set **Retention (days)**. + * Storage accounts are best used for scenarios when logs are stored for a longer duration and are reviewed when needed. + * Select the *Subscription* and the *Storage Account*. and set **Retention (days)**. * **Stream to an event hub**- * Select the *Subscription, Event hub namespace, Event hub name (optional)*, and *Event hub policy name*. 
+ * Event hubs are a great option for integrating with other security information and event management (SIEM) tools or external data stores, such as Splunk, DataDog, or Sumo. + * Select the *Subscription, Event hub namespace, Event hub name (optional)*, and *Event hub policy name*. ++ > [!TIP] + > Most Azure customers use Log Analytics. :::image type="content" source="../media/how-to-logging/front-door-logging-2.png" alt-text="Screenshot of diagnostic settings page."::: 1. Click on **Save**. -## Access log --Azure Front Door currently provides individual API requests with each entry having the following schema and logged in JSON format as shown below. --| Property | Description | -|-|-| -| TrackingReference | The unique reference string that identifies a request served by AFD, also sent as X-Azure-Ref header to the client. Required for searching details in the access logs for a specific request. | -| Time | The date and time when the AFD edge delivered requested contents to client (in UTC). | -| HttpMethod | HTTP method used by the request: DELETE, GET, HEAD, OPTIONS, PATCH, POST, or PUT. | -| HttpVersion | The HTTP version that the viewer specified in the request. | -| RequestUri | URI of the received request. This field is a full scheme, port, domain, path, and query string | -| HostName | The host name in the request from client. If you enable custom domains and have wildcard domain (*.contoso.com), hostname is a.contoso.com. if you use Azure Front Door domain (contoso.azurefd.net), hostname is contoso.azurefd.net. | -| RequestBytes | The size of the HTTP request message in bytes, including the request headers and the request body. The number of bytes of data that the viewer included in the request, including headers. | -| ResponseBytes | Bytes sent by the backend server as the response. | -| UserAgent | The browser type that the client used. | -| ClientIp | The IP address of the client that made the original request. If there was an X-Forwarded-For header in the request, then the Client IP is picked from the same. | -| SocketIp | The IP address of the direct connection to AFD edge. If the client used an HTTP proxy or a load balancer to send the request, the value of SocketIp is the IP address of the proxy or load balancer. | -| timeTaken | The length of time from the time AFD edge server receives a client's request to the time that AFD sends the last byte of response to client, in seconds. This field doesn't take into account network latency and TCP buffering. | -| RequestProtocol | The protocol that the client specified in the request: HTTP, HTTPS. | -| SecurityProtocol | The TLS/SSL protocol version used by the request or null if no encryption. Possible values include: SSLv3, TLSv1, TLSv1.1, TLSv1.2 | -| SecurityCipher | When the value for Request Protocol is HTTPS, this field indicates the TLS/SSL cipher negotiated by the client and AFD for encryption. | -| Endpoint | The domain name of AFD endpoint, for example, contoso.z01.azurefd.net | -| HttpStatusCode | The HTTP status code returned from Azure Front Door. If a request to the origin times out, the value for HttpStatusCode is set to **0**.| -| Pop | The edge pop, which responded to the user request. | -| Cache Status | Provides the status code of how the request gets handled by the CDN service when it comes to caching. 
Possible values are:<ul><li>`HIT` and `REMOTE_HIT`: The HTTP request was served from the Front Door cache.</li><li>`MISS`: The HTTP request was served from the origin.</li><li> `PARTIAL_HIT`: Some of the bytes from a request were served from the Front Door cache, and some of the bytes were served from origin. This status occurs in [object chunking](../front-door-caching.md#delivery-of-large-files) scenarios.</li><li>`CACHE_NOCONFIG`: Request was forwarded without caching settings, including bypass scenario.</li><li>`PRIVATE_NOSTORE`: No cache configured in caching settings by customers.</li><li>`N/A`: The request was denied by a signed URL or the rules engine.</li></ul> | -| MatchedRulesSetName | The names of the rules that were processed. | -| RouteName | The name of the route that the request matched. | -| ClientPort | The IP port of the client that made the request. | -| Referrer | The URL of the site that originated the request. | -| TimeToFirstByte | The length of time in seconds from AFD receives the request to the time the first byte gets sent to client, as measured on Azure Front Door. This property doesn't measure the client data. | -| ErrorInfo | This field provides detailed info of the error token for each response. Possible values are:<ul><li>`NoError`: Indicates no error was found.</li><li>`CertificateError`: Generic SSL certificate error.</li><li>`CertificateNameCheckFailed`: The host name in the SSL certificate is invalid or doesn't match.</li><li>`ClientDisconnected`: Request failure because of client network connection.</li><li>`ClientGeoBlocked`: The client was blocked due geographical location of the IP.</li><li>`UnspecifiedClientError`: Generic client error.</li><li>`InvalidRequest`: Invalid request. It might occur because of malformed header, body, and URL.</li><li>`DNSFailure`: DNS Failure.</li><li>`DNSTimeout`: The DNS query to resolve the backend timed out.</li><li>`DNSNameNotResolved`: The server name or address couldn't be resolved.</li><li>`OriginConnectionAborted`: The connection with the origin was disconnected abnormally.</li><li>`OriginConnectionError`: Generic origin connection error.</li><li>`OriginConnectionRefused`: The connection with the origin wasn't established.</li><li>`OriginError`: Generic origin error.</li><li>`OriginInvalidRequest`: An invalid request was sent to the origin.</li><li>`ResponseHeaderTooBig`: The origin returned a too large of a response header.</li><li>`OriginInvalidResponse`:` Origin returned an invalid or unrecognized response.</li><li>`OriginTimeout`: The timeout period for origin request expired.</li><li>`ResponseHeaderTooBig`: The origin returned a too large of a response header.</li><li>`RestrictedIP`: The request was blocked because of restricted IP.</li><li>`SSLHandshakeError`: Unable to establish connection with origin because of SSL hand shake failure.</li><li>`SSLInvalidRootCA`: The RootCA was invalid.</li><li>`SSLInvalidCipher`: Cipher was invalid for which the HTTPS connection was established.</li><li>`OriginConnectionAborted`: The connection with the origin was disconnected abnormally.</li><li>`OriginConnectionRefused`: The connection with the origin wasn't established.</li><li>`UnspecifiedError`: An error occurred that didn’t fit in any of the errors in the table.</li></ul> | -| OriginURL | The full URL of the origin where requests are being sent. Composed of the scheme, host header, port, path, and query string. <br> **URL rewrite**: If there's a URL rewrite rule in Rule Set, path refers to rewritten path. 
<br> **Cache on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files). | -| OriginIP | The origin IP that served the request. <br> **Cache hit on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files) | -| OriginName| The full DNS name (hostname in origin URL) to the origin. <br> **Cache hit on edge POP** If it's a cache hit on edge POP, the origin is N/A. <br> **Large request** If the requested content is large with multiple chunked requests going back to the origin, this field will correspond to the first request to the origin. For more information, see [Caching with Azure Front Door](../front-door-caching.md#delivery-of-large-files) | --## Health Probe Log --Health probe logs provide logging for every failed probe to help you diagnose your origin. The logs will provide you information that you can use to bring the origin back to service. Some scenarios this log can be useful for are: --* You noticed Azure Front Door traffic was sent to some of the origins. For example, only three out of four origins receiving traffic. You want to know if the origins are receiving probes and if not the reason for the failure.  --* You noticed the origin health % is lower than expected and want to know which origin failed and the reason of the failure. --### Health probe log properties --Each health probe log has the following schema. --| Property | Description | -| | | -| HealthProbeId | A unique ID to identify the request. | -| Time | Probe complete time | -| HttpMethod | HTTP method used by the health probe request. Values include GET and HEAD, based on health probe configurations. | -| Result | Status of health probe to origin, value includes success, and other error text. | -| HttpStatusCode | The HTTP status code returned from the origin. | -| ProbeURL (target) | The full URL of the origin where requests are being sent. Composed of the scheme, host header, path, and query string. | -| OriginName | The origin where requests are being sent. This field helps locate origins of interest if origin is configured to FDQN. | -| POP | The edge pop, which sent out the probe request. | -| Origin IP | Target origin IP. This field is useful in locating origins of interest if you configure origin using FDQN. | -| TotalLatency | The time from AFDX edge sends the request to origin to the time origin sends the last response to AFDX edge. | -| ConnectionLatency| Duration Time spent on setting up the TCP connection to send the HTTP Probe request to origin. | -| DNSResolution Latency | Duration Time spent on DNS resolution if the origin is configured to be an FDQN instead of IP. N/A if the origin is configured to IP. | --The following example shows a health probe log entry, in JSON format. 
--```json -{ - "records": [ - { - "time": "2021-02-02T07:15:37.3640748Z", - "resourceId": "/SUBSCRIPTIONS/27CAFCA8-B9A4-4264-B399-45D0C9CCA1AB/RESOURCEGROUPS/AFDXPRIVATEPREVIEW/PROVIDERS/MICROSOFT.CDN/PROFILES/AFDXPRIVATEPREVIEW-JESSIE", - "category": "FrontDoorHealthProbeLog", - "operationName": "Microsoft.Cdn/Profiles/FrontDoorHealthProbeLog/Write", - "properties": { - "healthProbeId": "9642AEA07BA64675A0A7AD214ACF746E", - "POP": "MAA", - "httpVerb": "HEAD", - "result": "OriginError", - "httpStatusCode": "400", - "probeURL": "http://afdxprivatepreview.blob.core.windows.net:80/", - "originName": "afdxprivatepreview.blob.core.windows.net", - "originIP": "52.239.224.228:80", - "totalLatencyMilliseconds": "141", - "connectionLatencyMilliseconds": "68", - "DNSLatencyMicroseconds": "1814" - } - } - ] -} -``` --## Activity logs --Activity logs provide information about the operations done on Azure Front Door Standard/Premium. The logs include details about what, who and when a write operation was done on Azure Front Door. --> [!NOTE] -> Activity logs don't include GET operations. They also don't include operations that you perform by using either the Azure portal or the original Management API. --Access activity logs in your Front Door or all the logs of your Azure resources in Azure Monitor. +## View your activity logs To view activity logs: |
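Once diagnostic logs are flowing to a Log Analytics workspace, you can also query them programmatically. The sketch below assumes the `azure-monitor-query` and `azure-identity` packages; the workspace ID is a placeholder, and the table, category, and column names are assumptions that depend on how your diagnostic setting routes the logs, so verify them in your workspace first:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Assumptions: logs land in the AzureDiagnostics table with the category and column
# names shown below. Both can vary by profile and destination -- check your workspace.
query = """
AzureDiagnostics
| where Category == "FrontDoorAccessLog"
| summarize requests = count() by httpStatusCode_s
| order by requests desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(hours=24),
)

for table in response.tables:
    for row in table.rows:
        print(dict(zip(table.columns, row)))
```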
frontdoor | How To Monitor Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-monitor-metrics.md | -# Real-time Monitoring in Azure Front Door +# Real-time monitoring in Azure Front Door -Azure Front Door is integrated with Azure Monitor and has 11 metrics to help monitor Azure Front Door in real-time to track, troubleshoot, and debug issues. +Azure Front Door is integrated with Azure Monitor. You can use metrics in real time to measure traffic to your application, and to track, troubleshoot, and debug issues. -Azure Front Door measures and sends its metrics in 60-second intervals. The metrics can take up to 3 mins to appear in the portal. Metrics can be displayed in charts or grid of your choice and are accessible via portal, PowerShell, CLI, and API. For more information, seeΓÇ»[Azure Monitor metrics](../../azure-monitor/essentials/data-platform-metrics.md). +You can also configure alerts for each metric such as a threshold for 4XXErrorRate or 5XXErrorRate. When the error rate exceeds the threshold, it will trigger an alert as configured. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/alerts/alerts-metric.md). -The default metrics are free of charge. You can enable additional metrics for an extra cost. +## Access metrics in the Azure portal -You can configure alerts for each metric such as a threshold for 4XXErrorRate or 5XXErrorRate. When the error rate exceeds the threshold, it will trigger an alert as configured. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/alerts/alerts-metric.md). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile. -## Metrics supported in Azure Front Door +1. Under **Monitoring**, select **Metrics**. -| Metrics | Description | Dimensions | -| - | - | - | -| Bytes Hit ratio | The percentage of egress from AFD cache, computed against the total egress.ΓÇ»</br> **Byte Hit Ratio** = (egress from edge - egress from origin)/egress from edge. </br> **Scenarios excluded in bytes hit ratio calculation**:</br> 1. You explicitly configure no cache either through Rules Engine or Query String caching behavior. </br> 2. You explicitly configure cache-control directive with no-store or private cache. </br>3. Byte hit ratio can be low if most of the traffic is forwarded to origin rather than served from caching based on your configurations or scenarios. | Endpoint | -| RequestCount | The number of client requests served by CDN. | Endpoint, client country, client region, HTTP status, HTTP status group | -| ResponseSize | The number of bytes sent as responses from Front Door to clients. |Endpoint, client country, client region, HTTP status, HTTP status group | -| TotalLatency | The total time from the client request received by CDN **until the last response byte send from CDN to client**. |Endpoint, client country, client region, HTTP status, HTTP status group | -| RequestSize | The number of bytes sent as requests from clients to AFD. | Endpoint, client country, client region, HTTP status, HTTP status group | -| 4XX % ErrorRate | The percentage of all the client requests for which the response status code is 4XX. | Endpoint, Client Country, Client Region | -| 5XX % ErrorRate | The percentage of all the client requests for which the response status code is 5XX. 
| Endpoint, Client Country, Client Region | -| OriginRequestCount | The number of requests sent from AFD to origin | Endpoint, Origin, HTTP status, HTTP status group | -| OriginLatency | The time calculated from when the request was sent by AFD edge to the backend until AFD received the last response byte from the backend. | Endpoint, Origin | -| OriginHealth% | The percentage of successful health probes from AFD to origin.| Origin, Origin Group | -| WAF request count | Matched WAF request. | Action, rule name, Policy Name | --> [!NOTE] -> If a request to the the origin timeout, the value for HttpStatusCode dimension will be **0**. -> ---## Access Metrics in Azure portal --1. From the Azure portal menu, select **All Resources** >> **\<your-AFD-profile>**. --2. Under **Monitoring**, select **Metrics**: --3. In **Metrics**, select the metric to add: +1. In **Metrics**, select the metric to add: :::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-1.png" alt-text="Screenshot of metrics page." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-1-expanded.png"::: -4. Select **Add filter** to add a filter: +1. Select **Add filter** to add a filter: :::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-2.png" alt-text="Screenshot of adding filters to metrics." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-2-expanded.png"::: -5. Select **Apply splitting** to split data by different dimensions: +1. Select **Apply splitting** to split data by different dimensions: :::image type="content" source="../media/how-to-monitoring-metrics/front-door-metrics-4.png" alt-text="Screenshot of adding dimensions to metrics." lightbox="../media/how-to-monitoring-metrics/front-door-metrics-4-expanded.png"::: -6. Select **New chart** to add a new chart: +1. Select **New chart** to add a new chart: ++## Configure alerts in the Azure portal -## Configure Alerts in Azure portal +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Front Door Standard/Premium profile. -1. Set up alerts on Azure Front Door Standard/Premium (Preview) by selecting **Monitoring** >> **Alerts**. +1. Under **Monitoring**, select **Alerts**. 1. Select **New alert rule** for metrics listed in Metrics section. |
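As a companion to the portal steps tracked above, metric values can also be pulled with the Azure CLI. This is a hedged sketch, not part of the documented change: the profile name and resource group are placeholders, and `RequestCount` is taken from the metrics table in the row above (confirm metric names and supported aggregations for your resource with `az monitor metrics list-definitions`).

```bash
# Sketch (placeholder names): query the RequestCount metric for a Front Door profile.
profileId=$(az afd profile show \
  --resource-group myResourceGroup \
  --profile-name myFrontDoorProfile \
  --query id --output tsv)

# Confirm available metrics and aggregations first.
az monitor metrics list-definitions --resource "$profileId" --output table

# Pull RequestCount totals at one-minute granularity.
az monitor metrics list \
  --resource "$profileId" \
  --metric RequestCount \
  --interval PT1M \
  --aggregation Total
```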
frontdoor | How To Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-reports.md | -Azure Front Door analytics reports provide a built-in and all-around view of how your Azure Front Door behaves along with associated Web Application Firewall metrics. You can also take advantage of Access Logs to do further troubleshooting and debugging. Azure Front Door Analytics reports include traffic reports and security reports. +Azure Front Door analytics reports provide a built-in, all-around view of how your Azure Front Door profile behaves, along with associated web application firewall (WAF) metrics. You can also take advantage of [Azure Front Door's logs](../front-door-diagnostics.md?pivot=front-door-standard-premium) to do further troubleshooting and debugging. -| Reports | Details | +The built-in reports include information about your traffic and your application's security. Azure Front Door provides traffic reports and security reports. ++| Traffic report | Details | |||-| Overview of key metrics | Shows overall data that got sent from Azure Front Door edges to clients<br/>- Peak bandwidth<br/>- Requests <br/>- Cache hit ratio<br/> - Total latency<br/>- 5XX error rate | -| Traffic by Domain | - Provides an overview of all the domains under the profile<br/>- Breakdown of data transferred out from AFD edge to client<br/>- Total requests<br/>- 3XX/4XX/5XX response code by domains | -| Traffic by Location | - Shows a map view of request and usage by top countries/regions<br/>- Trend view of top countries/regions | -| Usage | - Displays data transfer out from Azure Front Door edge to clients<br/>- Data transfer out from origin to AFD edge<br/>- Bandwidth from AFD edge to clients<br/>- Bandwidth from origin to AFD edge<br/>- Requests<br/>- Total latency<br/>- Request count trend by HTTP status code | -| Caching | - Shows cache hit ratio by request count<br/>- Trend view of hit and miss requests | -| Top URL | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the most requested 50 assets. | -| Top Referrer | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the top 50 referrers that generate traffic. | -| Top User Agent | - Shows request count <br/>- Data transferred <br/>- Cache hit ratio <br/>- Response status code distribution for the top 50 user agents that were used to request content. 
| --| Security reports | Details | +| [Key metrics in all reports](#key-metrics-included-in-all-reports) | Shows overall data that were sent from Azure Front Door edge points of presence (PoPs) to clients, including:<ul><li>Peak bandwidth</li><li>Requests</li><li>Cache hit ratio</li><li>Total latency</li><li>5XX error rate</li></ul> | +| [Traffic by domain](#traffic-by-domain-report) | Provides an overview of all the domains within your Azure Front Door profile:<ul><li>Breakdown of data transferred out from the Azure Front Door edge to the client</li><li>Total requests</li><li>3XX/4XX/5XX response code by domains</li></ul> | +| [Traffic by location](#traffic-by-location-report) | <ul><li>Shows a map view of request and usage by top countries/regions<br/></li><li>Trend view of top countries/regions</li></ul> | +| [Usage](#usage-report) | <ul><li>Data transfer out from Azure Front Door edge to clients<br/></li><li>Data transfer out from origin to Azure Front Door edge<br/></li><li>Bandwidth from Azure Front Door edge to clients<br/></li><li>Bandwidth from origin to Azure Front Door edge<br/></li><li>Requests<br/></li><li>Total latency<br/></li><li>Request count trend by HTTP status code</li></ul> | +| [Caching](#caching-report) | <ul><li>Shows cache hit ratio by request count<br/></li><li>Trend view of hit and miss requests</li></ul> | +| [Top URL](#top-url-report) | <ul><li>Shows request count <br/></li><li>Data transferred <br/></li><li>Cache hit ratio <br/></li><li>Response status code distribution for the most requested 50 assets</li></ul> | +| [Top referrer](#top-referrer-report) | <ul><li>Shows request count <br/></li><li>Data transferred <br/></li><li>Cache hit ratio <br/></li><li>Response status code distribution for the top 50 referrers that generate traffic</li></ul> | +| [Top user agent](#top-user-agent-report) | <ul><li>Shows request count <br/></li><li>Data transferred <br/></li><li>Cache hit ratio <br/></li><li>Response status code distribution for the top 50 user agents that were used to request content</li></ul> | ++| Security report | Details | |||-| Overview of key metrics | - Shows matched WAF rules<br/>- Matched OWASP rules<br/>- Matched BOT rules<br/>- Matched custom rules | -| Metrics by dimensions | - Breakdown of matched WAF rules trend by action<br/>- Doughnut chart of events by Rule Set Type and event by rule group<br/>- Break down list of top events by rule ID, countries/regions, IP address, URL, and user agent | +| Overview of key metrics | <ul><li>Shows matched WAF rules<br/></li><li>Matched OWASP rules<br/></li><li>Matched bot protection rules<br/></li><li>Matched custom rules</li></ul> | +| Metrics by dimensions | <ul><li>Breakdown of matched WAF rules trend by action<br/></li><li>Doughnut chart of events by Rule Set Type and event by rule group<br/></li><li>Break down list of top events by rule ID, countries/regions, IP address, URL, and user agent</li></ul> | > [!NOTE]-> Security reports is only available with Azure Front Door Premium tier. +> Security reports are only available when you use the Azure Front Door premium tier. ++Reports are free of charge. Most reports are based on access log data, but you don't need to enable access logs or make any configuration changes to use the reports. ++## How to access reports -Most of the reports are based on access logs and are offered free of charge to customers on Azure Front Door. Customer doesnΓÇÖt have to enable access logs or do any configuration to view these reports. 
Reports are accessible through portal and API. CSV download is also supported. +Reports are accessible through the Azure portal and through the Azure Resource Manager API. You can also [download reports as comma-separated values (CSV) files](#export-reports-in-csv-format). Reports support any selected date range from the previous 90 days. With data points of every 5 mins, every hour, or every day based on the date range selected. Normally, you can view data with delay of within an hour and occasionally with delay of up to a few hours. -## Access Reports using the Azure portal +### Access reports by using the Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com) and select your Azure Front Door Standard/Premium profile. -1. In the navigation pane, select **Reports or Security** under *Analytics*. +1. In the navigation pane, select **Reports** or **Security** under *Analytics*. :::image type="content" source="../media/how-to-reports/front-door-reports-landing-page.png" alt-text="Screenshot of Reports landing page"::: -1. There are seven tabs for different dimensions, select the dimension of interest. +1. Select the report you want to view. * Traffic by domain * Usage Reports support any selected date range from the previous 90 days. With data poi * Top referrer * Top user agent -1. After choosing the dimension, you can select different filters. +1. After choosing the report, you can select different filters. - 1. **Show data for** - Select the date range for which you want to view traffic by domain. Available ranges are: + - **Show data for:** Select the date range for which you want to view traffic by domain. Available ranges are: * Last 24 hours * Last 7 days Reports support any selected date range from the previous 90 days. With data poi * Last month * Custom date - By default, data is shown for last seven days. For tabs with line charts, the data granularity goes with the date ranges you selected as the default behavior. + By default, data is shown for the last seven days. For reports with line charts, the data granularity goes with the date ranges you selected as the default behavior. - * 5 minutes - one data point every 5 minutes for date ranges less than or equal 24 hours. - * By hour - one data every hour for date ranges between 24 hours to 30 days - * By day - one data per day for date ranges bigger than 30 days. + * 5 minutes - one data point every 5 minutes for date ranges less than or equal to 24 hours. This granularity level can be used for date ranges that are 14 days or shorter. + * By hour - one data point every hour for date ranges between 24 hours and 30 days. + * By day - one data point per day for date ranges longer than 30 days. - You can always use Aggregation to change the default aggregation granularity. Note: 5 minutes doesn't work for data range longer than 14 days. + Select **Aggregation** to change the default aggregation granularity. - 1. **Location** - Select single or multiple client locations by countries/regions. Countries/regions are grouped into six regions: North America, Asia, Europe, Africa, Oceania, and South America. Refer to [countries/regions mapping](https://en.wikipedia.org/wiki/Subregion). By default, all countries are selected. + - **Location:** Select one or more countries/regions to filter by the client locations. Countries/regions are grouped into six regions: North America, Asia, Europe, Africa, Oceania, and South America. Refer to [countries/regions mapping](https://en.wikipedia.org/wiki/Subregion). 
By default, all countries are selected. :::image type="content" source="../media/how-to-reports/front-door-reports-dimension-locations.png" alt-text="Screenshot of Reports for location dimension."::: - 1. **Protocol** - Select either HTTP or HTTPS to view traffic data. + - **Protocol:** Select either HTTP or HTTPS to view traffic data for the selected protocol. :::image type="content" source="../media/how-to-reports/front-door-reports-dimension-protocol.png" alt-text="Screenshot of Reports for protocol dimension."::: - 1. **Domains** - Select single or multi Endpoints or Custom Domains. By default, all endpoints and custom domains are selected. + - **Domains** - Select one or more endpoints or custom domains. By default, all endpoints and custom domains are selected. - * If you delete an endpoint or a custom domain in one profile and then recreate the same endpoint or domain in another profile. The endpoint will be considered a second endpoint. - * If you're viewing reports by custom domain - when you delete one custom domain and bind it to a different endpoint. They'll be treated as one custom domain. If view by endpoint - they'll be treated as separate items. + * If you delete an endpoint or a custom domain in one profile and then recreate the same endpoint or domain in another profile, the report counts the new endpoint as a second endpoint. + * If you delete a custom domain and bind it to a different endpoint, the behavior depends on how you view the report. If you view the report by custom domain then they'll be treated as one custom domain. If you view the report by endpoint, they'll be treated as separate items. :::image type="content" source="../media/how-to-reports/front-door-reports-dimension-domain.png" alt-text="Screenshot of Reports for domain dimension."::: Reports support any selected date range from the previous 90 days. With data poi :::image type="content" source="../media/how-to-reports/front-door-reports-download-csv.png" alt-text="Screenshot of download csv file for Reports."::: -### Key metrics for all reports +### Export reports in CSV format -| Metric | Description | +You can download any of the Azure Front Door reports as a CSV file. Every CSV report includes some general information and the information is available in all CSV files: ++| Value | Description | |||-| Data Transferred | Shows data transferred from AFD edge POPs to client for the selected time frame, client locations, domains, and protocols. | -| Peak Bandwidth | Peak bandwidth usage in bits per seconds from Azure Front Door edge POPs to client for the selected time frame, client locations, domains, and protocols. | -| Total Requests | The number of requests that AFD edge POPs responded to client for the selected time frame, client locations, domains, and protocols. | -| Cache Hit Ratio | The percentage of all the cacheable requests for which AFD served the contents from its edge caches for the selected time frame, client locations, domains, and protocols. | -| 5XX Error Rate | The percentage of requests for which the HTTP status code to client was a 5XX for the selected time frame, client locations, domains, and protocols. | -| Total Latency | Average latency of all the requests for the selected time frame, client locations, domains, and protocols. The latency for each request is measured as the total time of when the client request gets received by Azure Front Door until the last response byte sent from Azure Front Door to client. | +| Report | The name of the report. 
| +| Domains | The list of the endpoints or custom domains for the report. | +| StartDateUTC | The start of the date range for which you generated the report, in Coordinated Universal Time (UTC). | +| EndDateUTC | The end of the date range for which you generated the report, in Coordinated Universal Time (UTC). | +| GeneratedTimeUTC | The date and time when you generated the report, in Coordinated Universal Time (UTC). | +| Location | The list of the countries/regions where the client requests originated. The value is **All** by default. Not applicable to the *Security* report. | +| Protocol | The protocol of the request, which is either HTTP or HTTPS. Not applicable to *Top URL*, *Traffic by user agent*, and *Security* reports. | +| Aggregation | The granularity of data aggregation in each row, every 5 minutes, every hour, and every day. Not applicable to *Traffic by domain*, *Top URL*, *Traffic by user agent* reports, and *Security* reports. | -## Traffic by Domain +Each report also includes its own variables. Select a report to view the variables that the report includes. -Traffic by Domain provides a grid view of all the domains under this Azure Front Door profile. In this report you can view: -* Requests -* Data transferred out from Azure Front Door to client -* Requests with status code (3XX, 4Xx and 5XX) of each domain +# [Traffic by domain](#tab/traffic-by-domain) -Domains include Endpoint and Custom Domains, as explained in the Accessing Report session. +The *Traffic by domain* report includes these fields: -You can go to other tabs to investigate further or view access log for more information if you find the metrics below your expectation. +* Domain +* Total Request +* Cache Hit Ratio +* 3XX Requests +* 4XX Requests +* 5XX Requests +* ByteTransferredFromEdgeToClient +# [Traffic by location](#tab/traffic-by-location) +The *Traffic by location* report includes these fields: -## Usage +* Location +* TotalRequests +* Request% +* BytesTransferredFromEdgeToClient -This report shows the trends of traffic and response status code by different dimensions, including: +# [Usage](#tab/usage) -* Data Transferred from edge to client and from origin to edge in line chart. +There are three reports in the usage report's CSV file: one for HTTP protocol, one for HTTPS protocol, and one for HTTP status codes. -* Data Transferred from edge to client by protocol in line chart. +The *Usage* report's HTTP and HTTPS data sets include these fields: -* Number of requests from edge to clients in line chart. +* Time +* Protocol +* DataTransferred(bytes) +* TotalRequest +* bpsFromEdgeToClient +* 2XXRequest +* 3XXRequest +* 4XXRequest +* 5XXRequest -* Number of requests from edge to clients by protocol, HTTP and HTTPS, in line chart. +The *Usage* report's HTTP status codes data set include these fields: -* Bandwidth from edge to client in line chart. +* Time +* DataTransferred(bytes) +* TotalRequest +* bpsFromEdgeToClient +* 2XXRequest +* 3XXRequest +* 4XXRequest +* 5XXRequest -* Total latency, which measures the total time from the client request received by Front Door until the last response byte sent from Front Door to client. +# [Caching](#tab/caching) -* Number of requests from edge to clients by HTTP status code, in line chart. Every request generates an HTTP status code. HTTP status code appears in HTTPStatusCode in Raw Log. The status code describes how CDN edge handled the request. For example, a 2xx status code indicates that the request got successfully served to a client. 
While a 4xx status code indicates that an error occurred. For more information about HTTP status codes, see List of HTTP status codes. +The *Caching* report includes these fields: -* Number of requests from the edge to clients by HTTP status code. Percentage of requests by HTTP status code among all requests in grid. +* Time +* CacheHitRatio +* HitRequests +* MissRequests +# [Top URL](#tab/top-url) -## Traffic by Location +The *Top URL* report includes these fields: -This report displays the top 50 locations by the countries/regions of the visitors that access your asset the most. The report also provides a breakdown of metrics by countries/regions and gives you an overall view of countries/regions - where the most traffic gets generated. Lastly you can see which countries/regions is having higher cache hit ratio or 4XX/5XX error codes. +* URL +* TotalRequests +* Request% +* DataTransferred(bytes) +* DataTransferred% +# [Top user agent](#tab/topuser-agent) -The following are included in the reports: +The *Top user agent* report includes these fields: -* A world map view of the top 50 countries/regions by data transferred out or requests of your choice. -* Two line charts trend view of the top five countries/regions by data transferred out and requests of your choice. -* A grid of the top countries/regions with corresponding data transferred out from AFD to clients, data transferred out % of all countries/regions, requests, request % among all countries/regions, cache hit ratio, 4XX response code and 5XX response code. +* UserAgent +* TotalRequests +* Request% +* DataTransferred(bytes) +* DataTransferred% -## Caching +# [Security](#tab/security) -Caching reports provides a chart view of cache hits/misses and cache hit ratio based on requests. These key metrics explain how CDN is caching contents since the fastest performance results from cache hits. You can optimize data delivery speeds by minimizing cache misses. This report includes: +The *Security* report includes seven tables: -* Cache hit and miss count trend, in line chart. +* Time +* Rule ID +* Countries/regions +* IP address +* URL +* Hostname +* User agent -* Cache hit ratio in line chart. +All of the tables in the *Security* report include the following fields: -Cache Hits/Misses describe the request number cache hits and cache misses for client requests. +* BlockedRequests +* AllowedRequests +* LoggedRequests +* RedirectedRequests +* OWASPRuleRequests +* CustomRuleRequests +* BotRequests -* Hits: the client requests that are served directly from Azure CDN edge servers. Refers to those requests whose values for CacheStatus in raw logs are HIT, PARTIAL_HIT, or REMOTE HIT. + -* Miss: the client requests that are served by Azure CDN edge servers fetching contents from origin. Refers to those requests whose values for the field CacheStatus in raw logs are MISS. +## Key metrics included in all reports -**Cache hit ratio** describes the percentage of cached requests that are served from edge directly. The formula of cache hit ratio is: `(PARTIAL_HIT +REMOTE_HIT+HIT/ (HIT + MISS + PARTIAL_HIT + REMOTE_HIT)*100%`. +The following metrics are used within the reports. -This report takes caching scenarios into consideration and requests that met the following requirements are taken into calculation. +| Metric | Description | +||| +| Data Transferred | Shows data transferred from Azure Front Door edge PoPs to client for the selected time frame, client locations, domains, and protocols. 
| +| Peak Bandwidth | Peak bandwidth usage in bits per seconds from Azure Front Door edge PoPs to clients for the selected time frame, client locations, domains, and protocols. | +| Total Requests | The number of requests that Azure Front Door edge PoPs responded to clients for the selected time frame, client locations, domains, and protocols. | +| Cache Hit Ratio | The percentage of all the cacheable requests for which Azure Front Door served the contents from its edge caches for the selected time frame, client locations, domains, and protocols. | +| 5XX Error Rate | The percentage of requests for which the HTTP status code to client was a 5XX for the selected time frame, client locations, domains, and protocols. | +| Total Latency | Average latency of all the requests for the selected time frame, client locations, domains, and protocols. The latency for each request is measured as the total time of when the client request gets received by Azure Front Door until the last response byte sent from Azure Front Door to client. | -* The requested content was cached on a Front Door PoP. +## Traffic by domain report -* Partial cached contents for object chunking. +The **traffic by domain** report provides a grid view of all the domains under this Azure Front Door profile. -It excludes all of the following cases: -* Requests that are denied because of Rules Set. +In this report you can view: -* Requests that contain matching Rules Set that has been set to disabled cache. +* Request counts +* Data transferred out from Azure Front Door to client +* Requests with status code (3XX, 4XX and 5XX) of each domain -* Requests that are blocked by WAF. +Domains include endpoint domains and custom domains. -* Origin response headers indicate that they shouldn't be cached. For example, Cache-Control: private, Cache-Control: no-cache, or Pragma: no-cache headers will prevent an asset from being cached. +You can go to other tabs to investigate further or view access log for more information if you find the metrics below your expectation. +## Usage report -## Top URLs +The **usage report** shows the trends of traffic and response status code by various dimensions. -Top URLs allow you to view the amount of traffic incurred over a particular endpoint or custom domain. You'll see data for the most requested 50 assets during any period in the past 90 days. Popular URLs will be displayed with the following values. User can sort URLs by request count, request %, data transferred and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected. URL refers to the value of RequestUri in access log. +The dimensions included in the usage report are: -* URL, refers to the full path of the requested asset in the format of `http(s)://contoso.com/https://docsupdatetracker.net/index.html/images/example.jpg`. -* Request counts. -* Request % of the total requests served by Azure Front Door. -* Data transferred. -* Data transferred %. -* Cache Hit Ratio % -* Requests with response code as 4XX -* Requests with response code as 5XX +* Data transferred from edge to client and from origin to edge, in a line chart. +* Data transferred from edge to client by protocol, in a line chart. +* Number of requests from edge to clients, in a line chart. +* Number of requests from edge to clients by protocol (HTTP and HTTPS), in a line chart. +* Bandwidth from edge to client, in a line chart. 
+* Total latency, which measures the total time from the client request received by Azure Front Door until the last response byte sent from Azure Front Door to the client, in a line chart. +* Number of requests from edge to clients by HTTP status code, in a line chart. Every request generates an HTTP status code. HTTP status code appears as the HTTPStatusCode in the raw access log. The status code describes how the Azure Front Door edge PoP handled the request. For example, a 2XX status code indicates that the request was successfully served to a client. While a 4XX status code indicates that an error occurred. +* Number of requests from the edge to clients by HTTP status code, in a line chart. The percentage of requests by HTTP status code is shown in a grid. -> [!NOTE] -> Top URLs may change over time and to get an accurate list of the top 50 URLs, Azure Front Door counts all your URL requests by hour and keep the running total over the course of a day. The URLs at the bottom of the 500 URLs may rise onto or drop off the list over the day, so the total number of these URLs are approximations. -> -> The top 50 URLs may rise and fall in the list, but they rarely disappear from the list, so the numbers for top URLs are usually reliable. When a URL drops off the list and rise up again over a day, the number of request during the period when they are missing from the list is estimated based on the request number of the URL that appear in that period. -> -> The same logic applies to Top User Agent. +## Traffic by location report -## Top Referrers +The **traffic by location** report displays: -Top Referrers allow customers to view the top 50 referrer that originated the most requests to the contents on a particular endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. Referrer may come from a search engine or other websites. If a user types a URL (for example, http(s)://contoso.com/https://docsupdatetracker.net/index.html) directly into the address line of a browser, the referrer for the requested is "Empty". Top referrers report includes the following values. You can sort by request count, request %, data transferred and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected. +* The top 50 countries/regions of visitors that access your assets the most. +* A breakdown of metrics by countries/regions and gives you an overall view of countries/regions where the most traffic gets generated. +* The countries/regions that have higher cache hit ratios, and higher 4XX/5XX error code rates. -* Referrer, the value of Referrer in raw logs -* Request counts -* Request % of total requests served by Azure CDN in the selected time period. -* Data transferred -* Data transferred % -* Cache Hit Ratio % -* Requests with response code as 4XX -* Requests with response code as 5XX +The following items are included in the reports: -## Top User Agent +* A world map view of the top 50 countries/regions by data transferred out or requests of your choice. +* Two line charts showing a trend view of the top five countries/regions by data transferred out and requests of your choice. +* A grid of the top countries/regions with corresponding data transferred out from Azure Front Door to clients, the percentage of data transferred out, the number of requests, the percentage of requests by the country/region, cache hit ratio, 4XX response code counts, and 5XX response code counts. 
-This report allows you to have graphical and statistics view of the top 50 user agents that were used to request content. For example, -* Mozilla/5.0 (Windows NT 10.0; WOW64) -* AppleWebKit/537.36 (KHTML, like Gecko) -* Chrome/86.0.4240.75 -* Safari/537.36. +## Caching report -A grid displays the request counts, request %, data transferred and data transferred, cache Hit Ratio %, requests with response code as 4XX and requests with response code as 5XX. User Agent refers to the value of UserAgent in access logs. +The **caching report** provides a chart view of cache hits and misses, and the cache hit ratio, based on requests. Understanding how Azure Front Door caches your content helps you to improve your application's performance because cache hits give you the fastest performance. You can optimize data delivery speeds by minimizing cache misses. -## Security Report -This report allows you to have graphical and statistics view of WAF patterns by different dimensions. +The caching report includes: -| Dimensions | Description | -||| -| Overview metrics- Matched WAF rules | Requests that match custom WAF rules, managed WAF rules and bot manager. | -| Overview metrics- Blocked Requests | The percentage of requests that are blocked by WAF rules among all the requests that matched WAF rules. | -| Overview metrics- Matched Managed Rules | Four line-charts trend for requests that are Block, Log, Allow and Redirect. | -| Overview metrics- Matched Custom Rule | Requests that match custom WAF rules. | -| Overview metrics- Matched Bot Rule | Requests that match Bot Manager. | -| WAF request trend by action | Four line-charts trend for requests that are Block, Log, Allow and Redirect. | -| Events by Rule Type | Doughnut chart of the WAF requests distribution by Rule Type, e.g. Bot, custom rules and managed rules. | -| Events by Rule Group | Doughnut chart of the WAF requests distribution by Rule Group. | -| Requests by actions | A table of requests by actions, in descending order. | -| Requests by top Rule IDs | A table of requests by top 50 rule IDs, in descending order. | -| Requests by top countries/regions | A table of requests by top 50 countries/regions, in descending order. | -| Requests by top client IPs | A table of requests by top 50 IPs, in descending order. | -| Requests by top Request URL | A table of requests by top 50 URLs, in descending order. | -| Request by top Hostnames | A table of requests by top 50 hostname, in descending order. | -| Requests by top user agents | A table of requests by top 50 user agents, in descending order. | +* Cache hit and miss count trend, in a line chart. +* Cache hit ratio, in a line chart. -## CSV format +Cache hits/misses describe the request number cache hits and cache misses for client requests. -You can download CSV files for different tabs in reports. This section describes the values in each CSV file. +* Hits: the client requests that are served directly from Azure Front Door edge PoPs. Refers to those requests whose values for CacheStatus in the raw access logs are *HIT*, *PARTIAL_HIT*, or *REMOTE_HIT*. +* Miss: the client requests that are served by Azure Front Door edge POPs fetching contents from origin. Refers to those requests whose values for the field CacheStatus in the raw access raw logs are *MISS*. -### General information about the CSV report +**Cache hit ratio** describes the percentage of cached requests that are served from edge directly. 
The formula of the cache hit ratio is: `(PARTIAL_HIT +REMOTE_HIT+HIT/ (HIT + MISS + PARTIAL_HIT + REMOTE_HIT)*100%`. -Every CSV report includes some general information and the information is available in all CSV files. with variables based on the report you download. +Requests that meet the following requirements are included in the calculation: +* The requested content was cached on an Azure Front Door PoP. +* Partial cached contents for [object chunking](../front-door-caching.md#delivery-of-large-files). -| Value | Description | -||| -| Report | The name of the report. | -| Domains | The list of the endpoints or custom domains for the report. | -| StartDateUTC | The start of the date range for which you generated the report, in Coordinated Universal Time (UTC) | -| EndDateUTC | The end of the date range for which you generated the report, in Coordinated Universal Time (UTC) | -| GeneratedTimeUTC | The date and time when you generated the report, in Coordinated Universal Time (UTC) | -| Location | The list of the countries/regions where the client requests originated. The value is ALL by default. Not applicable to Security report. | -| Protocol | The protocol of the request, HTTP, or HTTPs. Not applicable to Top URL and Traffic by User Agent in Reports and Security report. | -| Aggregation | The granularity of data aggregation in each row, every 5 minutes, every hour, and every day. Not applicable to Traffic by Domain, Top URL, and Traffic by User Agent in Reports and Security report. | +It excludes all of the following cases: -### Data in Traffic by Domain +* Requests that are denied because of a Rule Set. +* Requests that contain matching Rules Set, which has been set to disable the cache. +* Requests that are blocked by the Azure Front Door WAF. +* Requests when the origin response headers indicate that they shouldn't be cached. For example, requests with `Cache-Control: private`, `Cache-Control: no-cache`, or `Pragma: no-cache` headers prevent the response from being cached. -* Domain -* Total Request -* Cache Hit Ratio -* 3XX Requests -* 4XX Requests -* 5XX Requests -* ByteTransferredFromEdgeToClient +## Top URL report -### Data in Traffic by Location +The **top URL report** allow you to view the amount of traffic incurred through a particular endpoint or custom domain. You'll see data for the most requested 50 assets during any period in the past 90 days. -* Location -* TotalRequests -* Request% -* BytesTransferredFromEdgeToClient -### Data in Usage +Popular URLs will be displayed with the following values: -There are three reports in this CSV file. One for HTTP protocol, one for HTTPS protocol and one for HTTP Status Code. +* URL, which refers to the full path of the requested asset in the format of `http(s)://contoso.com/https://docsupdatetracker.net/index.html/images/example.jpg`. URL refers to the value of the RequestUri field in the raw access log. +* Request counts. +* Request counts as a percentage of the total requests served by Azure Front Door. +* Data transferred. +* Data transferred percentage. +* Cache hit ratio percentage. +* Requests with response codes of 4XX. +* Requests with response codes of 5XX. -Reports for HTTP and HTTPs share the same data set. +User can sort URLs by request count, request count percentage, data transferred, and data transferred percentage. All the metrics are aggregated by hour and might vary based on the timeframe selected. 
-* Time -* Protocol -* DataTransferred(bytes) -* TotalRequest -* bpsFromEdgeToClient -* 2XXRequest -* 3XXRequest -* 4XXRequest -* 5XXRequest +> [!NOTE] +> Top URLs might change over time. To get an accurate list of the top 50 URLs, Azure Front Door counts all your URL requests by hour and keep the running total over the course of a day. The URLs at the bottom of the 50 URLs may rise onto or drop off the list over the day, so the total number of these URLs are approximations. +> +> The top 50 URLs may rise and fall in the list, but they rarely disappear from the list, so the numbers for top URLs are usually reliable. When a URL drops off the list and rise up again over a day, the number of request during the period when they are missing from the list is estimated based on the request number of the URL that appear in that period. -Report for HTTP Status Code. +## Top referrer report -* Time -* DataTransferred(bytes) -* TotalRequest -* bpsFromEdgeToClient -* 2XXRequest -* 3XXRequest -* 4XXRequest -* 5XXRequest +The **top referrer** report shows you the top 50 referrers to a particular Azure Front Door endpoint or custom domain. You can view data for any period in the past 90 days. A referrer indicates the URL from which a request was generated. Referrer may come from a search engine or other websites. If a user types a URL (for example, `https://contoso.com/https://docsupdatetracker.net/index.html`) directly into the address bar of a browser, the referrer for the requested is *Empty*. -### Data in Caching -* Time -* CacheHitRatio -* HitRequests -* MissRequests +The top referrer report includes the following values. -### Data in Top URL +* Referrer, which is the value of the Referrer field in the raw access log. +* Request counts. +* Request count as a percentage of total requests served by Azure Front Door in the selected time period. +* Data transferred. +* Data transferred percentage. +* Cache hit ratio percentage. +* Requests with response code as 4XX. +* Requests with response code as 5XX. -* URL -* TotalRequests -* Request% -* DataTransferred(bytes) -* DataTransferred% +You can sort by request count, request %, data transferred and data transferred %. All the metrics are aggregated by hour and may vary per the time frame selected. -### Data in User Agent +## Top user agent report -* UserAgent -* TotalRequests -* Request% -* DataTransferred(bytes) -* DataTransferred% +The **top user agent** report shows graphical and statistics views of the top 50 user agents that were used to request content. The following list shows example user agents: +* Mozilla/5.0 (Windows NT 10.0; WOW64) +* AppleWebKit/537.36 (KHTML, like Gecko) +* Chrome/86.0.4240.75 +* Safari/537.36. -### Security Report +A grid displays the request counts, request %, data transferred and data transferred, cache Hit Ratio %, requests with response code as 4XX and requests with response code as 5XX. User Agent refers to the value of UserAgent in access logs. -There are seven tables all with the same fields below. +> [!NOTE] +> Top user agents might change over time. To get an accurate list of the top 50 user agents, Azure Front Door counts all your user agent requests by hour and keep the running total over the course of a day. The user agents at the bottom of the 50 user agents may rise onto or drop off the list over the day, so the total number of these user agents are approximations. 
+> +> The top 50 user agents may rise and fall in the list, but they rarely disappear from the list, so the numbers for top user agents are usually reliable. When a user agent drops off the list and rise up again over a day, the number of request during the period when they are missing from the list is estimated based on the request number of the user agents that appear in that period. -* BlockedRequests -* AllowedRequests -* LoggedRequests -* RedirectedRequests -* OWASPRuleRequests -* CustomRuleRequests -* BotRequests +## Security report -The seven tables are for time, rule ID, countries/regions, IP address, URL, hostname, user agent. +The **security report** provides graphical and statistics views of WAF activity. ++| Dimensions | Description | +||| +| Overview metrics - Matched WAF rules | Requests that match custom WAF rules, managed WAF rules and bot protection rules. | +| Overview metrics - Blocked Requests | The percentage of requests that are blocked by WAF rules among all the requests that matched WAF rules. | +| Overview metrics - Matched Managed Rules | Requests that match managed WAF rules. | +| Overview metrics - Matched Custom Rule | Requests that match custom WAF rules. | +| Overview metrics - Matched Bot Rule | Requests that match bot protection rules. | +| WAF request trend by action | Four line-charts trend for requests by action. Actions are *Block*, *Log*, *Allow*, and *Redirect*. | +| Events by Rule Type | Doughnut chart of the WAF requests distribution by rule type. Rule types include bot protection rules, custom rules, and managed rules. | +| Events by Rule Group | Doughnut chart of the WAF requests distribution by rule group. | +| Requests by actions | A table of requests by actions, in descending order. | +| Requests by top Rule IDs | A table of requests by top 50 rule IDs, in descending order. | +| Requests by top countries/regions | A table of requests by top 50 countries/regions, in descending order. | +| Requests by top client IPs | A table of requests by top 50 IPs, in descending order. | +| Requests by top Request URL | A table of requests by top 50 URLs, in descending order. | +| Request by top Hostnames | A table of requests by top 50 hostname, in descending order. | +| Requests by top user agents | A table of requests by top 50 user agents, in descending order. | ## Next steps |
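One small, hedged illustration of the cache hit ratio discussed in the caching report above: if you export the *Caching* report as a CSV and trim it down to its data rows, you can recompute the overall ratio from the hit and miss counts. The column layout used below (Time, CacheHitRatio, HitRequests, MissRequests) is assumed from the field list in the tracked change, so adjust it to match the actual file you download.

```bash
# Sketch: recompute the overall cache hit ratio from a trimmed Caching report CSV.
# Assumed columns: Time,CacheHitRatio,HitRequests,MissRequests (header on line 1).
awk -F',' 'NR > 1 { hits += $3; misses += $4 }
           END { if (hits + misses > 0)
                   printf "Cache hit ratio: %.2f%%\n", 100 * hits / (hits + misses) }' caching-data.csv
```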
frontdoor | Web Application Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/web-application-firewall.md | Azure Web Application Firewall (WAF) on Azure Front Door provides centralized pr ## Policy settings -A Web Application Firewall (WAF) policy allows you to control access to your web applications by using a set of custom and managed rules. You can change the state of the policy or configure a specific mode type for the policy. Depending on policy level settings you can choose to either actively inspect incoming requests, monitor only, or to monitor and take actions against requests that match a rule. For more information, see [WAF policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md). +A Web Application Firewall (WAF) policy allows you to control access to your web applications by using a set of custom and managed rules. You can change the state of the policy or configure a specific mode type for the policy. Depending on policy level settings you can choose to either actively inspect incoming requests, monitor only, or to monitor and take actions against requests that match a rule. You can also configure the WAF to only detect threats without blocking them, which is useful when you first enable the WAF. After evaluating how the WAF works with your application, you can reconfigure the WAF settings and enable the WAF in prevention mode. For more information, see [WAF policy settings](../web-application-firewall/afds/waf-front-door-policy-settings.md). ## Managed rules |
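The detection-first workflow described in the tracked change above can also be scripted. The following Azure CLI sketch is illustrative only (policy name, resource group, and SKU are placeholders): it creates a Front Door WAF policy in Detection mode, then switches it to Prevention once you've reviewed the logged matches.

```bash
# Sketch (placeholder names): start in Detection mode, then move to Prevention.
az network front-door waf-policy create \
  --resource-group myResourceGroup \
  --name myWafPolicy \
  --sku Premium_AzureFrontDoor \
  --mode Detection

# After reviewing the WAF logs and tuning rules:
az network front-door waf-policy update \
  --resource-group myResourceGroup \
  --name myWafPolicy \
  --mode Prevention
```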
governance | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/overview.md | You can build a flexible structure of management groups and subscriptions to org into a hierarchy for unified policy and access management. The following diagram shows an example of creating a hierarchy for governance using management groups. Diagram of a root management group holding both management groups and subscriptions. Some child management groups hold management groups, some hold subscriptions, and some hold both. One of the examples in the sample hierarchy is four levels of management groups with the child level being all subscriptions. :::image-end::: You can create a hierarchy that applies a policy, for example, which limits VM locations to the-West US region in the management group called "Production". This policy will inherit onto all the Enterprise +West US region in the management group called "Corp". This policy will inherit onto all the Enterprise Agreement (EA) subscriptions that are descendants of that management group and will apply to all VMs under those subscriptions. This security policy cannot be altered by the resource or subscription owner allowing for improved governance. when trying to separate the assignment from its definition. For example, let's look at a small section of a hierarchy for a visual. - The diagram focuses on the root management group with child I T and Marketing management groups. The I T management group has a single child management group named Production while the Marketing management group has two Free Trial child subscriptions. + The diagram focuses on the root management group with child Landing zones and Sandbox management groups. The Landing zones management group has two child management groups named Corp and Online while the Sandbox management group has two child subscriptions. :::image-end::: -Let's say there's a custom role defined on the Marketing management group. That custom role is then -assigned on the two free trial subscriptions. +Let's say there's a custom role defined on the Sandbox management group. That custom role is then +assigned on the two Sandbox subscriptions. -If we try to move one of those subscriptions to be a child of the Production management group, this -move would break the path from subscription role assignment to the Marketing management group role +If we try to move one of those subscriptions to be a child of the Corp management group, this +move would break the path from subscription role assignment to the Sandbox management group role definition. In this scenario, you'll receive an error saying the move isn't allowed since it will break this relationship. There are a couple different options to fix this scenario: MG. - Add the subscription to the role definition's assignable scope. - Change the assignable scope within the role definition. In the above example, you can update the- assignable scopes from Marketing to the root management group so that the definition can be reached by + assignable scopes from Sandbox to the root management group so that the definition can be reached by both branches of the hierarchy. - Create another custom role that is defined in the other branch. This new role requires the role assignment to be changed on the subscription also. |
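To make the hierarchy and policy-inheritance example above concrete, here's a hedged Azure CLI sketch. The management group names (`LandingZones`, `Corp`) and the lookup of the built-in "Allowed locations" definition are illustrative assumptions, not part of the tracked change.

```bash
# Sketch (illustrative names): create a child management group and assign a location policy to it.
az account management-group create --name Corp --display-name "Corp" --parent LandingZones

# Look up the built-in "Allowed locations" policy definition.
policyId=$(az policy definition list \
  --query "[?displayName=='Allowed locations'].id | [0]" --output tsv)

# Assign it at the Corp management group scope; descendant subscriptions inherit it.
az policy assignment create \
  --name limit-vm-locations \
  --scope /providers/Microsoft.Management/managementGroups/Corp \
  --policy "$policyId" \
  --params '{"listOfAllowedLocations":{"value":["westus"]}}'
```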
governance | Remediate Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md | if ($InitiativeRoleDefinitionIds.Count -gt 0) { The new managed identity must complete replication through Azure Active Directory before it can be granted the needed roles. Once replication is complete, the roles specified in the policy definition's **roleDefinitionIds** should be granted to the managed identity. -Access the roles specified in the policy definition using the [az policy definition show](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command, then iterate over each **roleDefinitionId** to create the role assignment using the [az role assignment create](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command. +Access the roles specified in the policy definition using the [az policy definition show](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command, then iterate over each **roleDefinitionId** to create the role assignment using the [az role assignment create](/cli/azure/role/assignment?view=azure-cli-latest#az-role-assignment-create&preserve-view=true) command. |
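The corrected link in the change above points to `az role assignment create`. As a rough sketch of the workflow the article describes, reading `roleDefinitionIds` out of a policy definition and assigning each role to the managed identity might look like the following; the policy name, principal ID, and scope are placeholders.

```bash
# Sketch (placeholder values): grant each role a policy definition requires to a managed identity.
principalId="<managed-identity-object-id>"
scope="/subscriptions/<subscription-id>"

roleDefinitionIds=$(az policy definition show \
  --name "<policy-definition-name>" \
  --query "policyRule.then.details.roleDefinitionIds[]" --output tsv)

for roleId in $roleDefinitionIds; do
  # roleDefinitionIds are full resource IDs; the trailing GUID is the role definition ID.
  az role assignment create \
    --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal \
    --role "${roleId##*/}" \
    --scope "$scope"
done
```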
healthcare-apis | Understand Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/understand-service.md | The MedTech service device message data processing follows these steps and in th :::image type="content" source="media/understand-service/understand-device-message-flow.png" alt-text="Screenshot of a device message as it processed by the MedTech service." lightbox="media/understand-service/understand-device-message-flow.png"::: ## Ingest-Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub (`device message event hub`) and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device messages are processed. +Ingest is the first stage where device messages are received from an [Azure Event Hubs](../../event-hubs/index.yml) event hub and immediately pulled into the MedTech service. The Event Hubs service supports high scale and throughput with the ability to receive and process millions of device messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device messages are processed. The device message event hub uses the MedTech service's [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and [Azure resource-based access control (Azure RBAC)](../../role-based-access-control/overview.md) for secure access to the device message event hub. At this point, the [Device](https://www.hl7.org/fhir/device.html) resource, alon If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of [Resolution Type](deploy-new-config.md#configure-the-destination-tab) set at the time of the MedTech service deployment. When set to `Lookup`, the specific message is ignored, and the pipeline continues to process other incoming device messages. If set to `Create`, the MedTech service creates minimal Device and Patient resources in the FHIR service. > [!NOTE]-> The `Resolution Type` can also be adjusted post deployment of the MedTech service in the event that a different type is later desired. +> The `Resolution Type` can also be adjusted post deployment of the MedTech service if a different `Resolution Type` is later required. -The MedTech service buffers the FHIR Observations resources created during the transformation stage and provides near real-time processing. However, it can potentially take up to five minutes for FHIR Observation resources to be persisted in the FHIR service. +The MedTech service provides near real-time processing and will also attempt to reduce the number of requests made to the FHIR service by grouping requests into batches of 300 [normalized messages](#normalize). If there's a low volume of data, and 300 normalized messages haven't been added to the group, then the corresponding FHIR Observations in that group are persisted to the FHIR service after ~five minutes. This means that when there's fewer than 300 normalized messages to be processed, there may be a delay of ~five minutes before FHIR Observations are created or updated in the FHIR service. 
++> [!NOTE] +> When multiple device messages contain data for the same FHIR Observation, have the same timestamp, and are sent within the same device message batch (for example, within the ~five minute window or in groups of 300 normalized messages), only the data corresponding to the latest device message for that FHIR Observation is persisted. +> +> For example: +> +> Device message 1: +> ```json +> {    +> "patientid": "testpatient1",    +> "deviceid": "testdevice1", +> "systolic": "129",    +> "diastolic": "65",    +> "measurementdatetime": "2022-02-15T04:00:00.000Z" +> }  +> ``` +> +> Device message 2: +> ```json +> {    +> "patientid": "testpatient1",    +> "deviceid": "testdevice1",    +> "systolic": "113",    +> "diastolic": "58",    +> "measurementdatetime": "2022-02-15T04:00:00.000Z" +> } +> ``` +> +> Assuming these device messages were ingested within the same ~five minute window or in the same group of 300 normalized messages, and since the `measurementdatetime` is the same for both device messages (indicating these contain data for the same FHIR Observation), only device message 2 is persisted to represent the latest/most recent data. ## Persist Persist is the final stage where the FHIR Observation resources from the transform stage are persisted in the [FHIR service](../fhir/overview.md). If the FHIR Observation resource is new, it's created in the FHIR service. If the FHIR Observation resource already existed, it gets updated in the FHIR service. |
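The "latest message wins" behavior in the note above can be illustrated with a small, self-contained sketch. This is not how the MedTech service is implemented; it simply mimics the described outcome locally with `jq`, using the two sample device messages from the tracked change.

```bash
# Illustrative only: reproduce the "latest message wins" outcome described above with jq.
cat <<'EOF' > messages.json
[
  {"patientid":"testpatient1","deviceid":"testdevice1","systolic":"129","diastolic":"65","measurementdatetime":"2022-02-15T04:00:00.000Z"},
  {"patientid":"testpatient1","deviceid":"testdevice1","systolic":"113","diastolic":"58","measurementdatetime":"2022-02-15T04:00:00.000Z"}
]
EOF

# Group messages that target the same Observation (same device and timestamp),
# then keep only the last message in each group.
jq 'group_by([.deviceid, .measurementdatetime]) | map(.[-1])' messages.json
# Prints one message: the second one (systolic 113, diastolic 58).
```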
import-export | Storage Import Export Data From Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-from-blobs.md | You must: ## Step 1: Create an export job -# [Portal (Preview)](#tab/azure-portal-preview) +# [Portal](#tab/azure-portal-preview) -Perform the following steps to order an import job in Azure Import/Export via the Preview portal. The Azure Import/Export service in preview will create a job of the type "Data Box." +Perform the following steps to order an import job in Azure Import/Export. The Azure Import/Export service creates a job of the type "Data Box." 1. Use your Microsoft Azure credentials to sign in at this URL: [https://portal.azure.com](https://portal.azure.com). 1. Select **+ Create a resource** and search for *Azure Data Box*. Select **Azure Data Box**. Perform the following steps to order an import job in Azure Import/Export via th 1. Select the **Destination country/region** for the job. 1. Then select **Apply**. - [](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png#lightbox) + [](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png#lightbox) 1. Choose the **Select** button for **Import/Export Job**. Perform the following steps to order an import job in Azure Import/Export via th - Choose to export **All objects** in the storage account. -  +  - Choose **Selected containers and blobs**, and specify containers and blobs to export. You can use more than one of the selection methods. Selecting an **Add** option opens a panel on the right where you can add your selection strings. Perform the following steps to order an import job in Azure Import/Export via th |**Add blobs**|Specify individual blobs to export.<br>Select **Add blobs**. Then specify the relative path to the blob, beginning with the container name. Use *$root* to specify the root container.<br>You must provide the blob paths in valid format, as shown in this screenshot, to avoid errors during processing. For more information, see [Examples of valid blob paths](storage-import-export-determine-drives-for-export.md#examples-of-valid-blob-paths).| |**Add prefixes**|Use a prefix to select a set of similarly named containers or similarly named blobs in a container. The prefix may be the prefix of the container name, the complete container name, or a complete container name followed by the prefix of the blob name. | - :::image type="complex" source="./media/storage-import-export-data-from-blobs/import-export-order-preview-06-b-export-job.png" alt-text="Screenshot showing selected containers and blobs for a new Azure Import/Export export job in the Preview portal."::: + :::image type="complex" source="./media/storage-import-export-data-from-blobs/import-export-order-preview-06-b-export-job.png" alt-text="Screenshot showing selected containers and blobs for a new Azure Import/Export export job in the portal."::: <Blob selections include a container, a blob, and blob prefixes that work like wildcards. The Add Prefixes pane on the right is used to add prefixes that select blobs based on common text in the blob path or name.> :::image-end::: - - Choose **Export from blob list file (XML format)**, and select an XML file that contains a list of paths and prefixes for the blobs to be exported from the storage account. You must construct the XML file and store it in a container for the storage account. The file cannot be empty. 
+ - Choose **Export from blob list file (XML format)**, and select an XML file that contains a list of paths and prefixes for the blobs to be exported from the storage account. You must construct the XML file and store it in a container for the storage account. The file can't be empty. > [!IMPORTANT] > If you use an XML file to select the blobs to export, make sure that the XML contains valid paths and/or prefixes. If the file is invalid or no data matches the paths specified, the order terminates with partial data or no data exported. Perform the following steps to order an import job in Azure Import/Export via th 1. In **Return shipping**: 1. Select a shipping carrier from the drop-down list for **Carrier**. The location of the Microsoft datacenter for the selected region determines which carriers are available.- 1. Enter a **Carrier account number**. The account number for an valid carrier account is required. + 1. Enter a **Carrier account number**. The account number for a valid carrier account is required. 1. In the **Return address** area, use **+ Add Address** to add the address to ship to.  On the **Add Address** blade, you can add an address or use an existing one. When you finish entering address information, select **Add shipping address**. -  +  1. In the **Notification** area, enter email addresses for the people you want to notify of the job's progress. Perform the following steps to order an import job in Azure Import/Export via th 1. Review the job information. Make a note of the job name and the Azure datacenter shipping address to ship disks back to. This information is used later on the shipping label. 1. Select **Create**. -  +  1. After the job is created, you'll see the following message. -  +  You can select **Go to resource** to open the **Overview** of the job. - [](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png#lightbox) ---# [Portal (Classic)](#tab/azure-portal-classic) --Perform the following steps to create an export job in the Azure portal using the classic Azure Import/Export service. --1. Sign in to the [Azure portal](https://portal.azure.com). -2. Search for **import/export jobs**. --  --3. Select **+ Create**. --  --4. In **Basics**: -- 1. Select a subscription. - 1. Select a resource group, or select **Create new** and create a new one. - 1. Enter a descriptive name for the import job. Use the name to track the progress of your jobs. - * The name may contain only lowercase letters, numbers, and hyphens. - * The name must start with a letter, and may not contain spaces. -- 1. Select **Export from Azure**. - 1. Select a **Source Azure region**. - - If the new import/export experience is available in the selected region, you'll see a note inviting you to try the new experience. Select **Try now**, and follow the steps on the **Portal (Preview)** tab of this section to try the new experience with this order. --  -- Select **Next: Job Details >** to proceed. --5. In **Job details**: -- 1. Select the Azure region where your data currently is. - 1. Select the storage account from which you want to export data. Use a storage account close to your location. -- The drop-off location is automatically populated based on the region of the storage account selected. -- 1. Specify the blob data to export from your storage account to your blank drive or drives. Choose one of the three following methods. -- - Choose to **Export all** blob data in the storage account. 
--  -- - Choose **Selected containers and blobs**, and specify containers and blobs to export. You can use more than one of the selection methods. Selecting an **Add** option opens a panel on the right where you can add your selection strings. -- |Option|Description| - ||--| - |**Add containers**|Export all blobs in a container.<br>Select **Add containers**, and enter each container name.| - |**Add blobs**|Specify individual blobs to export.<br>Select **Add blobs**. Then specify the relative path to the blob, beginning with the container name. Use *$root* to specify the root container.<br>You must provide the blob paths in valid format to avoid errors during processing, as shown in this screenshot. For more information, see [Examples of valid blob paths](storage-import-export-determine-drives-for-export.md#examples-of-valid-blob-paths).| - |**Add prefixes**|Use a prefix to select a set of similarly named containers or similarly named blobs in a container. The prefix may be the prefix of the container name, the complete container name, or a complete container name followed by the prefix of the blob name. | -- :::image type="complex" source="./media/storage-import-export-data-from-blobs/export-from-blob-5.png" alt-text="Screenshot showing selected containers and blobs for a new Azure Import/Export export job."::: - <Blob selections include a container, a blob, and blob prefixes that work like wildcards. The Add Prefixes pane on the right is used to add prefixes that select blobs based on common text in the blob path or name.> -- - Choose **Export from blob list file (XML format)**, and select an XML file that contains a list of paths and prefixes for the blobs to be exported from the storage account. You must construct the XML file and store it in a container for the storage account. The file cannot be empty. -- > [!IMPORTANT] - > If you use an XML file to select the blobs to export, make sure that the XML contains valid paths and/or prefixes. If the file is invalid or no data matches the paths specified, the order terminates with partial data or no data exported. -- To see how to add an XML file to a container, see [Export order using XML file](../databox/data-box-deploy-export-ordered.md#export-order-using-xml-file). --  -- > [!NOTE] - > If a blob to be exported is in use during data copy, the Azure Import/Export service takes a snapshot of the blob and copies the snapshot. -- Select **Next: Shipping >** to proceed. --6. [!INCLUDE [storage-import-export-shipping-step.md](../../includes/storage-import-export-shipping-step.md)] --7. In **Review + create**: -- 1. Review the details of the job. - 1. Make a note of the job name and provided Azure datacenter shipping address for shipping disks to Azure. -- > [!NOTE] - > Always send the disks to the datacenter noted in the Azure portal. If the disks are shipped to the wrong datacenter, the job will not be processed. -- 1. Review the **Terms** for your order for privacy and source data deletion. If you agree to the terms, select the check box beneath the terms. Validation of the order begins. --  -- 8. After validation passes, select **Create**. + [](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png#lightbox) # [Azure CLI](#tab/azure-cli) Install-Module -Name Az.ImportExport ## Step 2: Ship the drives -If you do not know the number of drives you need, see [Determine how many drives you need](storage-import-export-determine-drives-for-export.md#determine-how-many-drives-you-need). 
If you know the number of drives, proceed to ship the drives. +If you don't know the number of drives you need, see [Determine how many drives you need](storage-import-export-determine-drives-for-export.md#determine-how-many-drives-you-need). If you know the number of drives, proceed to ship the drives. [!INCLUDE [storage-import-export-ship-drives](../../includes/storage-import-export-ship-drives.md)] If you do not know the number of drives you need, see [Determine how many drives When the dashboard reports the job is complete, the disks are shipped to you and the tracking number for the shipment is available in the portal. -1. After you receive the drives with exported data, you need to get the BitLocker keys to unlock the drives. Go to the export job in the Azure portal. Click **Import/Export** tab. -2. Select and click your export job from the list. Go to **Encryption** and copy the keys. +1. After you receive the drives with exported data, you need to get the BitLocker keys to unlock the drives. Go to the export job in the Azure portal. Select **Import/Export** tab. +2. Select your export job from the list. Go to **Encryption** and copy the keys.  Use the following command to unlock the drive: `WAImportExport Unlock /bk:<BitLocker key (base 64 string) copied from Encryption blade in Azure portal> /driveLetter:<Drive letter>` -Here is an example of the sample input. +Here's an example of the sample input. `WAImportExport.exe Unlock /bk:CAAcwBoAG8AdQBsAGQAIABiAGUAIABoAGkAZABkAGUAbgA= /driveLetter:e` |
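The job status and BitLocker keys described above can also be retrieved from the Azure CLI `import-export` extension instead of the portal. A minimal sketch, assuming a hypothetical export job named `myexportjob` in resource group `myrg`:

```bash
# Install the Import/Export extension (one time).
az extension add --name import-export

# Check the state and details of the export job.
az import-export show --resource-group myrg --name myexportjob

# List the BitLocker keys for the drives that were shipped back with exported data.
az import-export bit-locker-key list --resource-group myrg --job-name myexportjob -o table
```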
import-export | Storage Import Export Data To Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-blobs.md | This step generates a journal file. The journal file stores basic information su Perform the following steps to prepare the drives. 1. Connect your disk drives to the Windows system via SATA connectors.-2. Create a single NTFS volume on each drive. Assign a drive letter to the volume. Do not use mountpoints. +2. Create a single NTFS volume on each drive. Assign a drive letter to the volume. Don't use mountpoints. 3. Enable BitLocker encryption on the NTFS volume. If using a Windows Server system, use the instructions in [How to enable BitLocker on Windows Server 2012 R2](https://thesolving.com/storage/how-to-enable-bitlocker-on-windows-server-2012-r2/). 4. Copy data to encrypted volume. Use drag and drop or Robocopy or any such copy tool. A journal (*.jrn*) file is created in the same folder where you run the tool. If the drive is locked and you need to unlock the drive, the steps to unlock may be different depending on your use case. - * If you have added data to a pre-encrypted drive (WAImportExport tool was not used for encryption), use the BitLocker key (a numerical password that you specify) in the popup to unlock the drive. + * If you have added data to a pre-encrypted drive (WAImportExport tool wasn't used for encryption), use the BitLocker key (a numerical password that you specify) in the popup to unlock the drive. * If you have added data to a drive that was encrypted by WAImportExport tool, use the following command to unlock the drive: Perform the following steps to prepare the drives. |/bk: |The BitLocker key for the drive. Its numerical password from output of `manage-bde -protectors -get D:` | |/srcdir: |The drive letter of the disk to be shipped followed by `:\`. For example, `D:\`. | |/dstdir: |The name of the destination container in Azure Storage. |- |/blobtype: |This option specifies the type of blobs you want to import the data to. For block blobs, the blob type is `BlockBlob` and for page blobs, it is `PageBlob`. | - |/skipwrite: | Specifies that there is no new data required to be copied and existing data on the disk is to be prepared. | - |/enablecontentmd5: |The option when enabled, ensures that MD5 is computed and set as `Content-md5` property on each blob. Use this option only if you want to use the `Content-md5` field after the data is uploaded to Azure. <br> This option does not affect the data integrity check (that occurs by default). The setting does increase the time taken to upload data to cloud. | + |/blobtype: |This option specifies the type of blobs you want to import the data to. For block blobs, the blob type is `BlockBlob` and for page blobs, it's `PageBlob`. | + |/skipwrite: | Specifies that there's no new data required to be copied and existing data on the disk is to be prepared. | + |/enablecontentmd5: |The option when enabled, ensures that MD5 is computed and set as `Content-md5` property on each blob. Use this option only if you want to use the `Content-md5` field after the data is uploaded to Azure. <br> This option doesn't affect the data integrity check (that occurs by default). The setting does increase the time taken to upload data to cloud. | > [!NOTE] > - If you import a blob with the same name as an existing blob in the destination container, the imported blob will overwrite the existing blob. 
In earlier tool versions (before 1.5.0.300), the imported blob was renamed by default, and a \Disposition parameter let you specify whether to rename, overwrite, or disregard the blob in the import. Perform the following steps to prepare the drives. A journal file with the provided name is created for every run of the command line. - Together with the journal file, a `<Journal file name>_DriveInfo_<Drive serial ID>.xml` file is also created in the same folder where the tool resides. The .xml file is used in place of the journal file when creating a job if the journal file is too big. + Together with the journal file, a `<Journal file name>_DriveInfo_<Drive serial ID>.xml` file is also created in the same folder where the tool resides. The .xml file is used in place of the journal file when creating a job if the journal file is too large. > [!IMPORTANT] > * Do not modify the journal files or the data on the disk drives, and don't reformat any disks, after completing disk preparation. Perform the following steps to prepare the drives. ## Step 2: Create an import job -# [Portal (Preview)](#tab/azure-portal-preview) +# [Portal](#tab/azure-portal-preview) [!INCLUDE [storage-import-export-preview-import-steps.md](../../includes/storage-import-export-preview-import-steps.md)] -# [Portal (Classic)](#tab/azure-portal) --- # [Azure CLI](#tab/azure-cli) Use the following steps to create an import job in the Azure CLI. |
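Once the drives are prepared with WAImportExport, the import job itself can be created and tracked from the Azure CLI as well as the portal. A short sketch under assumed names (`myrg`, `myimportjob`):

```bash
# Install the Import/Export extension (one time).
az extension add --name import-export

# Review the locations the service supports before choosing where to ship drives.
az import-export location list -o table

# After the job is created in Step 2, confirm its state from the CLI.
az import-export show --resource-group myrg --name myimportjob
```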
import-export | Storage Import Export Data To Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-files.md | -The Import/Export service supports only import of Azure Files into Azure Storage. Exporting Azure Files is not supported. +The Import/Export service supports only import of Azure Files into Azure Storage. Exporting Azure Files isn't supported. In this tutorial, you learn how to: Do the following steps to prepare the drives. 2. Create a single NTFS volume on each drive. Assign a drive letter to the volume. Do not use mountpoints. 3. Modify the *dataset.csv* file in the root folder where the tool is. Depending on whether you want to import a file or folder or both, add entries in the *dataset.csv* file similar to the following examples. - - **To import a file**: In the following example, the data to copy is on the F: drive. Your file *MyFile1.txt* is copied to the root of the *MyAzureFileshare1*. If the *MyAzureFileshare1* does not exist, it's created in the Azure Storage account. Folder structure is maintained. + - **To import a file**: In the following example, the data to copy is on the F: drive. Your file *MyFile1.txt* is copied to the root of the *MyAzureFileshare1*. If the *MyAzureFileshare1* doesn't exist, it's created in the Azure Storage account. Folder structure is maintained. ``` BasePath,DstItemPathOrPrefix,ItemType Do the following steps to prepare the drives. ``` > [!NOTE]- > The /Disposition parameter, which let you choose what to do when you import a file that already exists in earlier versions of the tool, is not supported in Azure Import/Export version 2.2.0.300. In the earlier tool versions, an imported file with the same name as an existing file was renamed by default. + > The /Disposition parameter, which let you choose what to do when you import a file that already exists in earlier versions of the tool, isn't supported in Azure Import/Export version 2.2.0.300. In the earlier tool versions, an imported file with the same name as an existing file was renamed by default. Multiple entries can be made in the same file corresponding to folders or files that are imported. Do the following steps to prepare the drives. This example assumes that two disks are attached and basic NTFS volumes G:\ and H:\ are created. H:\is not encrypted while G: is already encrypted. The tool formats and encrypts the disk that hosts H:\ only (and not G:\). - - **For a disk that is not encrypted**: Specify *Encrypt* to enable BitLocker encryption on the disk. + - **For a disk that isn't encrypted**: Specify *Encrypt* to enable BitLocker encryption on the disk. ``` DriveLetter,FormatOption,SilentOrPromptOnFormat,Encryption,ExistingBitLockerKey For additional samples, go to [Samples for journal files](#samples-for-journal-f ## Step 2: Create an import job -### [Portal (Preview)](#tab/azure-portal-preview) +### [Portal](#tab/azure-portal-preview) [!INCLUDE [storage-import-export-preview-import-steps.md](../../includes/storage-import-export-preview-import-steps.md)] -### [Portal (Classic)](#tab/azure-portal-classic) --- ### [Azure CLI](#tab/azure-cli) Use the following steps to create an import job in the Azure CLI. |
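After an Azure Files import job completes, a quick way to confirm the results is to list the target share from the Azure CLI. The storage account and share names below are examples taken from the scenario, not fixed values:

```bash
# List the file shares in the destination storage account.
az storage share list --account-name mystorageaccount -o table

# List the files the import job copied into the share.
az storage file list --share-name myazurefileshare1 --account-name mystorageaccount -o table
```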
iot-central | Howto Export Data Legacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md | For Event Hubs and Service Bus, IoT Central exports a new message quickly after For Blob storage, messages are batched and exported once per minute. The exported files use the same format as the message files exported by [IoT Hub message routing](../../iot-hub/tutorial-routing.md) to blob storage. > [!NOTE]-> For Blob storage, ensure that your devices are sending messages that have `contentType: application/JSON` and `contentEncoding:utf-8` (or `utf-16`, `utf-32`). See the [IoT Hub documentation](../../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-message-body) for an example. +> For Blob storage, ensure that your devices are sending messages that have `contentType: application/JSON` and `contentEncoding:utf-8` (or `utf-16`, `utf-32`). See the [IoT Hub documentation](../../iot-hub/iot-hub-devguide-routing-query-syntax.md#query-based-on-message-body) for an example. The device that sent the telemetry is represented by the device ID (see the following sections). To get the names of the devices, export device data and correlate each message by using the **connectionDeviceId** that matches the **deviceId** of the device message. |
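To confirm that the once-per-minute Blob storage batches are actually landing, you can list the newest blobs in the destination container. The account and container names here are placeholders for your own export destination:

```bash
# List exported message batches, newest metadata included.
az storage blob list \
  --account-name mystorageaccount \
  --container-name iotcentral-export \
  --query "[].{name:name, modified:properties.lastModified}" -o table
```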
iot-dps | Tutorial Custom Hsm Enrollment Group X509 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md | In the rest of this section, you'll use your Windows command prompt. 4. Enter the following command to build and run the X.509 device provisioning sample (replace `<id-scope>` with the ID Scope that you copied in step 2. Replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands. ```cmd- run -- -s <id-scope> -c <your-certificate-folder>\certs\device-01-full-chain.cert.pfx -p 1234 + dotnet run -- -s <id-scope> -c <your-certificate-folder>\certs\device-01-full-chain.cert.pfx -p 1234 ``` The device connects to DPS and is assigned to an IoT hub. Then, the device sends a telemetry message to the IoT hub. You should see output similar to the following: In the rest of this section, you'll use your Windows command prompt. 5. To register your second device, rerun the sample using its full chain certificate. ```cmd- run -- -s <id-scope> -c <your-certificate-folder>\certs\device-02-full-chain.cert.pfx -p 1234 + dotnet run -- -s <id-scope> -c <your-certificate-folder>\certs\device-02-full-chain.cert.pfx -p 1234 ``` ::: zone-end |
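The ID Scope passed with `-s`, and the X.509 enrollment group itself, can also be handled from the Azure CLI. This is a sketch with assumed resource names (`my-dps-instance`, `myrg`, `x509-devices`) and the certificate folder from the OpenSSL steps:

```bash
# Look up the ID Scope to pass to the sample with -s.
az iot dps show --name my-dps-instance --query properties.idScope -o tsv

# Create an enrollment group from the signing CA certificate used for the device chains.
az iot dps enrollment-group create \
  --dps-name my-dps-instance \
  --resource-group myrg \
  --enrollment-id x509-devices \
  --certificate-path ./certs/azure-iot-test-only.intermediate.cert.pem
```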
iot-edge | How To Configure Multiple Nics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-multiple-nics.md | For more information about networking concepts and configurations, see [Azure Io - Virtual switch different from the default one used during EFLOW installation. For more information on creating a virtual switch, see [Create a virtual switch for Azure IoT Edge for Linux on Windows](./how-to-create-virtual-switch.md). ## Create and assign a virtual switch-During the EFLOW VM deployment, the VM had a switched assigned for all the communications between the Windows host OS and the virtual machine. This will always be the switch used for VM lifecycle management communications, and it's not possible to delete it. +During the EFLOW VM deployment, the VM had a switch assigned for all communications between the Windows host OS and the virtual machine. You always use the switch for VM lifecycle management communications, and it's not possible to delete it. -The following steps in this section show how to assign a network interface to the EFLOW virtual machine. Ensure that the virtual switch being used and the networking configuration aligns with your networking environment. For more information about networking concepts like type of switches, DHCP and DNS, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). +The following steps in this section show how to assign a network interface to the EFLOW virtual machine. Ensure that the virtual switch and the networking configuration align with your networking environment. For more information about networking concepts like type of switches, DHCP and DNS, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md). 1. Open an elevated _PowerShell_ session by starting with **Run as Administrator**. -1. Check the virtual switch to be assigned to the EFLOW VM is available. +1. Check that the virtual switch you assign to the EFLOW VM is available. ```powershell Get-VMSwitch -Name "{switchName}" -SwitchType {switchType} ``` The following steps in this section show how to assign a network interface to th ``` :::image type="content" source="./medilet-add-eflow-network.png" alt-text="EFLOW attach virtual switch"::: -1. Check that the virtual switch was correctly assigned to the EFLOW VM. +1. Check that you correctly assigned the virtual switch to the EFLOW VM. ```powershell Get-EflowNetwork -vSwitchName "{switchName}" ``` For more information about attaching a virtual switch to the EFLOW VM, see [Powe ## Create and assign a network endpoint-Once the virtual switch was successfully assigned to the EFLOW VM, you need to create a networking endpoint assigned to virtual switch to finalize the network interface creation. If you're using Static IP, ensure to use the appropriate parameters: _ip4Address_, _ip4GatewayAddress_ and _ip4PrefixLength_. +Once you successfully assign the virtual switch to the EFLOW VM, create a networking endpoint assigned to virtual switch to finalize the network interface creation. If you're using Static IP, ensure to use the appropriate parameters: _ip4Address_, _ip4GatewayAddress_ and _ip4PrefixLength_. 1. Open an elevated _PowerShell_ session by starting with **Run as Administrator**. 1. Create the EFLOW VM network endpoint - - If you're using DHCP, no Static IP parameters are needed. + - If you're using DHCP, you don't need Static IP parameters. 
```powershell Add-EflowVmEndpoint -vSwitchName "{switchName}" -vEndpointName "{EndpointName}" ``` Once the virtual switch was successfully assigned to the EFLOW VM, you need to c :::image type="content" source="./medilet-add-eflow-endpoint.png" alt-text="EFLOW attach network endpoint"::: -1. Check that the network endpoint was correctly created and assigned to the EFLOW VM. You should see the two network interfaces assigned to the virtual machine. +1. Check that you correctly created the network endpoint and assigned it to the EFLOW VM. You should see two network interfaces assigned to the virtual machine. ```powershell Get-EflowVmEndpoint ``` For more information about creating and attaching a network endpoint to the EFLO ## Check the VM network configurations-The final step is to make sure the networking configurations were applied correctly and the EFLOW VM has the new network interface configured. The new interface will show up as _"eth1"_ if it's the first extra interface added to the VM. +The final step is to make sure the networking configurations applied correctly and the EFLOW VM has the new network interface configured. The new interface shows up as _"eth1"_ if it's the first extra interface added to the VM. 1. Open PowerShell in an elevated session. You can do so by opening the **Start** pane on Windows and typing in "PowerShell". Right-click the **Windows PowerShell** app that shows up and select **Run as administrator**. The final step is to make sure the networking configurations were applied correc ifconfig ``` - The default interface **eth0** is the one used for all the VM management. You should see another interface, like **eth1**, which is the new interface that was assigned to the VM. Following the examples above, if you previously assigned a new endpoint with the static IP 192.168.0.103 you should see the interface **eth1** with the _inet addr: 192.168.0.103_. + The default interface **eth0** is the one used for all the VM management. You should see another interface, like **eth1**, which is the new interface you assigned to the VM. Following the examples, if you previously assigned a new endpoint with the static IP 192.168.0.103 you should see the interface **eth1** with the _inet addr: 192.168.0.103_. -  + :::image type="content" source="./medilet-eflow-ifconfig.png" alt-text="Screenshot of EFLOW virtual machine network interfaces."::: ## Next steps-Follow the steps in [How to configure networking for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md) to make sure all the networking configurations were applied correctly. +Follow the steps in [How to configure networking for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md) to make sure you applied all the networking configurations correctly. |
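As a complement to `ifconfig`, the same check can be done inside the EFLOW VM with the `ip` utility. Run this from a shell in the Linux VM (for example, after connecting from the Windows host); `eth1` and the 192.168.0.103 address are the examples used in the article:

```bash
# Confirm the new interface exists and received the expected address.
ip addr show eth1

# Confirm the routes added for the new endpoint.
ip route show
```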
iot-edge | How To Configure Proxy Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md | When you use the **Set modules** wizard to create deployments for IoT Edge devic To configure the IoT Edge agent and IoT Edge hub modules, select **Runtime Settings** on the first step of the wizard. - Add the **https_proxy** environment variable to both the IoT Edge agent and IoT Edge hub module definitions. If you included the **UpstreamProtocol** environment variable in the config file on your IoT Edge device, add that to the IoT Edge agent module definition too. - All other modules that you add to a deployment manifest follow the same pattern. Select **Apply** to save your changes. |
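A quick way to confirm that the proxy variables actually reached the runtime on the device is to inspect the edgeAgent container and run the built-in checks. This assumes the device uses the Moby/Docker engine and that you have shell access:

```bash
# Show only the proxy-related environment variables on the edgeAgent container.
sudo docker inspect edgeAgent --format '{{range .Config.Env}}{{println .}}{{end}}' | grep -i proxy

# The configuration checks also flag common connectivity problems behind a proxy.
sudo iotedge check --verbose
```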
iot-edge | How To Connect Downstream Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md | When a device connects to an IoT Edge gateway, the downstream device is the clie When you use a self-signed root CA certificate for an IoT Edge gateway, it needs to be installed on or provided to all the downstream devices attempting to connect to the gateway. - To learn more about IoT Edge certificates and some production implications, see [IoT Edge certificate usage details](iot-edge-certs.md). This command tests connections over MQTTS (port 8883). If you're using a differe The output of this command may be long, including information about all the certificates in the chain. If your connection is successful, you'll see a line like `Verification: OK` or `Verify return code: 0 (ok)`. - ## Troubleshoot the gateway connection |
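On a Linux downstream device, installing the self-signed root CA and re-running the TLS test usually looks like the following sketch. The certificate file name and gateway hostname are assumptions; substitute your own values:

```bash
# Copy the root CA certificate into the OS trust store (Debian/Ubuntu).
sudo cp azure-iot-test-only.root.ca.cert.pem \
  /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
sudo update-ca-certificates

# Re-test the connection to the gateway over MQTTS (port 8883).
openssl s_client -connect mygateway.contoso.com:8883 \
  -CAfile /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt -showcerts
```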
iot-edge | How To Continuous Integration Continuous Deployment Classic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md | - In this article, you learn how to use the built-in [Azure IoT Edge tasks](/azure/devops/pipelines/tasks/build/azure-iot-edge) for Azure Pipelines to create build and release pipelines for your IoT Edge solution. Each Azure IoT Edge task added to your pipeline implements one of the following four actions: In this section, you create a new build pipeline. You configure the pipeline to 1. Sign in to your Azure DevOps organization (`https://dev.azure.com/{your organization}`) and open the project that contains your IoT Edge solution repository. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/initial-project.png" alt-text="Screenshot that shows how to open your DevOps project."::: 2. From the left pane menu in your project, select **Pipelines**. Select **Create Pipeline** at the center of the page. Or, if you already have build pipelines, select the **New pipeline** button in the top right. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/add-new-pipeline.png" alt-text="Screenshot that shows how to create a new build pipeline."::: 3. At the bottom of the **Where is your code?** page, select **Use the classic editor**. If you wish to use YAML to create your project's build pipelines, see the [YAML guide](how-to-continuous-integration-continuous-deployment.md). -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/create-without-yaml.png" alt-text="Screenshot that shows how to use the classic editor."::: 4. Follow the prompts to create your pipeline. 1. Provide the source information for your new build pipeline. Select **Azure Repos Git** as the source, then select the project, repository, and branch where your IoT Edge solution code is located. Then, select **Continue**. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/pipeline-source.png" alt-text="Screenshot showing how to select your pipeline source."::: 2. Select **Empty job** instead of a template. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/start-with-empty-build-job.png" alt-text="Screenshot showing how to start with an empty job for your build pipeline."::: 5. Once your pipeline is created, you are taken to the pipeline editor. Here, you can change the pipeline's name, agent pool, and agent specification. In this section, you create a new build pipeline. You configure the pipeline to 11. Open the **Triggers** tab and check the box to **Enable continuous integration**. Make sure the branch containing your code is included. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/configure-trigger.png" alt-text="Screenshot showing how to turn on continuous integration trigger."::: 12. Select **Save** from the **Save & queue** dropdown. |
iot-edge | How To Continuous Integration Continuous Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment.md | - In this article, you learn how to use the built-in [Azure IoT Edge tasks](/azure/devops/pipelines/tasks/build/azure-iot-edge) for Azure Pipelines to create build and release pipelines for your IoT Edge solution. Each Azure IoT Edge task added to your pipeline implements one of the following four actions: In this section, you create a new build pipeline. You configure the pipeline to 1. Sign in to your Azure DevOps organization (`https://dev.azure.com/{your organization}`) and open the project that contains your IoT Edge solution repository. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/initial-project.png" alt-text="Screenshot showing how to open your DevOps project."::: 2. From the left pane menu in your project, select **Pipelines**. Select **Create Pipeline** at the center of the page. Or, if you already have build pipelines, select the **New pipeline** button in the top right. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/add-new-pipeline.png" alt-text="Screenshot showing how to create a new build pipeline using the New pipeline button ."::: 3. On the **Where is your code?** page, select **Azure Repos Git `YAML`**. If you wish to use the classic editor to create your project's build pipelines, see the [classic editor guide](how-to-continuous-integration-continuous-deployment-classic.md). 4. Select the repository you are creating a pipeline for. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/select-repository.png" alt-text="Screenshot showing how to select the repository for your build pipeline."::: 5. On the **Configure your pipeline** page, select **Starter pipeline**. If you have a preexisting Azure Pipelines YAML file you wish to use to create this pipeline, you can select **Existing Azure Pipelines YAML file** and provide the branch and path in the repository to the file. In this section, you create a new build pipeline. You configure the pipeline to Select **Show assistant** to open the **Tasks** palette. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/show-assistant.png" alt-text="Screenshot that shows how to select Show assistant to open Tasks palette."::: 7. To add a task, place your cursor at the end of the YAML or wherever you want the instructions for your task to be added. Search for and select **Azure IoT Edge**. Fill out the task's parameters as follows. Then, select **Add**. In this section, you create a new build pipeline. You configure the pipeline to For more information about this task and its parameters, see [Azure IoT Edge task](/azure/devops/pipelines/tasks/build/azure-iot-edge). -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/add-build-task.png" alt-text="Screenshot of the Use Tasks palette and how to add tasks to your pipeline."::: >[!TIP] > After each task is added, the editor will automatically highlight the added lines. To prevent accidental overwriting, deselect the lines and provide a new space for your next task before adding additional tasks. In this section, you create a new build pipeline. You configure the pipeline to 10. The trigger for continuous integration is enabled by default for your YAML pipeline. 
If you wish to edit these settings, select your pipeline and click **Edit** in the top right. Select **More actions** next to the **Run** button in the top right and go to **Triggers**. **Continuous integration** shows as enabled under your pipeline's name. If you wish to see the details for the trigger, check the **Override the YAML continuous integration trigger from here** box. -  + :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment/check-trigger-settings.png" alt-text="Screenshot showing how to review your pipeline's trigger settings from the Triggers menu under More actions."::: Continue to the next section to build the release pipeline. |
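If you prefer scripting over the portal walkthrough, the same YAML pipeline can be created with the `azure-devops` CLI extension. This is only a sketch; the organization, project, repository, and YAML path are placeholders, and the exact parameters may vary by extension version:

```bash
az extension add --name azure-devops

# Create a pipeline from an existing azure-pipelines.yml in the repository.
az pipelines create \
  --organization https://dev.azure.com/my-org \
  --project MyIoTEdgeProject \
  --name iotedge-ci \
  --repository MyIoTEdgeSolution \
  --repository-type tfsgit \
  --branch main \
  --yml-path azure-pipelines.yml
```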
iot-edge | How To Deploy At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md | After you add a module to a deployment, you can select its name to open the **Up If you're creating a layered deployment, you may be configuring a module that exists in other deployments targeting the same devices. To update the module twin without overwriting other versions, open the **Module Twin Settings** tab. Create a new **Module Twin Property** with a unique name for a subsection within the module twin's desired properties, for example `properties.desired.settings`. If you define properties within just the `properties.desired` field, it will overwrite the desired properties for the module defined in any lower priority deployments. - For more information about module twin configuration in layered deployments, see [Layered deployment](module-deployment-monitoring.md#layered-deployment). When you modify a deployment, the changes immediately replicate to all targeted 1. Select the **Metrics** tab and click the **Edit Metrics** button. Add or modify custom metrics, using the example syntax as a guide. Select **Save**. -  + :::image type="content" source="./media/how-to-deploy-monitor/metric-list.png" alt-text="Screenshot showing how to edit custom metrics in a deployment."::: 1. Select the **Labels** tab and make any desired changes and select **Save**. |
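The same layered deployment, target condition, priority, and custom metrics can be set from the Azure CLI. A minimal sketch with assumed names and local file paths:

```bash
# Create a layered deployment that targets tagged devices.
az iot edge deployment create \
  --hub-name myhub \
  --deployment-id overlay-settings \
  --content ./layered.deployment.json \
  --target-condition "tags.environment='test'" \
  --priority 20 \
  --layered \
  --metrics ./custom.metrics.json
```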
iot-edge | How To Deploy Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md | A deployment manifest is a JSON document that describes which modules to deploy, - **IoT Edge Module Name**: `azureblobstorageoniotedge` - **Image URI**: `mcr.microsoft.com/azure-blob-storage:latest` -  + :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab1.png" alt-text="Screenshot showing the Module Settings tab of the Add I o T Edge Module page. ."::: Don't select **Add** until you've specified values on the **Module Settings**, **Container Create Options**, and **Module Twin Settings** tabs as described in this procedure. A deployment manifest is a JSON document that describes which modules to deploy, 3. Open the **Container Create Options** tab. -  + :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab3.png" alt-text="Screenshot showing the Container Create Options tab of the Add I o T Edge Module page.."::: Copy and paste the following JSON into the box, to provide storage account information and a mount for the storage on your device. A deployment manifest is a JSON document that describes which modules to deploy, 5. On the **Module Twin Settings** tab, copy the following JSON and paste it into the box. -  + :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab4.png" alt-text="Screenshot showing the Module Twin Settings tab of the Add I o T Edge Module page."::: Configure each property with an appropriate value, as indicated by the placeholders. If you are using the IoT Edge simulator, set the values to the related environment variables for these properties as described by [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties). Azure IoT Edge provides templates in Visual Studio Code to help you develop edge 1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. -  + :::image type="content" source="./media/how-to-develop-csharp-module/new-solution.png" alt-text="Screenshot showing how to run the New IoT Edge Solution."::: Follow the prompts in the command palette to create your solution. Azure IoT Edge provides templates in Visual Studio Code to help you develop edge } ``` -  + :::image type="content" source="./media/how-to-deploy-blob/create-options.png" alt-text="Screenshot showing how to update module createOptions - Visual Studio Code ."::: 1. Replace `<your storage account name>` with a name that you can remember. Account names should be 3 to 24 characters long, with lowercase letters and numbers. No spaces. Azure IoT Edge provides templates in Visual Studio Code to help you develop edge } ``` -  + :::image type="content" source="./media/how-to-deploy-blob/devicetocloud-deviceautodelete.png" alt-text="Screenshot showing how to set desired properties for azureblobstorageoniotedge in Visual Studio Code ."::: For information on configuring deviceToCloudUploadProperties and deviceAutoDeleteProperties after your module has been deployed, see [Edit the Module Twin](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Edit-Module-Twin). For more information about container create options, restart policy, and desired status, see [EdgeAgent desired properties](module-edgeagent-edgehub.md#edgeagent-desired-properties). In addition, a blob storage module also requires the HTTPS_PROXY setting in the 1. 
Add `HTTPS_PROXY` for the **Name** and your proxy URL for the **Value**. -  + :::image type="content" source="./media/how-to-deploy-blob/https-proxy-config.png" alt-text="Screenshot showing the Update I o T Edge Module pane where you can enter the specified values."::: 1. Click **Update**, then **Review + Create**. In addition, a blob storage module also requires the HTTPS_PROXY setting in the 1. Verify the setting by selecting the module from the device details page, and on the lower part of the **IoT Edge Modules Details** page select the **Environment Variables** tab. -  + :::image type="content" source="./media/how-to-deploy-blob/verify-proxy-config.png" alt-text="Screenshot showing the Environment Variables tab."::: ## Next steps |
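After the blob storage module is running, you can sanity-check the local endpoint it exposes by listing containers with a connection string that points at the device. The port matches the 11002 binding shown in the createOptions; the account name and key are whatever you configured for the module:

```bash
# Verify the local blob endpoint exposed by the module (values are examples).
az storage container list --connection-string \
  "DefaultEndpointsProtocol=http;BlobEndpoint=http://<device-ip>:11002/<localAccountName>;AccountName=<localAccountName>;AccountKey=<localAccountKey>;" \
  -o table
```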
iot-edge | How To Deploy Modules Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-cli.md | -Once you create IoT Edge modules with your business logic, you want to deploy them to your devices to operate at the edge. If you have multiple modules that work together to collect and process data, you can deploy them all at once and declare the routing rules that connect them. +Once you create Azure IoT Edge modules with your business logic, you want to deploy them to your devices to operate at the edge. If you have multiple modules that work together to collect and process data, you can deploy them all at once. You can also declare the routing rules that connect them. -[Azure CLI](/cli/azure) is an open-source cross platform command-line tool for managing Azure resources such as IoT Edge. It enables you to manage Azure IoT Hub resources, device provisioning service instances, and linked-hubs out of the box. The new IoT extension enriches Azure CLI with features such as device management and full IoT Edge capability. +[Azure CLI](/cli/azure) is an open-source cross platform, command-line tool for managing Azure resources such as IoT Edge. It enables you to manage Azure IoT Hub resources, device provisioning service instances, and linked-hubs out of the box. The new IoT extension enriches Azure CLI with features such as device management and full IoT Edge capability. This article shows how to create a JSON deployment manifest, then use that file to push the deployment to an IoT Edge device. For information about creating a deployment that targets multiple devices based on their shared tags, see [Deploy and monitor IoT Edge modules at scale](how-to-deploy-cli-at-scale.md) This article shows how to create a JSON deployment manifest, then use that file If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md). -* [Azure CLI](/cli/azure/install-azure-cli) in your environment. At a minimum, your Azure CLI version must be 2.0.70 or above. Use `az --version` to validate. This version supports az extension commands and introduces the Knack command framework. +* [Azure CLI](/cli/azure/install-azure-cli) in your environment. At a minimum, your Azure CLI version must be 2.0.70 or higher. Use `az --version` to validate. This version supports az extension commands and introduces the Knack command framework. * The [IoT extension for Azure CLI](https://github.com/Azure/azure-iot-cli-extension). ## Configure a deployment manifest A deployment manifest is a JSON document that describes which modules to deploy, how data flows between the modules, and desired properties of the module twins. For more information about how deployment manifests work and how to create them, see [Understand how IoT Edge modules can be used, configured, and reused](module-composition.md). -To deploy modules using the Azure CLI, save the deployment manifest locally as a .json file. You will use the file path in the next section when you run the command to apply the configuration to your device. +To deploy modules using the Azure CLI, save the deployment manifest locally as a .json file. You use the file path in the next section when you run the command to apply the configuration to your device. 
Here's a basic deployment manifest with one module as an example: Here's a basic deployment manifest with one module as an example: You deploy modules to your device by applying the deployment manifest that you configured with the module information. -Change directories into the folder where your deployment manifest is saved. If you used one of the Visual Studio Code IoT Edge templates, use the `deployment.json` file in the **config** folder of your solution directory and not the `deployment.template.json` file. +Change directories into the folder where you saved your deployment manifest. If you used one of the Visual Studio Code IoT Edge templates, use the `deployment.json` file in the **config** folder of your solution directory and not the `deployment.template.json` file. Use the following command to apply the configuration to an IoT Edge device: Use the following command to apply the configuration to an IoT Edge device: The device ID parameter is case-sensitive. The content parameter points to the deployment manifest file that you saved. -  ## View modules on your device View the modules on your IoT Edge device: The device ID parameter is case-sensitive. -  ## Next steps |
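The apply-and-verify commands described above take the following shape; the hub name, device ID, and manifest path are examples:

```bash
# Apply the deployment manifest to a single device.
az iot edge set-modules \
  --hub-name myhub \
  --device-id myEdgeDevice \
  --content ./config/deployment.json

# Confirm which modules the device is now running.
az iot hub module-identity list --hub-name myhub --device-id myEdgeDevice -o table
```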
iot-edge | How To Deploy Modules Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-portal.md | You can quickly deploy a module from the Azure Marketplace onto your device in y 1. On the upper bar, select **Set Modules**. 1. In the **IoT Edge Modules** section, click **Add**, and select **Marketplace Module** from the drop-down menu. - Choose a module from the **IoT Edge Module Marketplace** page. The module you select is automatically configured for your subscription, resource group, and device. It then appears in your list of IoT Edge modules. Some modules may require additional configuration. |
iot-edge | How To Deploy Modules Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-vscode.md | You can use the Azure IoT extensions for Visual Studio Code to perform operation 1. At the bottom of the Explorer, expand the **Azure IoT Hub** section. -  + :::image type="content" source="./media/how-to-deploy-modules-vscode/azure-iot-hub-devices.png" alt-text="Screenshot showing the expanded Azure I o T Hub section."::: 1. Click on the **...** in the **Azure IoT Hub** section header. If you don't see the ellipsis, hover over the header. You deploy modules to your device by applying the deployment manifest that you c 1. Navigate to the deployment manifest JSON file that you want to use, and click **Select Edge Deployment Manifest**. -  + :::image type="content" source="./media/how-to-deploy-modules-vscode/select-deployment-manifest.png" alt-text="Screenshot showing where to select the I o T Edge Deployment Manifest."::: The results of your deployment are printed in the Visual Studio Code output. Successful deployments are applied within a few minutes if the target device is running and connected to the internet. |
iot-edge | How To Deploy Vscode At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-vscode-at-scale.md | After you have configured the deployment manifest and configured tags in the dev 1. Provide values as prompted, starting with the **deployment ID**. -  + :::image type="content" source="./media/how-to-deploy-monitor-vscode/create-deployment-at-scale.png" alt-text="Screenshot showing how to specify a deployment ID."::: Specify values for these parameters: |
iot-edge | How To Edgeagent Direct Method | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-edgeagent-direct-method.md | az iot hub invoke-module-method --method-name 'ping' -n <hub name> -d <device na In the Azure portal, invoke the method with the method name `ping` and an empty JSON payload `{}`. - ## Restart module In the Azure portal, invoke the method with the method name `RestartModule` and } ``` - ## Diagnostic direct methods |
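The `RestartModule` method can be invoked from the Azure CLI in the same way as `ping`; the payload names the module to restart. Hub, device, and module names below are placeholders:

```bash
# Restart a module through the edgeAgent direct method.
az iot hub invoke-module-method \
  --hub-name myhub \
  --device-id myEdgeDevice \
  --module-id '$edgeAgent' \
  --method-name 'RestartModule' \
  --method-payload '{"schemaVersion": "1.0", "id": "SimulatedTemperatureSensor"}'
```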
iot-edge | How To Install Iot Edge Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-kubernetes.md | IoT Edge can be installed on Kubernetes by using [KubeVirt](https://www.cncf.io/ ## Architecture -[](./media/how-to-install-iot-edge-kubernetes/iotedge-kubevirt.png#lightbox) | Note | Description | |-|-| |
iot-edge | How To Install Iot Edge Ubuntuvm Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-install-iot-edge-ubuntuvm-bicep.md | You can't deploy a remote Bicep file. Save a copy of the [Bicep file](https://ra The **DNS Name** can also be obtained from the **Overview** section of the newly deployed virtual machine within the Azure portal. - > [!div class="mx-imgBorder"] - > [](./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png) + :::image type="content" source="./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png" alt-text="Screenshot showing the DNS name of the I o T Edge virtual machine." lightbox="./media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-dns-name.png"::: 1. If you want to SSH into this VM after setup, use the associated **DNS Name** with the command: `ssh <adminUsername>@<DNS_Name>` |
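Deploying the locally saved Bicep file typically looks like the sketch below. The parameter names follow the iotedge-vm-deploy template and are assumptions here; check the template you saved before running it:

```bash
az group create --name IoTEdgeResources --location westus2

az deployment group create \
  --resource-group IoTEdgeResources \
  --template-file ./main.bicep \
  --parameters dnsLabelPrefix='my-edge-vm-01' \
               adminUsername='azureuser' \
               authenticationType='password' \
               adminPasswordOrKey='<strong-password>' \
               deviceConnectionString=$(az iot hub device-identity connection-string show \
                 --device-id myEdgeDevice --hub-name myhub -o tsv)
```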
iot-edge | How To Manage Device Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md | Using a self-signed certificate authority (CA) certificate as a root of trust wi 1. Apply the configuration. ```bash- sudo iotege config apply + sudo iotedge config apply ``` ### Install root CA to OS certificate store Server certificates may be issued off the Edge CA certificate or through a DPS-c ## Next steps -Installing certificates on an IoT Edge device is a necessary step before deploying your solution in production. Learn more about how to [Prepare to deploy your IoT Edge solution in production](production-checklist.md). +Installing certificates on an IoT Edge device is a necessary step before deploying your solution in production. Learn more about how to [Prepare to deploy your IoT Edge solution in production](production-checklist.md). |
iot-edge | How To Monitor Iot Edge Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-iot-edge-deployments.md | To view the details of a deployment and monitor the devices running it, use the 1. Select the deployment that you want to monitor. 1. On the **Deployment Details** page, scroll down to the bottom section and select the **Target Condition** tab. Select **View** to list the devices that match the target condition. You can change the condition and also the **Priority**. Select **Save** if you made changes. -  + :::image type="content" source="./media/how-to-monitor-iot-edge-deployments/target-devices.png" alt-text="Screenshot showing targeted devices for a deployment."::: 1. Select the **Metrics** tab. If you choose a metric from the **Select Metric** drop-down, a **View** button appears for you to display the results. You can also select **Edit Metrics** to adjust the criteria for any custom metrics that you have defined. Select **Save** if you made changes. -  + :::image type="content" source="./media/how-to-monitor-iot-edge-deployments/deployment-metrics-tab.png" alt-text="Screenshot showing the metrics for a deployment."::: To make changes to your deployment, see [Modify a deployment](how-to-deploy-at-scale.md#modify-a-deployment). |
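The deployment metrics shown in the portal can also be evaluated from the Azure CLI. A sketch, assuming a hub named `myhub` and a deployment named `mydeployment`; the metric ID shown is one of the built-in system metrics:

```bash
# List deployments, then evaluate a system metric for one of them.
az iot edge deployment list --hub-name myhub -o table

az iot edge deployment show-metric \
  --hub-name myhub \
  --deployment-id mydeployment \
  --metric-id reportedSuccessfulCount \
  --metric-type system
```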
iot-edge | How To Monitor Module Twins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-monitor-module-twins.md | To view the JSON for the module twin: 1. Select the **Device ID** of the IoT Edge device with the modules you want to monitor. 1. Select the module name from the **Modules** tab and then select **Module Identity Twin** from the upper menu bar. -  + :::image type="content" source="./media/how-to-monitor-module-twins/select-module-twin.png" alt-text="Screenshot showing how to select a module twin to view in the Azure portal ."::: If you see the message "A module identity doesn't exist for this module", this error indicates that the back-end solution is no longer available that originally created the identity. To review and edit a module twin: 1. In the **Explorer**, expand the **Azure IoT Hub**, and then expand the device with the module you want to monitor. 1. Right-click the module and select **Edit Module Twin**. A temporary file of the module twin is downloaded to your computer and displayed in Visual Studio Code. -  + :::image type="content" source="./media/how-to-monitor-module-twins/edit-module-twin-vscode.png" alt-text="Screenshot showing how to get a module twin to edit in Visual Studio Code ."::: If you make changes, select **Update Module Twin** above the code in the editor to save changes to your IoT hub. -  + :::image type="content" source="./media/how-to-monitor-module-twins/update-module-twin-vscode.png" alt-text="Screenshot showing how to update a module twin in Visual Studio Code."::: ### Monitor module twins in Azure CLI |
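Beyond the portal and Visual Studio Code, the same module twin JSON is available from the Azure CLI. Names below are placeholders:

```bash
# Read a module twin and show only its reported properties.
az iot hub module-twin show \
  --hub-name myhub \
  --device-id myEdgeDevice \
  --module-id myModule \
  --query properties.reported
```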
iot-edge | How To Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-observability.md | In order to go beyond abstract considerations, we'll use a *real-life* scenario ### La Ni├▒a - The La Ni├▒a service measures surface temperature in Pacific Ocean to predict La Ni├▒a winters. There is a number of buoys in the ocean with IoT Edge devices that send the surface temperature to Azure Cloud. The telemetry data with the temperature is pre-processed by a custom module on the IoT Edge device before sending it to the cloud. In the cloud, the data is processed by backend Azure Functions and saved to Azure Blob Storage. The clients of the service (ML inference workflows, decision making systems, various UIs, etc.) can pick up messages with temperature data from the Azure Blob Storage. It's a common practice to measure service level indicators, like the ones we've Let's clarify what components the La Ni├▒a service consists of: - There is an IoT Edge device with `Temperature Sensor` custom module (C#) that generates some temperature value and sends it upstream with a telemetry message. This message is routed to another custom module `Filter` (C#). This module checks the received temperature against a threshold window (0-100 degrees Celsius). If the temperature is within the window, the FilterModule sends the telemetry message to the cloud. In this scenario, we have a fleet of 10 buoys. One of the buoys has been intenti We're going to monitor Service Level Objectives (SLO) and corresponding Service Level Indicators (SLI) with Azure Monitor Workbooks. This scenario deployment includes the *La Nina SLO/SLI* workbook assigned to the IoT Hub. - To achieve the best user experience the workbooks are designed to follow the _glance_ -> _scan_ -> _commit_ concept: To achieve the best user experience the workbooks are designed to follow the _gl At this level, we can see the whole picture at a single glance. The data is aggregated and represented at the fleet level: - From what we can see, the service is not functioning according to the expectations. There is a violation of the *Data Freshness* SLO. Only 90% of the devices send the data frequently, and the service clients expect 95%. All SLO and threshold values are configurable on the workbook settings tab: - #### Scan By clicking on the violated SLO, we can drill down to the *scan* level and see how the devices contribute to the aggregated SLI value. - There is a single device (out of 10) that sends the telemetry data to the cloud "rarely". In our SLO definition, we've stated that "frequently" means at least 10 times per minute. The frequency of this device is way below that threshold. There is a single device (out of 10) that sends the telemetry data to the cloud By clicking on the problematic device, we're drilling down to the *commit* level. This is a curated workbook *Device Details* that comes out of the box with IoT Hub monitoring offering. The *La Nina SLO/SLI* workbook reuses it to bring the details of the specific device performance. - ## Troubleshooting The *commit* level workbook gives a lot of detailed information about the device In this scenario, all parameters of the trouble device look normal and it's not clear why the device sends messages less frequent than expected. This fact is also confirmed by the *messaging* tab of the device-level workbook: - The `Temperature Sensor` (tempSensor) module produced 120 telemetry messages, but only 49 of them went upstream to the cloud. 
The first step we want to do is to check the logs produced by the `Filter` module. Click the **Troubleshoot live!** button and select the `Filter` module. - Analysis of the module logs doesn't discover the issue. The module receives messages, there are no errors. Everything looks good here. There are two observability instruments serving the deep troubleshooting purpose The La Ni├▒a service uses [OpenTelemetry](https://opentelemetry.io) to produce and collect traces and logs in Azure Monitor. - IoT Edge modules `Temperature Sensor` and `Filter` export the logs and tracing data via OTLP (OpenTelemetry Protocol) to the [OpenTelemetryCollector](https://opentelemetry.io/docs/collector/) module, running on the same edge device. The `OpenTelemetryCollector` module, in its turn, exports logs and traces to Azure Monitor Application Insights service. By default, IoT Edge modules on the devices of the La Ni├▒a service are configur We've analyzed the `Information` level logs of the `Filter` module and realized that we need to dive deeper to locate the cause of the issue. We're going to update properties in the `Temperature Sensor` and `Filter` module twins and increase the `loggingLevel` to `Debug` and change the `traceSampleRatio` from `0` to `1`: - With that in place, we have to restart the `Temperature Sensor` and `Filter` modules: - In a few minutes, the traces and detailed logs will arrive to Azure Monitor from the trouble device. The entire end-to-end message flow from the sensor on the device to the storage in the cloud will be available for monitoring with *application map* in Application Insights: - From this map we can drill down to the traces and we can see that some of them look normal and contain all the steps of the flow, and some of them, are very short, so nothing happens after the `Filter` module. - Let's analyze one of those short traces and find out what was happening in the `Filter` module, and why it didn't send the message upstream to the cloud. Our logs are correlated with the traces, so we can query logs specifying the `TraceId` and `SpanId` to retrieve logs corresponding exactly to this execution instance of the `Filter` module: - The logs show that the module received a message with 70.465-degrees temperature. But the filtering threshold configured on this device is 30 to 70. So the message simply didn't pass the threshold. Apparently, this specific device was configured wrong. This is the cause of the issue we detected while monitoring the La Ni├▒a service performance with the workbook. Let's fix the `Filter` module configuration on this device by updating properties in the module twin. We also want to reduce back the `loggingLevel` to `Information` and `traceSampleRatio` to `0`: - Having done that, we need to restart the module. In a few minutes, the device reports new metric values to Azure Monitor. It reflects in the workbook charts: - We see that the message frequency on the problematic device got back to normal. The overall SLO value will become green again, if nothing else happens, in the configured observation interval: - ## Try the sample |
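The twin changes described in this scenario (raising `loggingLevel` and `traceSampleRatio`, then lowering them again) can be scripted instead of edited by hand. This is a sketch with placeholder device and module names; it assumes a recent azure-iot CLI extension that accepts `--desired`:

```bash
# Turn on debug logging and full trace sampling for the suspect device's Filter module.
az iot hub module-twin update \
  --hub-name myhub \
  --device-id buoy-07 \
  --module-id FilterModule \
  --desired '{"loggingLevel": "Debug", "traceSampleRatio": 1}'
# Restart the module afterward so the new settings take effect.
```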
iot-edge | How To Retrieve Iot Edge Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md | In the Azure portal, invoke the method with the method name `GetModuleLogs` and } ``` - You can also pipe the CLI output to Linux utilities, like [gzip](https://en.wikipedia.org/wiki/Gzip), to process a compressed response. For example: In the Azure portal, invoke the method with the method name `UploadModuleLogs` a } ``` - ## Upload support bundle diagnostics In the Azure portal, invoke the method with the method name `UploadSupportBundle } ``` - ## Get upload request status In the Azure portal, invoke the method with the method name `GetTaskStatus` and } ``` - ## Next steps |
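From the Azure CLI, the `GetModuleLogs` call looks like the sketch below; hub, device, and module IDs are examples, and the payload follows the schema shown above:

```bash
# Pull the last 20 log lines from edgeHub through the edgeAgent direct method.
az iot hub invoke-module-method \
  --hub-name myhub \
  --device-id myEdgeDevice \
  --module-id '$edgeAgent' \
  --method-name 'GetModuleLogs' \
  --method-payload '{"schemaVersion": "1.0", "items": [{"id": "edgeHub", "filter": {"tail": 20}}], "encoding": "none", "contentType": "text"}'
```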
iot-edge | How To Share Windows Folder To Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-share-windows-folder-to-vm.md | If you don't have an EFLOW device ready, you should create one before continuing The Azure IoT Edge for Linux on Windows file and folder sharing mechanism is implemented using [virtiofs](https://virtio-fs.gitlab.io/) technology. *Virtiofs* is a shared file system that lets virtual machines access a directory tree on the host OS. Unlike other approaches, it's designed to offer local file system semantics and performance. *Virtiofs* isn't a network file system repurposed for virtualization. It's designed to take advantage of the locality of virtual machines and the hypervisor. It takes advantage of the virtual machine's co-location with the hypervisor to avoid overhead associated with network file systems. - Only Windows folders can be shared to the EFLOW Linux VM and not the other way. Also, for security reasons, when setting the folder sharing mechanism, the user must provide a _root folder_ and all the shared folders must be under that _root folder_. The following steps provide example EFLOW PowerShell commands to share one or mo 1. Start by creating a new root shared folder. Go to **File Explorer** and choose a location for the *root folder* and create the folder. - For example, create a *root folder* under _C:\Shared_ named **EFLOW-Shared**. + For example, create a *root folder* under _C:\Shared_ named **EFLOW-Shared**. -  + :::image type="content" source="media/how-to-share-windows-folder-to-vm/root-folder.png" alt-text="Screenshot of the Windows root folder."::: 1. Create one or more *shared folders* to be shared with the EFLOW virtual machine. Shared folders should be created under the *root folder* from the previous step. - For example, create two folders one named **Read-Access** and one named **Read-Write-Access**. + For example, create two folders one named **Read-Access** and one named **Read-Write-Access**. -  + :::image type="content" source="media/how-to-share-windows-folder-to-vm/shared-folders.png" alt-text="Screenshot of Windows shared folders."::: 1. Within the _Read-Access_ shared folder, create a sample file that we'll later read inside the EFLOW virtual machine. |
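Once the share is configured from Windows, you can confirm it from inside the EFLOW VM with ordinary shell commands. The mount path below is purely an assumption; use the path you specified when you created the shared-folder configuration:

```bash
# Run inside the EFLOW VM after the virtiofs share is configured.
ls -la /mnt/shared/Read-Access          # hypothetical mount point
cat /mnt/shared/Read-Access/sample.txt  # the sample file created on the Windows host
```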
iot-edge | How To Use Create Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-use-create-options.md | If you use the [Azure IoT Edge](https://marketplace.visualstudio.com/items?itemN One tip for writing create options is to use the `docker inspect` command. As part of your development process, run the module locally using `docker run <container name>`. Once you have the module working the way you want it, run `docker inspect <container name>`. This command outputs the module details in JSON format. Find the parameters tha |
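A worked example of the `docker inspect` tip, using a hypothetical module image; the `HostConfig` section is typically the part worth copying into createOptions:

```bash
# Run the module locally, then inspect it to harvest settings for createOptions.
sudo docker run -d --name sample-module mycontainerregistry.azurecr.io/sample-module:0.0.1
sudo docker inspect sample-module

# Narrow the output to the host configuration block.
sudo docker inspect sample-module --format '{{json .HostConfig}}'
```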