Updates from: 07/23/2022 01:11:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Directory Delegated Administration Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delegated-administration-primer.md
Title: Delegated administration in Azure Active Directory description: The relationship between older delegated admin permissions and new granular delegated admin permissions in Azure Active Directory keywords:---+++ Last updated 06/23/2022
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
Title: Delete an Azure AD tenant - Azure Active Directory | Microsoft Docs
description: Explains how to prepare an Azure AD tenant for deletion, including self-service tenants documentationcenter: ''---++ Last updated 06/23/2022-+
active-directory Directory Overview User Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-overview-user-model.md
Title: Users, groups, licensing, and roles in Azure Active Directory description: The relationship between users and licenses assigned, administrator roles, group membership in Azure Active Directory keywords:---+++ Last updated 06/23/2022
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md
Title: Self-service sign-up for email-verified users - Azure AD | Microsoft Docs
description: Use self-service sign-up in an Azure Active Directory (Azure AD) organization documentationcenter: ''--++ editor: ''
Last updated 06/23/2022-+
active-directory Directory Service Limits Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-service-limits-restrictions.md
Title: Service limits and restrictions - Azure Active Directory | Microsoft Docs
description: Usage constraints and other service limits for the Azure Active Directory service documentationcenter: ''--++ editor: ''
Last updated 06/23/2022-+
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
Title: Admin takeover of an unmanaged directory - Azure AD | Microsoft Docs
description: How to take over a DNS domain name in an unmanaged Azure AD organization (shadow tenant). documentationcenter: ''--++ Last updated 06/23/2022-+
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-manage.md
Title: Add and verify custom domain names - Azure Active Directory | Microsoft Docs
description: Management concepts and how-tos for managing a domain name in Azure Active Directory documentationcenter: ''--++ Last updated 06/23/2022-+
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Title: Change subdomain authentication type using PowerShell and Graph - Azure A
description: Change default subdomain authentication settings inherited from root domain settings in Azure Active Directory. documentationcenter: ''---++ Last updated 06/23/2022-+
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
Title: Assign sensitivity labels to groups - Azure AD | Microsoft Docs
description: Learn how to assign sensitivity labels to groups. See troubleshooting information and view additional available resources. documentationcenter: ''--++ Last updated 06/23/2022-+
active-directory Groups Bulk Download Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download-members.md
Title: Bulk download group membership list - Azure Active Directory portal | Microsoft Docs description: Add users in bulk in the Azure admin center. ---+++ Last updated 06/23/2022
active-directory Groups Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download.md
Title: Download a list of groups in the Azure Active Directory portal | Microsoft Docs description: Download group properties in bulk in the Azure admin center in Azure Active Directory. ---+++ Last updated 03/24/2022
active-directory Groups Bulk Import Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-import-members.md
Title: Bulk upload to add or create members of a group - Azure Active Directory | Microsoft Docs description: Add group members in bulk in the Azure Active Directory admin center. ---+++ Last updated 06/24/2022
active-directory Groups Bulk Remove Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-remove-members.md
Title: Bulk remove group members by uploading a CSV file - Azure Active Directory | Microsoft Docs description: Remove group members in bulk operations in the Azure admin center. ---+++ Last updated 09/22/2021
active-directory Groups Change Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md
Title: Change static group membership to dynamic - Azure AD | Microsoft Docs
description: Learn how to convert existing groups from static to dynamic membership using either Azure AD Admin center or PowerShell cmdlets. documentationcenter: ''--++ Last updated 06/23/2022-+
active-directory Groups Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-create-rule.md
Title: Create or edit a dynamic group and get status - Azure AD | Microsoft Docs
description: How to create or update a group membership rule in the Azure portal, and check its processing status. documentationcenter: ''--++ Last updated 06/23/2022-+
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
Title: Rules for dynamically populated groups membership - Azure AD | Microsoft Docs
description: How to create membership rules to automatically populate groups, and a rule reference. documentationcenter: ''--++ Last updated 06/23/2022-+
active-directory Groups Dynamic Rule More Efficient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-more-efficient.md
Title: Create simpler and faster rules for dynamic groups - Azure AD | Microsoft Docs
description: How to optimize your membership rules to automatically populate groups. documentationcenter: ''--++ Last updated 06/23/2022-+
active-directory Groups Dynamic Rule Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-validation.md
Title: Validate rules for dynamic group membership (preview) - Azure AD | Microsoft Docs
description: How to test members against a membership rule for a dynamic group in Azure Active Directory. documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Groups Dynamic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-tutorial.md
Title: Add users to a dynamic group - tutorial - Azure AD | Microsoft Docs
description: In this tutorial, you use groups with user membership rules to add or remove users automatically documentationcenter: ''--++ Last updated 06/24/2022-+ #Customer intent: As a new Azure AD identity administrator, I want to automatically add or remove users, so I don't have to manually do it."
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md
Title: Set expiration for Microsoft 365 groups - Azure Active Directory | Microsoft Docs
description: How to set up expiration for Microsoft 365 groups in Azure Active Directory documentationcenter: ''--++ editor: ''
Last updated 06/24/2022-+
active-directory Groups Members Owners Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-members-owners-search.md
Title: Search and filter groups members and owners (preview) - Azure Active Directory | Microsoft Docs
description: Search and filter groups members and owners in the Azure portal. documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
Title: Enforce group naming policy in Azure Active Directory | Microsoft Docs
description: How to set up naming policy for Microsoft 365 groups in Azure Active Directory documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Groups Quickstart Expiration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-expiration.md
Title: Group expiration policy quickstart - Azure AD | Microsoft Docs
description: Expiration for Microsoft 365 groups - Azure Active Directory documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Groups Quickstart Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-naming-policy.md
Title: Group naming policy quickstart - Azure Active Directory | Microsoft Docs
description: Explains how to add new users or delete existing users in Azure Active Directory documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Groups Restore Deleted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md
Title: Restore a deleted Microsoft 365 group - Azure AD | Microsoft Docs description: How to restore a deleted group, view restorable groups, and permanently delete a group in Azure Active Directory --++ Last updated 06/24/2022-+
active-directory Groups Saasapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-saasapps.md
Title: Use a group to manage access to SaaS apps - Azure AD | Microsoft Docs
description: How to use groups in Azure Active Directory to assign access to SaaS applications that are integrated with Azure Active Directory. documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
Title: Set up self-service group management - Azure Active Directory | Microsoft Docs
description: Create and manage security groups or Microsoft 365 groups in Azure Active Directory and request security group or Microsoft 365 group memberships documentationcenter: ''--++ editor: '' Last updated 06/24/2022-+
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Title: Configure group settings using PowerShell - Azure AD | Microsoft Docs
description: How to manage the settings for groups using Azure Active Directory cmdlets documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
Title: PowerShell V2 examples for managing groups - Azure AD | Microsoft Docs
description: This page provides PowerShell examples to help you manage your groups in Azure Active Directory keywords: Azure AD, Azure Active Directory, PowerShell, Groups, Group management --++ Last updated 06/24/2022-+
active-directory Groups Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-troubleshooting.md
Title: Fix problems with dynamic group memberships - Azure AD | Microsoft Docs description: Troubleshooting tips for dynamic group membership in Azure Active Directory --++ Last updated 06/24/2022-+
active-directory Licensing Directory Independence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-directory-independence.md
Title: Characteristics of multi-tenant interaction - Azure AD | Microsoft Docs
description: Understanding the data independence of your Azure Active Directory organizations documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Licensing Group Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md
description: More scenarios for Azure Active Directory group-based licensing
keywords: Azure AD licensing documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Licensing Groups Assign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-assign.md
description: How to assign licenses to users by means of Azure Active Directory
keywords: Azure AD licensing documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Licensing Groups Change Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-change-licenses.md
description: How to migrate users within a group to different service plans usin
keywords: Azure AD licensing documentationcenter: ''--++ editor: '' Last updated 06/24/2022-+
active-directory Licensing Groups Migrate Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-migrate-users.md
description: How to migrate from individual user licenses to group-based licensi
keywords: Azure AD licensing documentationcenter: ''--++ editor: ''
Last updated 06/24/2022-+
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
description: How to identify and resolve license assignment problems when you're
keywords: Azure AD licensing documentationcenter: ''--++ Last updated 06/24/2022-+
active-directory Licensing Ps Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-ps-examples.md
description: PowerShell + Graph examples and scenarios for Azure Active Director
keywords: Azure AD licensing documentationcenter: ''--++ Last updated 12/02/2020-+
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
keywords: Azure Active Directory licensing service plans documentationcenter: '' -+ editor: ''
active-directory Linkedin Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md
Title: Admin consent for LinkedIn account connections - Azure AD | Microsoft Docs description: Explains how to enable or disable LinkedIn integration account connections in Microsoft apps in Azure Active Directory --++ Last updated 06/24/2022-+
active-directory Linkedin User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-user-consent.md
Title: LinkedIn data sharing and consent - Azure Active Directory | Microsoft Docs description: Explains how LinkedIn integration shares data via Microsoft apps in Azure Active Directory --++ Last updated 06/24/2022-+
active-directory Signin Account Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/signin-account-support.md
Title: Does my Azure AD sign-in page accept Microsoft accounts | Microsoft Docs description: How on-screen messaging reflects username lookup during sign-in --++ Last updated 06/24/2022-+
active-directory Signin Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/signin-realm-discovery.md
Title: Username lookup during sign-in - Azure Active Directory | Microsoft Docs description: How on-screen messaging reflects username lookup during sign-in in Azure Active Directory --++ Last updated 06/24/2022-+
active-directory Users Bulk Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-add.md
Title: Bulk create users in the Azure Active Directory portal | Microsoft Docs description: Add users in bulk in the Azure AD admin center in Azure Active Directory ---+++ Last updated 06/24/2022
active-directory Users Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-delete.md
Title: Bulk delete users in the Azure Active Directory portal | Microsoft Docs description: Delete users in bulk in the Azure admin center in Azure Active Directory ---+++ Last updated 06/24/2022
active-directory Users Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-download.md
Title: Download a list of users in the Azure Active Directory portal | Microsoft Docs description: Download user records in bulk in the Azure admin center in Azure Active Directory. ---+++ Last updated 06/24/2022
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
Title: Bulk restore deleted users in the Azure Active Directory portal | Microsoft Docs description: Restore deleted users in bulk in the Azure AD admin center in Azure Active Directory ---+++ Last updated 06/24/2022
active-directory Users Close Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-close-account.md
Title: Close a work or school account in an unmanaged Azure AD organization
description: How to close your work or school account in an unmanaged Azure Active Directory. -+
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
Title: Restrict guest user access permissions - Azure Active Directory | Microsoft Docs description: Restrict guest user access permissions using the Azure portal, PowerShell, or Microsoft Graph in Azure Active Directory ---+++ Last updated 06/24/2022
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
---+++ Last updated 06/24/2022
active-directory Users Search Enhanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-search-enhanced.md
Title: User management enhancements - Azure Active Directory | Microsoft Docs
description: Describes how Azure Active Directory enables user search, filtering, and more information about your users. documentationcenter: ''--++ editor: ''- Last updated 06/24/2022-+
active-directory Users Sharing Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-sharing-accounts.md
Title: Sharing accounts and credentials - Azure Active Directory | Microsoft Docs
description: Describes how Azure Active Directory enables organizations to securely share accounts for on-premises apps and consumer cloud services. documentationcenter: ''--++ editor: ''- Last updated 06/24/2022-+
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md
Non-Windows workstations can be integrated with Azure AD to enhance user experie
* macOS
- * [Set up enrollment for macOS devices - Microsoft Intune](/mem/intune/enrollment/macos-enroll)
+ * Register macOS to Azure AD and [enroll/manage them with MDM solution](/mem/intune/enrollment/macos-enroll)
- * Deploy [Microsoft Enterprise SSO plug-in for Apple devices - Microsoft identity platform | Azure](../develop/apple-sso-plugin.md)
+ * Deploy [Microsoft Enterprise SSO plug-in for Apple devices](../develop/apple-sso-plugin.md)
-* Linux
+ * Plan to deploy [Platform SSO for macOS 13](https://techcommunity.microsoft.com/t5/microsoft-endpoint-manager-blog/microsoft-simplifies-endpoint-manager-enrollment-for-apple/ba-p/3570319)
- * Consider Linux on Azure VM where possible
+* Linux
- * [Sign in to a Linux VM with Azure Active Directory credentials - Azure Virtual Machines](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md)
+ * [Sign in to a Linux VM with Azure Active Directory credentials](../../active-directory/devices/howto-vm-sign-in-azure-ad-linux.md) is available on Linux on Azure VM
### Replace Other Windows versions as Workstation use
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
- # What are the default user permissions in Azure Active Directory?+ In Azure Active Directory (Azure AD), all users are granted a set of default permissions. A user's access consists of the type of user, their [role assignments](active-directory-users-assign-role-azure-portal.md), and their ownership of individual objects. This article describes those default permissions and compares the member and guest user defaults. The default user permissions can be changed only in user settings in Azure AD.
For example, a university has many users in its directory. The admin might not w
You can restrict default permissions for member users in the following ways:
-Permission | Setting explanation
-- |
-**Register applications** | Setting this option to **No** prevents users from creating application registrations. You can the grant the ability back to specific individuals by adding them to the application developer role.
-**Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md).
-**Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md).
-**Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md).
-**Access the Azure AD administration portal** | <p>Setting this option to **No** lets non-administrators use the Azure AD administration portal to read and manage Azure AD resources. **Yes** restricts all non-administrators from accessing any Azure AD data in the administration portal.</p><p>This setting does not restrict access to Azure AD data by using PowerShell or other clients such as Visual Studio. When you set this option to **Yes** to grant a specific non-admin user the ability to use the Azure AD administration portal, assign any administrative role such as the directory reader role.</p><p>The directory reader role allows reading basic directory information. Member users have it by default. Guests and service principals don't.</p><p>This settings blocks non-admin users who are owners of groups or applications from using the Azure portal to manage their owned resources. This setting does not restrict access as long as a user is assigned a custom role (or any role) and is not just a user.</p>
-**Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag does not prevent reading user information in other Microsoft services like Exchange Online.</p><p>This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`.
+| Permission | Setting explanation |
+| - | |
+| **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can then grant the ability back to specific individuals by adding them to the application developer role. |
+| **Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md). |
+| **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
+| **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
+| **Restrict access to Azure AD administration portal** | **What does this switch do?** <br>**No** lets non-administrators browse the Azure AD administration portal. <br>**Yes** restricts non-administrators from browsing the Azure AD administration portal. Non-administrators who are owners of groups or applications are unable to use the Azure portal to manage their owned resources. </p><p></p><p>**What does it not do?** <br> It does not restrict access to Azure AD data using PowerShell or other clients such as Visual Studio. <br>It does not restrict access as long as a user is assigned a custom role (or any role). <br>It does not restrict access to the Entra portal. </p><p></p><p>**When should I use this switch?** <br>Use this to prevent users from misconfiguring the resources that they own. </p><p></p><p>**When should I not use this switch?** <br>Do not use this switch as a security measure. Instead, create a Conditional Access policy that targets [Microsoft Azure Management](../conditional-access/concept-conditional-access-cloud-apps.md#microsoft-azure-management) to block non-administrator access. </p><p></p><p> **How do I grant only specific non-administrator users the ability to use the Azure AD administration portal?** <br> Set this option to **Yes**, then assign them a role like global reader. </p><p></p><p>**Restrict access to the Entra administration portal** <br>A Conditional Access policy that targets Microsoft Azure Management applies to all Azure management access, including the Entra administration portal. |
+| **Read other users** | This setting is available in Microsoft Graph and PowerShell only. Setting this flag to `$false` prevents all non-admins from reading user information from the directory. This flag does not prevent reading user information in other Microsoft services like Exchange Online. This setting is meant for special circumstances, so we don't recommend setting the flag to `$false`. A minimal Microsoft Graph sketch for this flag appears after the note below. |
->[!NOTE]
->It's assumed that the average user would only use the portal to access Azure AD, and not use PowerShell or the Azure CLI to access their resources. Currently, restricting access to users' default permissions occurs only when users try to access the directory within the Azure portal.
+> [!NOTE]
+> It's assumed that the average user would only use the portal to access Azure AD, and not use PowerShell or the Azure CLI to access their resources. Currently, restricting access to users' default permissions occurs only when users try to access the directory within the Azure portal.
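The **Read other users** flag in the table above is exposed through the Microsoft Graph authorization policy. The sketch below is illustrative only (it isn't part of the article) and assumes the `Policy.ReadWrite.Authorization` permission has been granted.

```powershell
# Illustrative sketch: restrict non-admins from reading other users (use with care).
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

$body = @{
    defaultUserRolePermissions = @{
        allowedToReadOtherUsers = $false   # the "Read other users" flag described above
    }
} | ConvertTo-Json -Depth 5

# authorizationPolicy is a tenant-wide singleton
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy" `
    -Body $body -ContentType "application/json"
```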
## Restrict guest users' default permissions
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
Azure AD customers can now easily design and issue verifiable credentials. Verif
**Service category:** User Authentication **Product capability:** Authentications (Logins)
-As a security improvement, the [device code flow](../develop/v2-oauth2-device-code.md) has been updated to include an another prompt, which validates that the user is signing into the app they expect. The rollout is planned to start in June and expected to be complete by June 30.
+As a security improvement, the [device code flow](../develop/v2-oauth2-device-code.md) has been updated to include another prompt, which validates that the user is signing into the app they expect. The rollout is planned to start in June and expected to be complete by June 30.
To help prevent phishing attacks where an attacker tricks the user into signing into a malicious application, the following prompt is being added: "Are you trying to sign in to [application display name]?". All users will see this prompt while signing in using the device code flow. As a security measure, it cannot be removed or bypassed. [Learn more](../develop/reference-breaking-changes.md#the-device-code-flow-ux-will-now-include-an-app-confirmation-prompt).
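For context, the device code flow begins with a request to the device authorization endpoint; the new confirmation prompt appears when the user completes sign-in in the browser. The sketch below is illustrative only; the tenant ID, client ID, and scope are placeholders.

```powershell
# Illustrative sketch: start the device code flow (placeholders, not a registered app).
$resp = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/devicecode" `
    -Body @{ client_id = "<client-id>"; scope = "User.Read" }

# The user opens the verification URI, enters the code, and now also confirms
# "Are you trying to sign in to <application display name>?" before sign-in completes.
$resp.message
```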
For more information, please see [User management enhancements (preview) in Azur
**Service category:** Enterprise Apps **Product capability:** SSO
-You can add free text notes to Enterprise applications. You can add any relevant information that will help you manager applications under Enterprise applications. For more information, see [Quickstart: Configure properties for an application in your Azure Active Directory (Azure AD) tenant](../manage-apps/add-application-portal-configure.md).
+You can add free text notes to Enterprise applications. You can add any relevant information that will help you manage applications under Enterprise applications. For more information, see [Quickstart: Configure properties for an application in your Azure Active Directory (Azure AD) tenant](../manage-apps/add-application-portal-configure.md).
For listing your application in the Azure AD app gallery, please read the detail
External Identities API connectors enable you to leverage web APIs to integrate self-service sign-up with external cloud systems. This means you can now invoke web APIs as specific steps in a sign-up flow to trigger cloud-based custom workflows. For example, you can use API connectors to: -- Integrate with a custom approval workflows.
+- Integrate with custom approval workflows
- Perform identity proofing - Validate user input data - Overwrite user attributes
This bug fix will be rolled out gradually over approximately 2 months.
On 1 June 2018, the official Azure Active Directory (Azure AD) Authority for Azure Government changed from https://login-us.microsoftonline.com to https://login.microsoftonline.us. If you own an application within an Azure Government tenant, you must update your application to sign users in on the .us endpoint.
-Starting May 5th, Azure AD will begin enforcing the endpoint change, blocking Azure Government users from signing into apps hosted in Azure Government tenants using the public endpoint (microsoftonline.com). affected apps will begin seeing an error AADSTS900439 - USGClientNotSupportedOnPublicEndpoint.
+Starting May 5th, Azure AD will begin enforcing the endpoint change, blocking Azure Government users from signing into apps hosted in Azure Government tenants using the public endpoint (microsoftonline.com). Affected apps will begin seeing an error AADSTS900439 - USGClientNotSupportedOnPublicEndpoint.
There will be a gradual rollout of this change with enforcement expected to be complete across all apps June 2020. For more details, please see the [Azure Government blog post](https://devblogs.microsoft.com/azuregov/azure-government-aad-authority-endpoint-update/).
For listing your application in the Azure AD app gallery, please read the detail
**Service category:** Conditional Access **Product capability:** Identity Security & Protection
-[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their affect before enabling them, making deployment safer and easier. Over the past few months, weΓÇÖve seen strong adoption of report-only modeΓÇöover 26M users are already in scope of a report-only policy. With the announcement today, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the affect of your policies from the moment theyΓÇÖre created. And for those of you who use the MS Graph APIs, you can [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy) as well.
+[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their effect before enabling them, making deployment safer and easier. Over the past few months, weΓÇÖve seen strong adoption of report-only modeΓÇöover 26M users are already in scope of a report-only policy. With the announcement today, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the effect of your policies from the moment theyΓÇÖre created. And for those of you who use the MS Graph APIs, you can [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy) as well.
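As a rough illustration of managing report-only policies programmatically, the sketch below creates a policy whose `state` is `enabledForReportingButNotEnforced`; the display name and targets are placeholders, and the `Policy.ReadWrite.ConditionalAccess` permission is assumed.

```powershell
# Illustrative sketch: create a Conditional Access policy in report-only mode.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Report-only - require MFA for all users"   # placeholder name
    state       = "enabledForReportingButNotEnforced"         # report-only mode
    conditions  = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("mfa") }
} | ConvertTo-Json -Depth 10

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Body $policy -ContentType "application/json"
```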
We're expanding B2B invitation capability to allow existing internal accounts to
**Product capability:** Identity Security & Protection
-[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their affect before enabling them, making deployment safer and easier. Over the past few months, weΓÇÖve seen strong adoption of report-only mode, with over 26M users already in scope of a report-only policy. With this announcement, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the affect of your policies from the moment theyΓÇÖre created. And for those of you who use the MS Graph APIs, you can also [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy).
+[Report-only mode for Azure AD Conditional Access](../conditional-access/concept-conditional-access-report-only.md) lets you evaluate the result of a policy without enforcing access controls. You can test report-only policies across your organization and understand their effect before enabling them, making deployment safer and easier. Over the past few months, weΓÇÖve seen strong adoption of report-only mode, with over 26M users already in scope of a report-only policy. With this announcement, new Azure AD Conditional Access policies will be created in report-only mode by default. This means you can monitor the effect of your policies from the moment theyΓÇÖre created. And for those of you who use the MS Graph APIs, you can also [manage report-only policies programmatically](/graph/api/resources/conditionalaccesspolicy).
For more information about the new cookies, see [Cookie settings for accessing o
In January 2019, we've added these 35 new apps with Federation support to the app gallery:
-[Firstbird](../saas-apps/firstbird-tutorial.md), [Folloze](../saas-apps/folloze-tutorial.md), [Talent Palette](../saas-apps/talent-palette-tutorial.md), [Infor CloudSuite](../saas-apps/infor-cloud-suite-tutorial.md), [Cisco Umbrella](../saas-apps/cisco-umbrella-tutorial.md), [Zscaler Internet Access Administrator](../saas-apps/zscaler-internet-access-administrator-tutorial.md), [Expiration Reminder](../saas-apps/expiration-reminder-tutorial.md), [InstaVR Viewer](../saas-apps/instavr-viewer-tutorial.md), [CorpTax](../saas-apps/corptax-tutorial.md), [Verb](https://app.verb.net/login), [OpenLattice](https://openlattice.com/#/), [TheOrgWiki](https://www.theorgwiki.com/signup), [Pavaso Digital Close](../saas-apps/pavaso-digital-close-tutorial.md), [GoodPractice Toolkit](../saas-apps/goodpractice-toolkit-tutorial.md), [Cloud Service PICCO](../saas-apps/cloud-service-picco-tutorial.md), [AuditBoard](../saas-apps/auditboard-tutorial.md), [iProva](../saas-apps/iprova-tutorial.md), [Workable](../saas-apps/workable-tutorial.md), [CallPlease](https://webapp.callplease.com/create-account/create-account.html), [GTNexus SSO System](../saas-apps/gtnexus-sso-module-tutorial.md), [CBRE ServiceInsight](../saas-apps/cbre-serviceinsight-tutorial.md), [Deskradar](../saas-apps/deskradar-tutorial.md), [Coralogixv](../saas-apps/coralogix-tutorial.md), [Signagelive](../saas-apps/signagelive-tutorial.md), [ARES for Enterprise](../saas-apps/ares-for-enterprise-tutorial.md), [K2 for Office 365](https://www.k2.com/O365), [Xledger](https://www.xledger.net/), [IDID Manager](../saas-apps/idid-manager-tutorial.md), [HighGear](../saas-apps/highgear-tutorial.md), [Visitly](../saas-apps/visitly-tutorial.md), [Korn Ferry ALP](../saas-apps/korn-ferry-alp-tutorial.md), [Acadia](../saas-apps/acadia-tutorial.md), [Adoddle cSaas Platform](../saas-apps/adoddle-csaas-platform-tutorial.md)
+[Firstbird](../saas-apps/firstbird-tutorial.md), [Folloze](../saas-apps/folloze-tutorial.md), [Talent Palette](../saas-apps/talent-palette-tutorial.md), [Infor CloudSuite](../saas-apps/infor-cloud-suite-tutorial.md), [Cisco Umbrella](../saas-apps/cisco-umbrella-tutorial.md), [Zscaler Internet Access Administrator](../saas-apps/zscaler-internet-access-administrator-tutorial.md), [Expiration Reminder](../saas-apps/expiration-reminder-tutorial.md), [InstaVR Viewer](../saas-apps/instavr-viewer-tutorial.md), [CorpTax](../saas-apps/corptax-tutorial.md), [Verb](https://app.verb.net/login), [OpenLattice](https://openlattice.com/#/), [TheOrgWiki](https://www.theorgwiki.com/signup), [Pavaso Digital Close](../saas-apps/pavaso-digital-close-tutorial.md), [GoodPractice Toolkit](../saas-apps/goodpractice-toolkit-tutorial.md), [Cloud Service PICCO](../saas-apps/cloud-service-picco-tutorial.md), [AuditBoard](../saas-apps/auditboard-tutorial.md), [Zenya](../saas-apps/zenya-tutorial.md), [Workable](../saas-apps/workable-tutorial.md), [CallPlease](https://webapp.callplease.com/create-account/create-account.html), [GTNexus SSO System](../saas-apps/gtnexus-sso-module-tutorial.md), [CBRE ServiceInsight](../saas-apps/cbre-serviceinsight-tutorial.md), [Deskradar](../saas-apps/deskradar-tutorial.md), [Coralogix](../saas-apps/coralogix-tutorial.md), [Signagelive](../saas-apps/signagelive-tutorial.md), [ARES for Enterprise](../saas-apps/ares-for-enterprise-tutorial.md), [K2 for Office 365](https://www.k2.com/O365), [Xledger](https://www.xledger.net/), [IDID Manager](../saas-apps/idid-manager-tutorial.md), [HighGear](../saas-apps/highgear-tutorial.md), [Visitly](../saas-apps/visitly-tutorial.md), [Korn Ferry ALP](../saas-apps/korn-ferry-alp-tutorial.md), [Acadia](../saas-apps/acadia-tutorial.md), [Adoddle cSaas Platform](../saas-apps/adoddle-csaas-platform-tutorial.md)
For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Under **Admin consent requests**, select **Yes** for **Users can request admin
> [!NOTE] > You can add or remove reviewers for this workflow by modifying the **Select admin consent requests reviewers** list. A current limitation of this feature is that a reviewer can retain the ability to review requests that were made while they were designated as a reviewer.
+## Configure the admin consent workflow using Microsoft Graph
+
+To configure the admin consent workflow programmatically, use the [Update adminConsentRequestPolicy](/graph/api/adminconsentrequestpolicy-update) API in Microsoft Graph.
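The sketch below shows an illustrative call shape only (the reviewer ID and values are placeholders) and assumes the `Policy.ReadWrite.ConsentRequest` permission.

```powershell
# Illustrative sketch: enable and configure the admin consent request workflow.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConsentRequest"

$policy = @{
    isEnabled             = $true
    notifyReviewers       = $true
    remindersEnabled      = $true
    requestDurationInDays = 30
    reviewers             = @(
        @{ query = "/users/<reviewer-object-id>"; queryType = "MicrosoftGraph" }  # placeholder reviewer
    )
} | ConvertTo-Json -Depth 10

Invoke-MgGraphRequest -Method PUT `
    -Uri "https://graph.microsoft.com/v1.0/policies/adminConsentRequestPolicy" `
    -Body $policy -ContentType "application/json"
```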
+ ## Next steps [Grant tenant-wide admin consent to an application](grant-admin-consent.md)
active-directory Datawiza Azure Ad Sso Oracle Jde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-jde.md
+
+ Title: Configure Azure AD Multi-Factor Authentication and SSO for Oracle JD Edwards applications using Datawiza Access Broker
+description: Enable Azure Active Directory Multi-Factor Authentication and SSO for Oracle JD Edwards application using Datawiza Access Broker
+++++++ Last updated : 7/20/2022++++
+# Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards
+
+This tutorial shows how to enable Azure Active Directory (Azure AD) single sign-on (SSO) and Azure AD Multi-Factor Authentication for an Oracle JD Edwards (JDE) application using Datawiza Access Broker (DAB).
+
+Benefits of integrating applications with Azure AD using DAB include:
+
+- [Proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) through [Azure AD SSO](https://azure.microsoft.com/solutions/active-directory-sso/OCID=AIDcmm5edswduu_SEM_e13a1a1787ce1700761a78c235ae5906:G:s&ef_id=e13a1a1787ce1700761a78c235ae5906:G:s&msclkid=e13a1a1787ce1700761a78c235ae5906#features), [Azure AD Multi-Factor Authentication](https://docs.microsoft.com/azure/active-directory/authentication/concept-mfa-howitworks) and
+ [Conditional Access](https://docs.microsoft.com/azure/active-directory/conditional-access/overview).
+
+- [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/). Use it with web applications such as Oracle JDE, Oracle E-Business Suite, Oracle Siebel, Oracle PeopleSoft, and home-grown apps.
+
+- Use the [Datawiza Cloud Management Console](https://console.datawiza.com), to manage access to applications in public clouds and on-premises.
+
+## Scenario description
+
+This scenario focuses on Oracle JDE application integration using HTTP authorization headers to manage access to protected content.
+
+In legacy applications, due to the absence of modern protocol support, a direct integration with Azure AD SSO is difficult. Datawiza Access Broker (DAB) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning. DAB lowers integration overhead, saves engineering time, and improves application security.
+
+## Scenario architecture
+
+The scenario solution has the following components:
+
+- **Azure AD**: The Microsoft cloud-based identity and access management service, which helps users sign in and access external and internal resources.
+
+- **Oracle JDE application**: Legacy application protected by Azure AD.
+
+- **Datawiza Access Broker (DAB)**: A lightweight container-based reverse-proxy that implements OpenID Connect (OIDC), OAuth, or Security Assertion Markup Language (SAML) for user sign-in flow. It transparently passes identity to applications through HTTP headers.
+
+- **Datawiza Cloud Management Console (DCMC)**: A centralized console to manage DAB. DCMC has UI and RESTful APIs for administrators to configure Datawiza Access Broker and access control policies.
+
+Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication
+architecture](https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
+
+## Prerequisites
+
+Ensure the following prerequisites are met.
+
+- An Azure subscription. If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free)
+
+- An Azure AD tenant linked to the Azure subscription.
+ - See, [Quickstart: Create a new tenant in Azure Active Directory.](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
+
+- Docker and Docker Compose
+
+ - Go to docs.docker.com to [Get Docker](https://docs.docker.com/get-docker) and [Install Docker Compose](https://docs.docker.com/compose/install).
+
+- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.
+
+ - See, [Azure AD Connect sync: Understand and customize synchronization](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-whatis).
+
+- An account with Azure AD and the Application administrator role
+
+ - See, [Azure AD built-in roles, all roles](https://docs.microsoft.com/azure/active-directory/roles/permissions-reference#all-roles).
+
+- An Oracle JDE environment
+
+- (Optional) An SSL web certificate to publish services over HTTPS. You can also use default Datawiza self-signed certs for testing.
+
+## Getting started with DAB
+
+To integrate Oracle JDE with Azure AD:
+
+1. Sign in to [Datawiza Cloud Management Console.](https://console.datawiza.com/)
+
+2. The Welcome page appears.
+
+3. Select the orange **Getting started** button.
+
+ ![Screenshot that shows the getting started page.](media/datawiza-azure-ad-sso-oracle-jde/getting-started.png)
++
+4. In the **Name** and **Description** fields, enter the relevant information.
+
+5. Select **Next**.
+
+ ![Screenshot that shows the name and description fields.](media/datawiza-azure-ad-sso-oracle-jde/name-description-field.png)
++
+6. On the **Add Application** dialog, use the following values:
+
+ | Property| Value|
+ |:--|:-|
+ | Platform | Web |
+ | App Name | Enter a unique application name.|
 + | Public Domain | For example: https://jde-external.example.com. <br>For testing, you can use localhost DNS. If you aren't deploying DAB behind a load balancer, use the **Public Domain** port. |
+ | Listen Port | The port that DAB listens on.|
+ | Upstream Servers | The Oracle JDE implementation URL and port to be protected.|
+
+7. Select **Next**.
+
+ ![Screenshot that shows how to add application.](media/datawiza-azure-ad-sso-oracle-jde/add-application.png)
++
+8. On the **Configure IdP** dialog, enter the relevant information.
+
+ >[!Note]
+ >DCMC has [one-click integration](https://docs.datawiza.com/tutorial/web-app-azure-one-click.html) to help complete Azure AD configuration. DCMC calls the Graph API to create an application registration on your behalf in your Azure AD tenant.
+
+9. Select **Create**.
+
+ ![Screenshot that shows how to create I d P.](media/datawiza-azure-ad-sso-oracle-jde/configure-idp.png)
++
+10. The DAB deployment page appears.
+
+11. Make a note of the deployment Docker Compose file. The file includes the DAB image, as well as the provisioning key and provisioning secret, which pull the latest configuration and policies from DCMC.
+
+ ![Screenshot that shows the docker compose file value.](media/datawiza-azure-ad-sso-oracle-jde/provision.png)
++
+## SSO and HTTP headers
+
+DAB gets user attributes from IdP and passes them to the upstream application with a header or cookie.
+
+For the Oracle JDE application to recognize the user correctly, there's another configuration step: it instructs DAB to pass the values from the IdP to the application through an HTTP header with a specific name.
+
+1. In Oracle JDE, from the left navigation, select **Applications**.
+
+2. Select the **Attribute Pass** subtab.
+
+3. Use the following values.
+
+ | Property| Value |
+ |:--|:-|
+ | Field | Email |
+ | Expected | JDE_SSO_UID |
+ | Type | Header |
+
+ ![Screenshot that shows the attributes that need to be passed for the Oracle JDE application.](media/datawiza-azure-ad-sso-oracle-jde/add-new-attribute.png)
++
+ >[!Note]
 + >This configuration uses the Azure AD user principal name as the sign-in username used by Oracle JDE. To use another user identity, go to the **Mappings** tab.
+
+ ![Screenshot that shows the user principal name field as the username.](media/datawiza-azure-ad-sso-oracle-jde/user-principal-name-mapping.png)
++
+4. Select the **Advanced** tab.
+
+ ![Screenshot that shows the advanced fields.](media/datawiza-azure-ad-sso-oracle-jde/advanced-attributes.png)
++
+ ![Screenshot that shows the new attribute.](media/datawiza-azure-ad-sso-oracle-jde/add-new-attribute.png)
++
+5. Select **Enable SSL**.
+
+6. From the **Cert Type** dropdown, select a type.
+
+ ![Screenshot that shows the cert type dropdown.](media/datawiza-azure-ad-sso-oracle-jde/cert-type.png)
++
+7. For testing purposes, we'll be providing a self-signed certificate.
+
+ ![Screenshot that shows the enable SSL menu.](media/datawiza-azure-ad-sso-oracle-jde/enable-ssl.png)
++
+ >[!NOTE]
+ >You have the option to upload a certificate from a file.
+
+ ![Screenshot that shows uploading cert from a file option.](media/datawiza-azure-ad-sso-oracle-jde/upload-cert.png)
++
+8. Select **Save**.
+
+## Enable Azure AD Multi-Factor Authentication
+
+To provide an extra level of security for sign-ins, enforce multifactor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure portal](https://docs.microsoft.com/azure/active-directory/authentication/tutorial-enable-azure-mfa).
+
+1. Sign in to the Azure portal as a **Global Administrator**.
+
+2. Select **Azure Active Directory** > **Manage** > **Properties**.
+
+3. Under **Properties**, select **Manage security defaults**.
+
+4. Under **Enable Security defaults**, select **Yes** and then **Save**.
+
+## Enable SSO in the Oracle JDE EnterpriseOne Console
+
+To enable SSO in the Oracle JDE environment:
+
+1. Sign in to the Oracle JDE EnterpriseOne Server Manager Management Console as an **Administrator**.
+
+2. In **Select Instance**, select the option above **EnterpriseOne HTML Server**.
+
+3. In the **Configuration** tile, select **View as Advanced**, and then select **Security**.
+
+4. Select the **Enable Oracle Access Manager** checkbox.
+
+5. In the **Oracle Access Manager Sign-Off URL** field, enter **datawiza/ab-logout**.
+
+6. In the **Security Server Configuration** section, select **Apply**.
+
+7. Select **Stop** to confirm you want to stop the managed instance.
+
+ >[!NOTE]
+ >If a message shows the web server configuration (jas.ini) is out-of-date, select **Synchronize Configuration**.
+
+8. Select **Start** to confirm you want to start the managed instance.
+
+## Test an Oracle JDE-based application
+
+Testing validates the application behaves as expected for URIs. To test an Oracle JDE application, you validate application headers, policy, and overall testing. If needed, use header and policy simulation to validate header fields and policy execution.
+
+To confirm Oracle JDE application access occurs correctly, a prompt appears to use an Azure AD account for sign-in. Credentials are checked and the Oracle JDE appears.
+
+## Next steps
+
+- [Watch the video - Enable SSO/MFA for Oracle JDE with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90).
+
+- [Configure Datawiza and Azure AD for secure hybrid access](https://docs.microsoft.com/azure/active-directory/manage-apps/datawiza-with-azure-ad)
+
+- [Configure Datawiza with Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/partner-datawiza)
+
+- [Datawiza documentation](https://docs.datawiza.com/)
active-directory Datawiza With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-with-azure-ad.md
DAB evaluates policies, calculates headers, and sends you to the upstream application.
- [Configure Datawiza with Azure AD B2C](../../active-directory-b2c/partner-datawiza.md)
+- [Configure Azure AD Multi-Factor Authentication and SSO for Oracle JDE applications using DAB](datawiza-azure-ad-sso-oracle-jde.md)
+ - [Datawiza documentation](https://docs.datawiza.com)
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Last updated 10/23/2021 -
+zone_pivot_groups: enterprise-apps-minus-graph
#customer intent: As an admin, I want to review permissions granted to applications so that I can restrict suspicious or over privileged applications.
To review permissions granted to applications, you need:
- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator. - A Service principal owner who isn't an administrator is able to invalidate refresh tokens.
+## Review application permissions
-You can access the Azure AD portal to get contextual PowerShell scripts to perform the actions.
-## Review application permissions
+You can access the Azure AD portal to get contextual PowerShell scripts to perform the actions.
To review application permissions:
To review application permissions:
Each option generates PowerShell scripts that enable you to control user access to the application and to review permissions granted to the application. For information about how to control user access to an application, see [How to remove a user's access to an application](methods-for-removing-user-access.md)
-## Revoke permissions using PowerShell commands
-Using the following PowerShell script revokes all permissions granted to this application.
+
+The following Azure AD PowerShell script revokes all permissions granted to an application.
```powershell Connect-AzureAD
$spApplicationPermissions | ForEach-Object {
} ```
-> [!NOTE]
-> Revoking the current granted permission won't stop users from re-consenting to the application. If you want to block users from consenting, read [Configure how users consent to applications](configure-user-consent.md).
- ## Invalidate the refresh tokens ```powershell
$assignments | ForEach-Object {
Revoke-AzureADUserAllRefreshToken -ObjectId $_.PrincipalId } ```+
+The following Microsoft Graph PowerShell script revokes all permissions granted to an application.
+
+```powershell
+Connect-MgGraph
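+# Assumption: connect with a scope that can manage delegated permission grants, for example:
+# Connect-MgGraph -Scopes "DelegatedPermissionGrant.ReadWrite.All"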
+
+# Get Service Principal using objectId
+$sp = Get-MgServicePrincipal -ServicePrincipalID "$ServicePrincipalID"
+
+# Example: Get-MgServicePrincipal -ServicePrincipalId '22c1770d-30df-49e7-a763-f39d2ef9b369'
+
+# Get all application permissions for the service principal
+$spOAuth2PermissionsGrants = Get-MgOauth2PermissionGrant -All | Where-Object { $_.ClientId -eq $sp.Id }
+
+# Remove all delegated permissions
+$spOAuth2PermissionsGrants | ForEach-Object {
+ Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId $_.Id
+ }
+```
+
+## Invalidate the refresh tokens
+
+```powershell
+Connect-MgGraph
+
+# Get Service Principal using objectId
+$sp = Get-MgServicePrincipal -ServicePrincipalID "$ServicePrincipalID"
+
+# Example: Get-MgServicePrincipal -ServicePrincipalId '22c1770d-30df-49e7-a763-f39d2ef9b369'
+
+# Get all users assigned to the application, using the objectID of the Service Principal
+$spApplicationUsers = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id -All | Where-Object { $_.PrincipalType -eq "User" }
+
+# Revoke refresh tokens (sign-in sessions) for all users assigned to the application
+ $spApplicationUsers | ForEach-Object {
+ Revoke-MgUserSignInSession -UserId $_.PrincipalId
+ }
+```
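As a quick check, assuming the same `$sp` variable from the script above, the following sketch lists anything that still remains after the cleanup:

```powershell
# Any remaining delegated permission grants for the service principal
Get-MgOauth2PermissionGrant -All | Where-Object { $_.ClientId -eq $sp.Id }

# Any remaining users, groups, or service principals assigned to the application
Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId $sp.Id -All
```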
++
+> [!NOTE]
+> Revoking the current granted permission won't stop users from re-consenting to the application. If you want to block users from consenting, read [Configure how users consent to applications](configure-user-consent.md).
## Next steps
active-directory Tutorial Linux Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-cosmos-db.md
The CURL response gives you the list of Keys. For example, if you get the read-
"secondaryReadonlyMasterKey":"38v5ns...7bA=="} ```
-Now that you have the access key for the Cosmos DB account you can pass it to a Cosmos DB SDK and make calls to access the account. For a quick example, you can pass the access key to the Azure CLI. You can get the `<COSMOS DB CONNECTION URL>` from the **Overview** tab on the Cosmos DB account blade in the Azure portal. Replace the `<ACCESS KEY>` with the value you obtained above:
-
-```azurecli-interactive
-az cosmosdb collection show -c <COLLECTION ID> -d <DATABASE ID> --url-connection "<COSMOS DB CONNECTION URL>" --key <ACCESS KEY>
-```
-
-This CLI command returns details about the collection:
-
-```output
-{
- "collection": {
- "_conflicts": "conflicts/",
- "_docs": "docs/",
- "_etag": "\"00006700-0000-0000-0000-5a8271e90000\"",
- "_rid": "Es5SAM2FDwA=",
- "_self": "dbs/Es5SAA==/colls/Es5SAM2FDwA=/",
- "_sprocs": "sprocs/",
- "_triggers": "triggers/",
- "_ts": 1518498281,
- "_udfs": "udfs/",
- "id": "Test",
- "indexingPolicy": {
- "automatic": true,
- "excludedPaths": [],
- "includedPaths": [
- {
- "indexes": [
- {
- "dataType": "Number",
- "kind": "Range",
- "precision": -1
- },
- {
- "dataType": "String",
- "kind": "Range",
- "precision": -1
- },
- {
- "dataType": "Point",
- "kind": "Spatial"
- }
- ],
- "path": "/*"
- }
- ],
- "indexingMode": "consistent"
- }
- },
- "offer": {
- "_etag": "\"00006800-0000-0000-0000-5a8271ea0000\"",
- "_rid": "f4V+",
- "_self": "offers/f4V+/",
- "_ts": 1518498282,
- "content": {
- "offerIsRUPerMinuteThroughputEnabled": false,
- "offerThroughput": 400
- },
- "id": "f4V+",
- "offerResourceId": "Es5SAM2FDwA=",
- "offerType": "Invalid",
- "offerVersion": "V2",
- "resource": "dbs/Es5SAA==/colls/Es5SAM2FDwA=/"
- }
-}
-```
+Now that you have the access key for the Cosmos DB account, you can pass it to a Cosmos DB SDK and make calls to access the account.
## Next steps
active-directory Tutorial Windows Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-cosmos-db.md
The response gives you the list of Keys. For example, if you get read-only keys
{"primaryReadonlyMasterKey":"bWpDxS...dzQ==", "secondaryReadonlyMasterKey":"38v5ns...7bA=="} ```
-Now that you have the access key for the Cosmos DB account you can pass it to a Cosmos DB SDK and make calls to access the account. For a quick example, you can pass the access key to the Azure CLI. You can get the `<COSMOS DB CONNECTION URL>` from the **Overview** tab on the Cosmos DB account blade in the Azure portal. Replace the `<ACCESS KEY>` with the value you obtained above:
-
-```azurecli
-az cosmosdb collection show -c <COLLECTION ID> -d <DATABASE ID> --url-connection "<COSMOS DB CONNECTION URL>" --key <ACCESS KEY>
-```
-
-This CLI command returns details about the collection:
-
-```output
-{
- "collection": {
- "_conflicts": "conflicts/",
- "_docs": "docs/",
- "_etag": "\"00006700-0000-0000-0000-5a8271e90000\"",
- "_rid": "Es5SAM2FDwA=",
- "_self": "dbs/Es5SAA==/colls/Es5SAM2FDwA=/",
- "_sprocs": "sprocs/",
- "_triggers": "triggers/",
- "_ts": 1518498281,
- "_udfs": "udfs/",
- "id": "Test",
- "indexingPolicy": {
- "automatic": true,
- "excludedPaths": [],
- "includedPaths": [
- {
- "indexes": [
- {
- "dataType": "Number",
- "kind": "Range",
- "precision": -1
- },
- {
- "dataType": "String",
- "kind": "Range",
- "precision": -1
- },
- {
- "dataType": "Point",
- "kind": "Spatial"
- }
- ],
- "path": "/*"
- }
- ],
- "indexingMode": "consistent"
- }
- },
- "offer": {
- "_etag": "\"00006800-0000-0000-0000-5a8271ea0000\"",
- "_rid": "f4V+",
- "_self": "offers/f4V+/",
- "_ts": 1518498282,
- "content": {
- "offerIsRUPerMinuteThroughputEnabled": false,
- "offerThroughput": 400
- },
- "id": "f4V+",
- "offerResourceId": "Es5SAM2FDwA=",
- "offerType": "Invalid",
- "offerVersion": "V2",
- "resource": "dbs/Es5SAA==/colls/Es5SAM2FDwA=/"
- }
-}
-```
+Now that you have the access key for the Cosmos DB account, you can pass it to a Cosmos DB SDK and make calls to access the account.
## Disable [!INCLUDE [msi-tut-disable](../../../includes/active-directory-msi-tut-disable.md)] -- ## Next steps In this tutorial, you learned how to use a Windows VM system-assigned identity to access Cosmos DB. To learn more about Cosmos DB see:
active-directory Iprova Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/iprova-provisioning-tutorial.md
- Title: 'Tutorial: Configure iProva for automatic user provisioning with Azure Active Directory | Microsoft Docs'
-description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to iProva.
--
-writer: twimmers
----- Previously updated : 10/29/2019---
-# Tutorial: Configure iProva for automatic user provisioning
-
-The objective of this tutorial is to demonstrate the steps to be performed in iProva and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [iProva](https://www.iProva.com/). For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). Before you attempt to use this tutorial, be sure that you know and meet all requirements. If you have questions, please contact Infoland.
-
-> [!NOTE]
-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
--
-## Capabilities supported
-> [!div class="checklist"]
-> * Create users in iProva
-> * Remove/disable users in iProva when they do not require access anymore
-> * Keep user attributes synchronized between Azure AD and iProva
-> * Provision groups and group memberships in iProva
-> * [Single sign-on](./iprova-tutorial.md) to iProva (recommended)
-
-## Prerequisites
-
-The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
-* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (e.g. Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
-* [An iProva tenant](https://www.iProva.com/).
-* A user account in iProva with Admin permissions.
-
-## Step 1. Plan your provisioning deployment
-1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and iProva](../app-provisioning/customize-application-attributes.md).
-
-## Step 2. Configure iProva to support provisioning with Azure AD
-
-1. Sign in to your [iProva Admin Console](https://www.iProva.com/). Navigate to **Go to > Application Management**.
-
- ![iProva Admin Console](media/iprova-provisioning-tutorial/admin.png)
-
-2. Click on **External user management**.
-
- ![iProva Add SCIM](media/iprova-provisioning-tutorial/external.png)
-
-3. To add a new provider, Click on the **plus** icon. In the new **Add provider** dialog box, provide a **Title**. You can choose to add **IP-based access restriction**. Click on **OK** button.
-
- ![iProva add new](media/iprova-provisioning-tutorial/add.png)
-
- ![iProva add provider](media/iprova-provisioning-tutorial/addprovider.png)
-
-4. Click on **Permanent token** button. Copy the **Permanent token** and save it as this will be the only time you can view it. This value will be entered in the Secret Token field in the Provisioning tab of your iProva application in the Azure portal.
-
- ![iProva Create Token](media/iprova-provisioning-tutorial/token.png)
-
-## Step 3. Add iProva from the Azure AD application gallery
-
-Add iProva from the Azure AD application gallery to start managing provisioning to iProva. If you have previously setup iProva for SSO you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
-
-## Step 4. Define who will be in scope for provisioning
-
-The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-
-## Step 5. Configure automatic user provisioning to iProva
-
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in iProva based on user and/or group assignments in Azure AD.
-
-### To configure automatic user provisioning for iProva in Azure AD:
-
-1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **iProva**.
-
- ![The iProva link in the Applications list](common/all-applications.png)
-
-3. Select the **Provisioning** tab.
-
- ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-
-4. Set the **Provisioning Mode** to **Automatic**.
-
- ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-
-5. In the **Admin Credentials** section, input the **SCIM 2.0 base URL and Permanent Token** values retrieved earlier in the **Tenant URL** and add /scim/ to it. Also add the **Secret Token**. You can generate a secret token in iProva by using the **permanent token** button. Click **Test Connection** to ensure Azure AD can connect to iProva. If the connection fails, ensure your iProva account has Admin permissions and try again.
-
- ![Tenant URL + Token](common/provisioning-testconnection-tenanturltoken.png)
-
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
-
- ![Notification Email](common/provisioning-notification-email.png)
-
-7. Click **Save**.
-
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to iProva**.
-
-9. Review the user attributes that are synchronized from Azure AD to iProva in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in iProva for update operations. Select the **Save** button to commit any changes.
-
- |Attribute|Type|
- |||
- |active|Boolean|
- |displayName|String|
- |emails[type eq "work"].value|String|
- |preferredLanguage|String|
- |userName|String|
- |phoneNumbers[type eq "work"].value|String|
- |externalId|String|
---
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to iProva**.
-
-11. Review the group attributes that are synchronized from Azure AD to iProva in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in iProva for update operations. Select the **Save** button to commit any changes.
-
- |Attribute|Type|
- |||
- |displayName|String|
- |members|Reference|
- |externalID|String|
-
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-
-13. To enable the Azure AD provisioning service for iProva, change the **Provisioning Status** to **On** in the **Settings** section.
-
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-
-14. Define the users and/or groups that you would like to provision to iProva by choosing the desired values in **Scope** in the **Settings** section.
-
- ![Provisioning Scope](common/provisioning-scope.png)
-
-15. When you are ready to provision, click **Save**.
-
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
-
-This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
--
-## Step 6. Monitor your deployment
-Once you've configured provisioning, use the following resources to monitor your deployment:
-
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
-
-## Change log
-
-* 06/17/2020 - Enterprise extension attribute "Manager" has been removed.
-
-## Additional resources
-
-* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
-* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-
-## Next steps
-
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Zenya Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zenya-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Zenya for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to configure Azure Active Directory to automatically provision and de-provision user accounts to Zenya.
++
+writer: twimmers
+++++ Last updated : 10/29/2019+++
+# Tutorial: Configure Zenya for automatic user provisioning
+
+The objective of this tutorial is to demonstrate the steps to be performed in Zenya and Azure Active Directory (Azure AD) to configure Azure AD to automatically provision and de-provision users and/or groups to [Zenya](https://www.infoland.nl/). For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). Before you attempt to use this tutorial, be sure that you know and meet all requirements. If you have questions, contact Infoland.
+
+> [!NOTE]
+> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Zenya
+> * Remove/disable users in Zenya when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and Zenya
+> * Provision groups and group memberships in Zenya
+> * [Single sign-on](./zenya-tutorial.md) to Zenya (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* [A Zenya tenant](https://www.infoland.nl/).
+* A user account in Zenya with admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and Zenya](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Zenya to support provisioning with Azure AD
+
+1. Sign in to your [Zenya Admin Console](https://www.infoland.nl/). Navigate to **Go to > Application Management**.
+
+ ![Screenshot showing the Zenya Admin Console.](media/zenya-provisioning-tutorial/admin.png)
+
+2. Select **External user management**.
+
+ ![Screenshot showing the Zenya Users and Groups page with the External user management link highlighted.](media/zenya-provisioning-tutorial/external.png)
+
+3. To add a new provider, select the **plus** icon. In the new **Add provider** dialog box, provide a **Title**. You can choose to add **IP-based access restriction**. Select **OK**.
+
+ ![Screenshot showing the Zenya add new button.](media/zenya-provisioning-tutorial/add.png)
+
+ ![Screenshot showing the Zenya add provider page.](media/zenya-provisioning-tutorial/add-provider.png)
+
+4. Select the **Permanent token** button. Copy the **Permanent token** and save it. You won't be able to view it later. This value will be entered in the Secret Token field in the Provisioning tab of your Zenya application in the Azure portal.
+
+ ![Screenshot showing the Zenya User provisioning page for creating a Token.](media/zenya-provisioning-tutorial/token.png)
+
+## Step 3. Add Zenya from the Azure AD application gallery
+
+Add Zenya from the Azure AD application gallery to start managing provisioning to Zenya. If you have previously set up Zenya for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, maintain control by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+
+## Step 5. Configure automatic user provisioning to Zenya
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Zenya based on user and/or group assignments in Azure AD.
+
+For more information (in Dutch), see [`Implementatie SCIM koppeling`](https://webshare.iprova.nl/8my7yg8c1ofsmdj9/Document.aspx).
+
+### To configure automatic user provisioning for Zenya in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot showing the Enterprise applications blade.](common/enterprise-applications.png)
+
+2. In the applications list, select **Zenya**.
+
+ ![Screenshot showing Zenya link in the Applications list.](media/zenya-provisioning-tutorial/browse-application.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
+
+5. In the **Admin Credentials** section, enter the **SCIM 2.0 base URL** retrieved earlier in the **Tenant URL** field and append `/scim/` to it (for example, a tenant hosted at `https://<SUBDOMAIN>.zenya.work` would use `https://<SUBDOMAIN>.zenya.work/scim/`). In the **Secret Token** field, enter the **Permanent token** value retrieved earlier; you can generate one in Zenya by using the **Permanent token** button. Select **Test Connection** to ensure Azure AD can connect to Zenya. If the connection fails, ensure your Zenya account has admin permissions and try again.
+
+ ![Screenshot showing the Test connection page and fields for Tenant URL and Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications, and select the **Send an email notification when a failure occurs** checkbox.
+
+ ![Screenshot showing the field for entering an email address for notification.](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Zenya**.
+
+9. Review the user attributes that are synchronized from Azure AD to Zenya in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Zenya for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|
+ |||
+ |active|Boolean|
+ |displayName|String|
+ |emails[type eq "work"].value|String|
+ |preferredLanguage|String|
+ |userName|String|
+ |phoneNumbers[type eq "work"].value|String|
+ |externalId|String|
+++
+10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Zenya**.
+
+11. Review the group attributes that are synchronized from Azure AD to Zenya in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Zenya for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|
+ |||
+ |displayName|String|
+ |members|Reference|
+ |externalID|String|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for Zenya, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot showing the provisioning status toggled on.](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to Zenya by choosing the desired values in **Scope** in the **Settings** section. You need a P1 or P2 license to provision assigned users and groups.
+
+ ![Screenshot showing where to select the provisioning scope.](common/provisioning-scope.png)
+
+15. When you're ready to provision, select **Save**.
+
+ ![Screenshot showing the Save button to save the provisioning configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
++
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Change log
+
+* 06/17/2020 - Enterprise extension attribute "Manager" has been removed.
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [`Implementatie SCIM koppeling`](https://webshare.iprova.nl/8my7yg8c1ofsmdj9/Document.aspx)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Zenya Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zenya-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Zenya | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Zenya.
++++++++ Last updated : 09/01/2021+++
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Zenya
+
+In this tutorial, you'll learn how to integrate Zenya with Azure Active Directory (Azure AD). When you integrate Zenya with Azure AD, you can:
+
+* Control in Azure AD who has access to Zenya.
+* Enable your users to be automatically signed-in to Zenya with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Zenya single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Zenya supports **SP** initiated SSO.
+* Zenya supports [Automated user provisioning](zenya-provisioning-tutorial.md).
+
+## Add Zenya from the gallery
+
+To configure the integration of Zenya into Azure AD, you need to add Zenya from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zenya** in the search box.
+1. Select **Zenya** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Zenya
+
+Configure and test Azure AD SSO with Zenya using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zenya.
+
+To configure and test Azure AD SSO with Zenya, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zenya SSO](#configure-zenya-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zenya test user](#create-zenya-test-user)** - to have a counterpart of B.Simon in Zenya that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Retrieve configuration information from Zenya
+
+In this section, you retrieve information from Zenya to configure Azure AD single sign-on.
+
+1. Open a web browser and go to the **SAML2 info** page in Zenya by using the following URL patterns:
+
+ `https://<SUBDOMAIN>.zenya.work/saml2info`
+ `https://<SUBDOMAIN>.iprova.nl/saml2info`
+ `https://<SUBDOMAIN>.iprova.be/saml2info`
+ `https://<SUBDOMAIN>.iprova.eu/saml2info`
+
+ ![Screenshot of the Zenya SAML2 information page.](media/zenya-tutorial/information.png)
+
+1. Leave the browser tab open while you proceed with the next steps in another browser tab.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Zenya** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot of the page for editing the basic SAML configuration.](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. Fill the **Sign-on URL** box with the value that's displayed behind the label **Sign-on URL** on the **Zenya SAML2 info** page. This page is still open in your other browser tab.
+
+ b. Fill the **Identifier** box with the value that's displayed behind the label **EntityID** on the **Zenya SAML2 info** page. This page is still open in your other browser tab.
+
+ c. Fill the **Reply-URL** box with the value that's displayed behind the label **Reply URL** on the **Zenya SAML2 info** page. This page is still open in your other browser tab.
+
+1. The Zenya application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot showing the list of default attributes.](common/default-attributes.png)
+
+1. In addition to the above, the Zenya application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute| Namespace |
+ | | -- | --|
+ | `samaccountname` | `user.onpremisessamaccountname`| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims`|
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** value, and save it on your computer.
+
+ ![Screenshot showing SAML Signing Certificate information including a download link.](common/copy-metadataurl.png)
+
+## Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+## Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zenya.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zenya**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Zenya SSO
+
+1. Sign in to Zenya by using the **Administrator** account.
+
+2. Open the **Go to** menu.
+
+3. Select **Application management**.
+
+4. Select **General** in the **System settings** panel.
+
+5. Select **Edit**.
+
+6. Scroll down to **Access control**.
+
+ ![Screenshot showing Zenya Access control settings.](media/zenya-tutorial/access-control.png)
+
+7. Find the setting **Users are automatically logged on with their network accounts**, and change it to **Yes, authentication via SAML**. Additional options now appear.
+
+8. Select **Set up**.
+
+9. Select **Next**.
+
+10. Zenya asks if you want to download federation data from a URL or upload it from a file. Select the **From URL** option.
+
+ ![Screenshot showing page for entering the URL for downloading Azure AD metadata](media/zenya-tutorial/metadata.png)
+
+11. Paste the metadata URL you saved in the last step of the "Configure Azure AD SSO" section.
+
+12. Select the arrow-shaped button to download the metadata from Azure AD.
+
+13. When the download is complete, the confirmation message **Valid Federation Data file downloaded** appears.
+
+14. Select **Next**.
+
+15. Skip the **Test login** option for now, and select **Next**.
+
+16. In the **Claim to use** drop-down box, select **windowsaccountname**.
+
+17. Select **Finish**.
+
+18. You now return to the **Edit general settings** screen. Scroll down to the bottom of the page, and select **OK** to save your configuration.
+
+## Create Zenya test user
+
+1. Sign in to Zenya by using the **Administrator** account.
+
+2. Open the **Go to** menu.
+
+3. Select **Application management**.
+
+4. Select **Users** in the **Users and user groups** panel.
+
+5. Select **Add**.
+
+6. In the **Username** box, enter the user name of the user, such as `B.Simon@contoso.com`.
+
+7. In the **Full name** box, enter the full name of the user, such as **B.Simon**.
+
+8. Select the **No password (use single sign-on)** option.
+
+9. In the **E-mail address** box, enter the email address of the user, such as `B.Simon@contoso.com`.
+
+10. Scroll down to the end of the page, and select **Finish**.
+
+> [!NOTE]
+> Zenya also supports automatic user provisioning. You can find more details [here](./zenya-provisioning-tutorial.md) on how to configure automatic user provisioning.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects you to the Zenya sign-on URL, where you can initiate the login flow.
+
+* Go to the Zenya sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Zenya tile in My Apps, you're redirected to the Zenya sign-on URL. For more information about My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+
+## Next steps
+
+Once you configure Zenya, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
aks Concepts Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-scale.md
To get started with manually scaling pods and nodes see [Scale applications in A
## Horizontal pod autoscaler
-Kubernetes uses the horizontal pod autoscaler (HPA) to monitor the resource demand and automatically scale the number of replicas. By default, the horizontal pod autoscaler checks the Metrics API every 60 seconds for any required changes in replica count. When changes are required, the number of replicas is increased or decreased accordingly. Horizontal pod autoscaler works with AKS clusters that have deployed the Metrics Server for Kubernetes 1.8+.
+Kubernetes uses the horizontal pod autoscaler (HPA) to monitor the resource demand and automatically scale the number of replicas. By default, the horizontal pod autoscaler checks the Metrics API every 15 seconds for any required changes in replica count, but the Metrics API retrieves data from the Kubelet every 60 seconds. Effectively, the HPA is updated every 60 seconds. When changes are required, the number of replicas is increased or decreased accordingly. Horizontal pod autoscaler works with AKS clusters that have deployed the Metrics Server for Kubernetes 1.8+.
![Kubernetes horizontal pod autoscaling](media/concepts-scale/horizontal-pod-autoscaling.png)
To get started with the horizontal pod autoscaler in AKS, see [Autoscale pods in
### Cooldown of scaling events
-As the horizontal pod autoscaler checks the Metrics API every 30 seconds, previous scale events may not have successfully completed before another check is made. This behavior could cause the horizontal pod autoscaler to change the number of replicas before the previous scale event could receive application workload and the resource demands to adjust accordingly.
+As the horizontal pod autoscaler is effectively updated every 60 seconds, previous scale events may not have successfully completed before another check is made. This behavior could cause the horizontal pod autoscaler to change the number of replicas before the previous scale event could receive application workload and the resource demands to adjust accordingly.
To minimize race events, a delay value is set. This value defines how long the horizontal pod autoscaler must wait after a scale event before another scale event can be triggered. This behavior allows the new replica count to take effect and the Metrics API to reflect the distributed workload. There is [no delay for scale-up events as of Kubernetes 1.12](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay), however the delay on scale down events is defaulted to 5 minutes.
aks Dapr Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-migration.md
+
+ Title: Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS)
+description: Learn how to migrate from Dapr OSS to the Dapr extension for AKS
+++++ Last updated : 07/21/2022+++
+# Migrate from Dapr OSS to the Dapr extension for Azure Kubernetes Service (AKS)
+
+You've installed and configured Dapr OSS on your Kubernetes cluster and want to migrate to the Dapr extension on AKS. Before you can successfully migrate to the Dapr extension, you need to fully remove Dapr OSS from your AKS cluster. In this guide, you will migrate from Dapr OSS by:
+
+> [!div class="checklist"]
+> - Uninstalling Dapr, including CRDs and the `dapr-system` namespace
+> - Installing Dapr via the Dapr extension for AKS
+> - Applying your components
+> - Restarting your applications that use Dapr
+
+> [!NOTE]
+> Expect downtime of approximately 10 minutes while migrating to the Dapr extension for AKS. Downtime may be longer depending on varying factors. During this downtime, no Dapr functionality should be expected to run.
+
+## Uninstall Dapr
+
+#### [Dapr CLI](#tab/cli)
+
+1. Run the following command to uninstall Dapr and all CRDs:
+
+```bash
+dapr uninstall -k --all
+```
+
+1. Uninstall the Dapr namespace:
+
+```bash
+kubectl delete namespace dapr-system
+```
+
+> [!NOTE]
+> `dapr-system` is the default namespace installed with `dapr init -k`. If you created a custom namespace, replace `dapr-system` with your namespace.
+
+#### [Helm](#tab/helm)
+
+1. Run the following command to uninstall Dapr:
+
+```bash
+dapr uninstall -k --all
+```
+
+1. Uninstall CRDs:
+
+```bash
+kubectl delete crd components.dapr.io
+kubectl delete crd configurations.dapr.io
+kubectl delete crd subscriptions.dapr.io
+kubectl delete crd resiliencies.dapr.io
+```
+
+1. Uninstall the Dapr namespace:
+
+```bash
+kubectl delete namespace dapr-system
+```
+
+> [!NOTE]
+> `dapr-system` is the default namespace while doing a Helm install. If you created a custom namespace (`helm install dapr dapr/dapr --namespace <my-namespace>`), replace `dapr-system` with your namespace.
+++
+## Install Dapr via the AKS extension
+
+Once you've uninstalled Dapr from your system, install the [Dapr extension for AKS and Arc-enabled Kubernetes](./dapr.md#create-the-extension-and-install-dapr-on-your-aks-or-arc-enabled-kubernetes-cluster).
+
+```bash
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name <dapr-cluster-name> \
+--resource-group <dapr-resource-group> \
+--name <dapr-ext> \
+--extension-type Microsoft.Dapr
+```
+
+## Apply your components
+
+```bash
+kubectl apply -f <component.yaml>
+```
+
+## Restart your applications that use Dapr
+
+Restarting the deployment will create a new sidecar from the new Dapr installation.
+
+```bash
+kubectl rollout restart deployment/<deployment-name>
+```
+
+## Next steps
+
+Learn more about [the cluster extension](./dapr-overview.md) and [how to use it](./dapr.md).
aks Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-overview.md
description: Learn more about using Dapr on your Azure Kubernetes Service (AKS)
Previously updated : 05/03/2022 Last updated : 07/21/2022
When installing Dapr OSS via helm or the Dapr CLI, runtime versions and configur
Lastly, the Dapr extension is an extension of AKS, therefore you can expect the same support policy as other AKS features.
+[Learn more about migrating from Dapr OSS to the Dapr extension for AKS][dapr-migration].
+ ### How can I switch to using the Dapr extension if IΓÇÖve already installed Dapr via a method, such as Helm? Recommended guidance is to completely uninstall Dapr from the AKS cluster and reinstall it via the cluster extension.
After learning about Dapr and some of the challenges it solves, try [Deploying a
[osm-docs]: ./open-service-mesh-about.md [cluster-extensions]: ./cluster-extensions.md [dapr-quickstart]: ./quickstart-dapr.md
+[dapr-migration]: ./dapr-migration.md
<!-- Links External --> [dapr-docs]: https://docs.dapr.io/
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Previously updated : 05/16/2022 Last updated : 07/21/2022 # Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
-[Dapr](https://dapr.io/) is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge and embraces the diversity of languages and developer frameworks. Leveraging the benefits of a sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. In particular, it helps with solving problems around services calling other services reliably and securely, building event-driven apps with pub-sub, and building applications that are portable across multiple cloud services and hosts (e.g., Kubernetes vs. a VM).
+[Dapr](https://dapr.io/) is a portable, event-driven runtime that simplifies building resilient, stateless, and stateful applications that run on the cloud and edge and embrace the diversity of languages and developer frameworks. Applying the benefits of a sidecar architecture, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. In particular, it helps solve problems around
+- Calling other services reliably and securely
+- Building event-driven apps with pub-sub
+- Building applications that are portable across multiple cloud services and hosts (for example, Kubernetes vs. a VM)
By using the Dapr extension to provision Dapr on your AKS or Arc-enabled Kubernetes cluster, you eliminate the overhead of downloading Dapr tooling and manually installing and managing the runtime on your AKS cluster. Additionally, the extension offers support for all [native Dapr configuration capabilities][dapr-configuration-options] through simple command-line arguments. > [!NOTE]
-> If you plan on installing Dapr in a Kubernetes production environment, please see the [Dapr guidelines for production usage][kubernetes-production] documentation page.
+> If you plan on installing Dapr in a Kubernetes production environment, see the [Dapr guidelines for production usage][kubernetes-production] documentation page.
## How it works
The Dapr extension uses the Azure CLI to provision the Dapr control plane on you
- **dapr-operator**: Manages component updates and Kubernetes services endpoints for Dapr (state stores, pub/subs, etc.) - **dapr-sidecar-injector**: Injects Dapr into annotated deployment pods and adds the environment variables `DAPR_HTTP_PORT` and `DAPR_GRPC_PORT` to enable user-defined applications to easily communicate with Dapr without hard-coding Dapr port values. - **dapr-placement**: Used for actors only. Creates mapping tables that map actor instances to pods-- **dapr-sentry**: Manages mTLS between services and acts as a certificate authority. For more information read the [security overview][dapr-security].
+- **dapr-sentry**: Manages mTLS between services and acts as a certificate authority. For more information, read the [security overview][dapr-security].
-Once Dapr is installed on your cluster, you can begin to develop using the Dapr building block APIs by [adding a few annotations][dapr-deployment-annotations] to your deployments. For a more in-depth overview of the building block APIs and how to best use them, please see the [Dapr building blocks overview][building-blocks-concepts].
+Once Dapr is installed on your cluster, you can begin to develop using the Dapr building block APIs by [adding a few annotations][dapr-deployment-annotations] to your deployments. For a more in-depth overview of the building block APIs and how to best use them, see the [Dapr building blocks overview][building-blocks-concepts].
> [!WARNING] > If you install Dapr through the AKS or Arc-enabled Kubernetes extension, our recommendation is to continue using the extension for future management of Dapr instead of the Dapr CLI. Combining the two tools can cause conflicts and result in undesired behavior.
The Dapr extension support varies depending on how you manage the runtime.
**Self-managed** For self-managed runtime, the Dapr extension supports:-- [The latest version of Dapr and 2 previous versions (N-2)][dapr-supported-version]
+- [The latest version of Dapr and two previous versions (N-2)][dapr-supported-version]
- Upgrading minor version incrementally (for example, 1.5 -> 1.6 -> 1.7) Self-managed runtime requires manual upgrade to remain in the support window. To upgrade Dapr via the extension, follow the [Update extension instance instructions][update-extension].
Global Azure cloud is supported with Arc support on the regions listed by [Azure
### Set up the Azure CLI extension for cluster extensions
-You will need the `k8s-extension` Azure CLI extension. Install this by running the following commands:
+You'll need the `k8s-extension` Azure CLI extension. Install by running the following commands:
```azurecli-interactive az extension add --name k8s-extension
When installing the Dapr extension, use the flag value that corresponds to your
- **AKS cluster**: `--cluster-type managedClusters`. - **Arc-enabled Kubernetes cluster**: `--cluster-type connectedClusters`.
+> [!NOTE]
+> If you're using Dapr OSS on your AKS cluster and would like to install the Dapr extension for AKS, read more about [how to successfully migrate to the Dapr extension][dapr-migration].
+ Create the Dapr extension, which installs Dapr on your AKS or Arc-enabled Kubernetes cluster. For example, for an AKS cluster: ```azure-cli-interactive
If no configuration-settings are passed, the Dapr configuration defaults to:
allowedClockSkew: 15m ```
-For a list of available options, please see [Dapr configuration][dapr-configuration-options].
+For a list of available options, see [Dapr configuration][dapr-configuration-options].
## Targeting a specific Dapr version
az k8s-extension create --cluster-type managedClusters \
--version X.X.X ```
-## Limiting the extension to certain nodes (`nodeSelector`)
+## Limiting the extension to certain nodes
+
+In some configurations, you may only want to run Dapr on certain nodes. You can limit the extension by passing a `nodeSelector` in the extension configuration. If the desired `nodeSelector` contains `.`, you must escape them from the shell and the extension. For example, the following configuration will install Dapr to only nodes with `topology.kubernetes.io/zone: "us-east-1c"`:
+
+```azure-cli-interactive
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name myDaprExtension \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version true \
+--configuration-settings "global.ha.enabled=true" \
+--configuration-settings "dapr_operator.replicaCount=2" \
+--configuration-settings "global.nodeSelector.kubernetes\.io/zone: us-east-1c"
+```
-In some configurations you may only want to run Dapr on certain nodes. This can be accomplished by passing a `nodeSelector` in the extension configuration. Note that if the desired `nodeSelector` contains `.`, you must escape them from the shell and the extension. For example, the following configuration will install Dapr to only nodes with `kubernetes.io/os=linux`:
+For managing OS and architecture, use the [supported versions](https://github.com/dapr/dapr/blob/b8ae13bf3f0a84c25051fcdacbfd8ac8e32695df/docker/docker.mk#L50) of the `global.daprControlPlaneOs` and `global.daprControlPlaneArch` configuration:
```azure-cli-interactive az k8s-extension create --cluster-type managedClusters \
az k8s-extension create --cluster-type managedClusters \
--auto-upgrade-minor-version true \ --configuration-settings "global.ha.enabled=true" \ --configuration-settings "dapr_operator.replicaCount=2" \configuration-settings "global.nodeSelector.kubernetes\.io/os=linux"
+--configuration-settings "global.daprControlPlaneOs=linuxΓÇ¥ \
+--configuration-settings "global.daprControlPlaneArch=amd64ΓÇ¥
``` ## Show current configuration settings
az k8s-extension show --cluster-type managedClusters \
## Update configuration settings > [!IMPORTANT]
-> Some configuration options cannot be modified post-creation. Adjustments to these options require deletion and recreation of the extension. This is applicable to the following settings:
+> Some configuration options cannot be modified post-creation. Adjustments to these options require deletion and recreation of the extension, applicable to the following settings:
> * `global.ha.*` > * `dapr_placement.*` > [!NOTE] > High availability (HA) can be enabled at any time. However, once enabled, disabling it requires deletion and recreation of the extension. If you aren't sure if high availability is necessary for your use case, we recommend starting with it disabled to minimize disruption.
-To update your Dapr configuration settings, simply recreate the extension with the desired state. For example, assume we have previously created and installed the extension using the following configuration:
+To update your Dapr configuration settings, recreate the extension with the desired state. For example, assume we've previously created and installed the extension using the following configuration:
```azurecli-interactive az k8s-extension create --cluster-type managedClusters \
az k8s-extension create --cluster-type managedClusters \
--configuration-settings "dapr_operator.replicaCount=2" ```
-To update the `dapr_operator.replicaCount` from 2 to 3, use the following:
+To update the `dapr_operator.replicaCount` from two to three, use the following command:
```azurecli-interactive az k8s-extension create --cluster-type managedClusters \
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[arc-k8s-cluster]: ../azure-arc/kubernetes/quickstart-connect-cluster.md [update-extension]: ./cluster-extensions.md#update-extension-instance [install-cli]: /cli/azure/install-azure-cli
+[dapr-migration]: ./dapr-migration.md
<!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
When deployed to App Service, Python apps run within a Linux Docker container th
This container has the following characteristics: - Apps are run using the [Gunicorn WSGI HTTP Server](https://gunicorn.org/), using the additional arguments `--bind=0.0.0.0 --timeout 600`.
- - You can provide configuration settings for Gunicorn through a *gunicorn.conf.py* file in the project root, as described on [Gunicorn configuration overview](https://docs.gunicorn.org/en/stable/configure.html#configuration-file) (docs.gunicorn.org). You can alternately [customize the startup command](#customize-startup-command).
+ - You can provide configuration settings for Gunicorn by [customizing the startup command](#customize-startup-command).
- To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described on [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html) (docs.gunicorn.org).
Again, if you expect to see a deployed app instead of the default app, see [Trou
## Customize startup command
-As noted earlier in this article, you can provide configuration settings for Gunicorn through a *gunicorn.conf.py* file in the project root, as described on [Gunicorn configuration overview](https://docs.gunicorn.org/en/stable/configure.html#configuration-file).
-
-If such configuration is not sufficient, you can control the container's startup behavior by providing either a custom startup command or multiple commands in a startup command file. A startup command file can use whatever name you choose, such as *startup.sh*, *startup.cmd*, *startup.txt*, and so on.
+You can control the container's startup behavior by providing either a custom startup command or multiple commands in a startup command file. A startup command file can use whatever name you choose, such as *startup.sh*, *startup.cmd*, *startup.txt*, and so on.
All commands must use relative paths to the project root folder.
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
If you use Azure Key Vault to manage your certificates, you can import a PKCS12
### Authorize App Service to read from the vault By default, the App Service resource provider doesn't have access to the Key Vault. In order to use a Key Vault for a certificate deployment, you need to [authorize the resource provider read access to the KeyVault](../key-vault/general/assign-access-policy-cli.md).
-`abfa0a7c-a6b6-4736-8310-5855508787cd` is the resource provider service principal name for App Service, and it's the same for all Azure subscriptions. For Azure Government cloud environment, use `6a02c803-dafd-4136-b4c3-5a6f318b4714` instead as the resource provider service principal name.
+| Resource Provider | Service Principal AppId | KeyVault secret permissions | KeyVault certificate permissions |
+|--|--|--|--|
+| `Microsoft Azure App Service` or `Microsoft.Azure.WebSites` | `abfa0a7c-a6b6-4736-8310-5855508787cd` (It's the same for all Azure subscriptions)<br/><br/>For Azure Government cloud environment, use `6a02c803-dafd-4136-b4c3-5a6f318b4714`. | Get | Get |
+| Microsoft.Azure.CertificateRegistration | | Get<br/>List<br/>Set<br/>Delete | Get<br/>List |
> [!NOTE] > Currently, Key Vault Certificate only supports Key Vault access policy but not RBAC model.
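For example, a minimal sketch of granting the App Service resource provider the Get permissions from the table above using a Key Vault access policy. The vault name is a placeholder; in the Azure Government cloud, substitute the government service principal ID listed above:

```azurecli
az keyvault set-policy --name <key-vault-name> \
    --spn abfa0a7c-a6b6-4736-8310-5855508787cd \
    --secret-permissions get --certificate-permissions get
```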
app-service Overview Hosting Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-hosting-plans.md
Since you pay for the computing resources your App Service plan allocates (see [
Isolate your app into a new App Service plan when: -- The app is resource-intensive.
+- The app is resource-intensive. The maximum number of apps a plan can comfortably host may be lower than the values shown, depending on how resource-intensive the hosted applications are; as general guidance, refer to the table below:
+
+ | App Service Plan SKU | Max Apps |
+ |--|--|
+ | B1, S1, P1v2, I1v1 | 8 |
+ | B2, S2, P2v2, I2v1 | 16 |
+ | B3, S3, P3v2, I3v1 | 32 |
+ | P1v3, I1v2 | 16 |
+ | P2v3, I2v2 | 32 |
+ | P3v3, I3v2 | 64 |
+ - You want to scale the app independently from the other apps in the existing plan.
- The app needs resources in a different geographical region.
This way you can allocate a new set of resources for your app and gain greater c
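For example, a minimal sketch of isolating an app into its own plan with the Azure CLI. The plan, app, and resource group names are placeholders, and the SKU is only an example:

```azurecli
# Create a new, dedicated App Service plan for the resource-intensive app
az appservice plan create --resource-group <resource-group-name> --name <new-plan-name> --sku P1V3

# Create the app in the new plan
az webapp create --resource-group <resource-group-name> --plan <new-plan-name> --name <app-name>
```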
## Manage an App Service plan > [!div class="nextstepaction"]
-> [Manage an App Service plan](app-service-plan-manage.md)
+> [Manage an App Service plan](app-service-plan-manage.md)
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
The commands effectively adds a `loginParameters` property with additional custo
> For Linux apps, there's a temporary requirement to configure a versioning setting for the back-end app registration. In the Cloud Shell, configure it with the following commands. Be sure to replace *\<back-end-client-id>* with your back end's client ID.
>
> ```azurecli-interactive
-> id=$(az ad app show --id <back-end-client-id> --query objectId --output tsv)
+> id=$(az ad app show --id <back-end-client-id> --query id --output tsv)
> az rest --method PATCH --url https://graph.microsoft.com/v1.0/applications/$id --body "{'api':{'requestedAccessTokenVersion':2}}" > ```
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
Here's an example of the output:
sqlcmd -S <server-name>.database.windows.net -d <db-name> -U <aad-user-name> -P "<aad-password>" -G -l 30
```
-1. In the SQL prompt for the database you want, run the following commands to grant the permissions your app needs. For example,
+1. In the SQL prompt for the database you want, run the following commands to grant the minimum permissions your app needs. For example,
```sql
CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
application-gateway Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-powershell.md
description: In this quickstart, you learn how to use Azure PowerShell to create
Previously updated : 06/14/2021 Last updated : 07/21/2022
automation Extension Based Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md
To help troubleshoot issues with extension-based Hybrid Runbook Workers:
- Check whether the system-assigned managed identity is enabled on the VM. Azure VMs and Arc-enabled machines should be enabled with a system-assigned managed identity.
- Check whether the extension is enabled with the right settings. The settings file should have the right `AutomationAccountURL`. Cross-check the URL with the Automation account property `AutomationHybridServiceUrl`.
- - For windows: you can find the settings file at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin`.
+ - For windows: you can find the settings file at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\RuntimeSettings`.
 - For Linux: you can find the settings file at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/`.
- Check the error message shown in the Hybrid worker extension status/Detailed Status. It contains error message(s) and respective recommendation(s) to fix the issue.
If you don't see your problem here or you can't resolve your issue, try one of t
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/). * Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
-* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
+* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/), and select **Get Support**.
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
The following table lists the supported operating systems for update assessments
All operating systems are assumed to be x64. x86 is not supported for any operating system.

> [!NOTE]
-> Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
+> - Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
+> - Update Management does not support CIS hardened images.
-### Windows
+# [Windows operating system](#tab/os-win)
|Operating system |Notes |
|---|---|
|Windows Server 2019 (Datacenter/Standard including Server Core)<br><br>Windows Server 2016 (Datacenter/Standard excluding Server Core)<br><br>Windows Server 2012 R2(Datacenter/Standard)<br><br>Windows Server 2012 | |
|Windows Server 2008 R2 (RTM and SP1 Standard)| Update Management supports assessments and patching for this operating system. The [Hybrid Runbook Worker](../automation-windows-hrw-install.md) is supported for Windows Server 2008 R2. |
-### Linux
+# [Linux operating system](#tab/os-linux)
+
+> [!NOTE]
+> Update assessment of Linux machines is only supported in certain regions as listed in the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
|Operating system |Notes |
|---|---|
|CentOS 6, 7, and 8 | Linux agents require access to an update repository. Classification-based patching requires `yum` to return security data that CentOS doesn't have in its RTM releases. For more information on classification-based patching on CentOS, see [Update classifications on Linux](view-update-assessments.md#linux). |
All operating systems are assumed to be x64. x86 is not supported for any operat
|SUSE Linux Enterprise Server 12, 15, and 15.1 | Linux agents require access to an update repository. |
|Ubuntu 14.04 LTS, 16.04 LTS, 18.04 LTS, and 20.04 LTS |Linux agents require access to an update repository. |
++
> [!NOTE]
-> Update Management does not support safely automating update management across all instances in an Azure virtual machine scale set. [Automatic OS image upgrades](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) is the recommended method for managing OS image upgrades on your scale set.
+> Update Management does not support automating update management across all instances in an Azure virtual machine scale set. [Automatic OS image upgrades](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) is the recommended method for managing OS image upgrades on your scale set.
## Unsupported operating systems
The following table lists operating systems not supported by Update Management:
## System requirements
-The following information describes operating system-specific requirements. For additional guidance, see [Network planning](plan-deployment.md#ports). To understand requirements for TLS 1.2, see [TLS 1.2 for Azure Automation](../automation-managing-data.md#tls-12-for-azure-automation).
+This section describes operating system-specific requirements. For additional guidance, see [Network planning](plan-deployment.md#ports). To understand requirements for TLS 1.2, see [TLS 1.2 for Azure Automation](../automation-managing-data.md#tls-12-for-azure-automation).
-### Windows
+# [Windows](#tab/sr-win)
-Software Requirements:
+**Software Requirements**:
- .NET Framework 4.6 or later is required. ([Download the .NET Framework](/dotnet/framework/install/guide-for-developers).)
- Windows PowerShell 5.1 is required ([Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).)
- The Update Management feature depends on the system Hybrid Runbook Worker role, and you should confirm its [system requirements](../automation-windows-hrw-install.md#prerequisites).
-Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Windows Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, instead use the [Enable Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+Windows Update agents must be configured to communicate with a Windows Server Update Services (WSUS) server, or they require access to Microsoft Update. For hybrid machines, we recommend installing the Log Analytics agent for Windows by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Microsoft Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with VM insights, use the [Enable VM insights](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative instead.
You can use Update Management with Microsoft Endpoint Configuration Manager. To learn more about integration scenarios, see [Integrate Update Management with Windows Endpoint Configuration Manager](mecmintegration.md). The [Log Analytics agent for Windows](../../azure-monitor/agents/agent-windows.md) is required for Windows servers managed by sites in your Configuration Manager environment.
By default, Windows VMs that are deployed from Azure Marketplace are set to rece
> [!NOTE] > You can modify Group Policy so that machine reboots can be performed only by the user, not by the system. Managed machines can get stuck if Update Management doesn't have rights to reboot the machine without manual interaction from the user. For more information, see [Configure Group Policy settings for Automatic Updates](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates).
-### Linux
+# [Linux](#tab/sr-linux)
-Software Requirements:
+**Software Requirements**:
-- The machine requires access to an update repository, either private or public.
+- The machine requires access to an update repository - private or public.
- TLS 1.1 or TLS 1.2 is required to interact with Update Management. - The Update Management feature depends on the system Hybrid Runbook Worker role, and you should confirm its [system requirements](../automation-linux-hrw-install.md#prerequisites). Because Update Management uses Automation runbooks to initiate assessment and update of your machines, review the [version of Python required](../automation-linux-hrw-install.md#supported-runbook-types) for your supported Linux distro. > [!NOTE]
-> Update assessment of Linux machines is only supported in certain regions. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
+> Update assessment of Linux machines is supported in certain regions only. See the Automation account and Log Analytics workspace [mappings table](../how-to/region-mappings.md#supported-mappings).
+
-For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then use Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, instead use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative.
+For hybrid machines, we recommend installing the Log Analytics agent for Linux by first connecting your machine to [Azure Arc-enabled servers](../../azure-arc/servers/overview.md), and then using Azure Policy to assign the [Deploy Log Analytics agent to Linux Azure Arc machines](../../governance/policy/samples/built-in-policies.md#monitoring) built-in policy definition. Alternatively, if you plan to monitor the machines with Azure Monitor for VMs, use the [Enable Azure Monitor for VMs](../../governance/policy/samples/built-in-initiatives.md#monitoring) initiative instead.
## Next steps
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Batch](../batch/create-pool-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Cache for Redis](migrate-cache-redis.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Cognitive Search](../search/search-performance-optimization.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Container Instances](../container-instances/container-instances-region-availability.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Container Instances](migrate-container-instances.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Container Registry](../container-registry/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Data Explorer](/azure/data-explorer/create-cluster-database-portal) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Data Factory](../data-factory/concepts-data-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
availability-zones Migrate Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-container-instances.md
+
+ Title: Migrate Azure Container Instances to availability zone support
+description: Learn how to migrate Azure Container Instances to availability zone support.
+++ Last updated : 07/22/2022+++++
+# Migrate Azure Container Instances to availability zone support
+
+This guide describes how to migrate Azure Container Instances from non-availability zone support to availability zone support.
++
+## Prerequisites
+
+* If using Azure CLI, ensure version 2.30.0 or later
+* If using PowerShell, ensure version 2.1.1-preview or later
+* If using the Java SDK, ensure version 2.9.0 or later
+* ACI API version 09-01-2021
+* Make sure the region you're migrating to supports zonal container group deployments. To view a list of supported regions, see [Resource availability for Azure Container Instances in Azure regions](../container-instances/container-instances-region-availability.md).
+
+## Considerations
+
+The following container groups don't support availability zones, so no migration guidance is available for them:
+
+- Container groups with GPU resources
+- Virtual Network injected container groups
+- Windows Server 2016 container groups
+
+## Downtime requirements
+
+Because ACI requires you to delete your existing deployment and recreate it with zonal support, the downtime is the time it takes to make a new deployment.
+
+## Migration guidance: Delete and redeploy container group
+
+To delete and redeploy a container group:
+
+1. Delete your current container group with one of the following tools:
+
+ - [Azure CLI](../container-instances/container-instances-quickstart.md#clean-up-resources)
+ - [PowerShell](../container-instances/container-instances-quickstart.md#clean-up-resources)
+ - [Portal](../container-instances/container-instances-quickstart-portal.md#clean-up-resources)
+
+ >[!NOTE]
+ >Zonal container group deployment isn't supported in the Azure portal. Even if you delete your container group through the portal, you'll still need to create your new container group using the CLI or PowerShell.
+
+1. Follow the steps in [Deploy an Azure Container Instances (ACI) container group in an availability zone (preview)](../container-instances/availability-zones.md).
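+
+For example, a minimal sketch of recreating the container group in an availability zone with the Azure CLI. The resource group, container group name, image, and zone are placeholder values; replace them with your own and pick a zone that's supported in your target region:
+
+```azurecli
+az container create --resource-group <resource-group-name> \
+    --name <container-group-name> \
+    --image mcr.microsoft.com/azuredocs/aci-helloworld \
+    --zone 1
+```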
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Regions and Availability Zones in Azure](az-overview.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](az-region.md)
+
azure-app-configuration Howto Disable Access Key Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-disable-access-key-authentication.md
Title: Disable access key authentication for an Azure App Configuration instance (preview)
+ Title: Disable access key authentication for an Azure App Configuration instance
-description: Learn how to disable access key authentication for an Azure App Configuration instance (preview)
+description: Learn how to disable access key authentication for an Azure App Configuration instance
Last updated 5/14/2021
-# Disable access key authentication for an Azure App Configuration instance (preview)
+# Disable access key authentication for an Azure App Configuration instance
Every request to an Azure App Configuration resource must be authenticated. By default, requests can be authenticated with either Azure Active Directory (Azure AD) credentials, or by using an access key. Of these two types of authentication schemes, Azure AD provides superior security and ease of use over access keys, and is recommended by Microsoft. To require clients to use Azure AD to authenticate requests, you can disable the usage of access keys for an Azure App Configuration resource.
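For example, a minimal sketch of disabling access key authentication on a store with the Azure CLI; the store and resource group names are placeholders, and the command assumes a recent Azure CLI version:

```azurecli
az appconfig update --name <store-name> --resource-group <resource-group-name> --disable-local-auth true
```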
Be careful to restrict assignment of these roles only to those who require the a
## Limitations
-The capability to disable access key authentication is available as a preview. The following limitations are currently in place.
+The capability to disable access key authentication has the following limitation:
### ARM template access
azure-app-configuration Howto Recover Deleted Stores In Azure App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-recover-deleted-stores-in-azure-app-configuration.md
Title: Recover Azure App Configuration stores (Preview)
+ Title: Recover Azure App Configuration stores
description: Recover/Purge Azure App Configuration soft deleted Stores
Last updated 03/01/2022
-# Recover Azure App Configuration stores (Preview)
+# Recover Azure App Configuration stores
This article covers the soft delete feature of Azure App Configuration stores. You'll learn how to set the retention policy, enable purge protection, and recover or purge a soft-deleted store.
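For example, a minimal sketch of listing soft-deleted stores and recovering one with the Azure CLI. The store name is a placeholder, and the commands assume a recent Azure CLI version that includes the soft-delete commands:

```azurecli
# List soft-deleted App Configuration stores in the subscription
az appconfig list-deleted

# Recover a soft-deleted store by name
az appconfig recover --name <store-name>
```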
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Creating the data controller has the following high level steps:
> [!NOTE] > For simplicity, the steps below assume that you are a Kubernetes cluster administrator. For production deployments or more secure environments, it is recommended to follow the security best practices of "least privilege" when deploying the data controller by granting only specific permissions to users and service accounts involved in the deployment process.
+>
+> See the topic [Operate Arc-enabled data services with least privileges](least-privilege.md) for detailed instructions.
## Prerequisites
azure-arc Least Privilege https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/least-privilege.md
+
+ Title: Operate Azure Arc-enabled data services with least privileges
+description: Explains how to operate Azure Arc-enabled data services with least privileges
++++++ Last updated : 11/07/2021+++
+# Operate Azure Arc-enabled data services with least privileges
+
+Operating Arc-enabled data services with least privileges is a security best practice. Only grant users and service accounts the specific permissions required to perform the required tasks. Both Azure and Kubernetes provide a role-based access control model which can be used to grant these specific permissions. This article describes certain common scenarios in which the security of least privilege should be applied.
+
+> [!NOTE]
+> In this article, a namespace name of `arc` is used. If you choose to use a different name, use the same name throughout.
+> The `kubectl` CLI utility is used in the examples, but any tool or system that uses the Kubernetes API can be used instead.
+
+## Deploy the Azure Arc data controller
+
+Deploying the Azure Arc data controller requires some permissions that can be considered high privilege, such as creating a Kubernetes namespace or creating a cluster role. The following steps separate the deployment of the data controller into multiple stages, each of which can be performed by a user or a service account that has only the required permissions. This separation of duties ensures that each user or service account in the process has just the permissions required and nothing more.
+
+### Deploy a namespace in which the data controller will be created
+
+This step will create a new, dedicated Kubernetes namespace into which the Arc data controller will be deployed. It is essential to perform this step first, because the following steps will use this new namespace as a scope for the permissions that are being granted.
+
+Permissions required to perform this action:
+
+- Namespace
+ - Create
+ - Edit (if required for OpenShift clusters)
+
+Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created.
+
+```console
+kubectl create namespace arc
+```
+
+If you are using OpenShift, you will need to edit the `openshift.io/sa.scc.supplemental-groups` and `openshift.io/sa.scc.uid-range` annotations on the namespace using `kubectl edit namespace <name of namespace>`. Change these existing annotations to match these _specific_ UID and fsGroup IDs/ranges.
+
+```console
+openshift.io/sa.scc.supplemental-groups: 1000700001/10000
+openshift.io/sa.scc.uid-range: 1000700001/10000
+```
+
+## Assign permissions to the deploying service account and users/groups
+
+This step will create a service account and assign roles and cluster roles to the service account so that the service account can be used in a job to deploy the Arc data controller with the least privileges required.
+
+Permissions required to perform this action:
+
+- Service account
+ - Create
+- Role
+ - Create
+- Role binding
+ - Create
+- Cluster role
+ - Create
+- Cluster role binding
+ - Create
+- All the permissions being granted to the service account (see the arcdata-deployer.yaml below for details)
+
+Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.
+
+```console
+kubectl apply --namespace arc -f arcdata-deployer.yaml
+```
+
+## Grant permissions to users to create the bootstrapper job and data controller
+
+Permissions required to perform this action:
+
+- Role
+ - Create
+- Role binding
+ - Create
+
+Save a copy of [arcdata-installer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arcdata-installer.yaml), and replace the placeholder `{{INSTALLER_USERNAME}}` in the file with the name of the user to grant the permissions to, for example: `john@contoso.com`. Add additional role binding subjects such as other users or groups as needed. Run the following command to create the installer permissions with the edited file.
+
+```console
+kubectl apply --namespace arc -f arcdata-installer.yaml
+```
+
+## Deploy the bootstrapper job
+
+Permissions required to perform this action:
+
+- User that is assigned to the arcdata-installer-role role in the previous step
+
+Run the following command to create the bootstrapper job that will run preparatory steps to deploy the data controller.
+
+```console
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml
+```
+
+## Create the Arc data controller
+
+Now you are ready to create the data controller itself.
+
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
+
+### Create the metrics and logs dashboards user names and passwords
+
+At the top of the file, you can specify a user name and password that is used to authenticate to the metrics and logs dashboards as an administrator. Choose a secure password and share it with only those that need to have these privileges.
+
+A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password.
+
+```console
+echo -n '<your string to encode here>' | base64
+# echo -n 'example' | base64
+```
+
+Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify SSL/TLS certificates during Kubernetes native tools deployment](monitor-certificates.md).
+
+### Edit the data controller configuration
+
+Edit the data controller configuration as needed:
+
+#### REQUIRED
+
+- `location`: Change this to be the Azure location where the _metadata_ about the data controller will be stored. Review the [list of available regions](overview.md#supported-regions).
+- `logsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the logs UI certificate.
+- `metricsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.
+
+#### Recommended: review and possibly change defaults
+
+Review these values, and update for your deployment:
+
+- `storage..className`: the storage class to use for the data controller data and log files. If you are unsure of the available storage classes in your Kubernetes cluster, you can run the following command: `kubectl get storageclass`. The default value is `default`, which assumes that a storage class named `default` exists, not that there is a storage class marked as the default. Note: There are two className settings that must be set to the desired storage class - one for data and one for logs.
+- `serviceType`: Change the service type to NodePort if you are not using a LoadBalancer.
+- `security`: For Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, replace the `security:` settings with the following values in the data controller yaml file.
+
+ ```yml
+ security:
+ allowDumps: false
+ allowNodeMetricsCollection: false
+ allowPodMetricsCollection: false
+ ```
+
+#### Optional
+
+The following settings are optional.
+
+- `name`: The default name of the data controller is arc, but you can change it if you want.
+- `displayName`: Set this to the same value as the name attribute at the top of the file.
+- `registry`: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and pushing them to a private container registry, enter the IP address or DNS name of your registry here.
+- `dockerRegistry`: The secret to use to pull the images from a private container registry if required.
+- `repository`: The default repository on the Microsoft Container Registry is arcdata. If you are using a private container registry, enter the path to the folder/repository containing the Azure Arc-enabled data services container images.
+- `imageTag`: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version.
+- `logsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the logs UI certificate.
+- `metricsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.
+
+The following example shows a completed data controller yaml.
++
+Save the edited file on your local computer and run the following command to create the data controller:
+
+```console
+kubectl create --namespace arc -f <path to your data controller file>
+
+#Example
+kubectl create --namespace arc -f data-controller.yaml
+```
+
+### Monitoring the creation status
+
+Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+
+```console
+kubectl get datacontroller --namespace arc
+```
+
+```console
+kubectl get pods --namespace arc
+```
+
+You can also check on the creation status or logs of any particular pod by running a command like below. This is especially useful for troubleshooting any issues.
+
+```console
+kubectl describe pod/<pod name> --namespace arc
+kubectl logs <pod name> --namespace arc
+
+#Example:
+#kubectl describe pod/control-2g7bl --namespace arc
+#kubectl logs control-2g7bl --namespace arc
+```
+
+## Next steps
+
+You have several additional options for creating the Azure Arc data controller:
+
+> **Just want to try things out?**
+> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on AKS, Amazon EKS, or GKE, or in an Azure VM.
+>
+
+- [Create a data controller in direct connectivity mode with the Azure portal](create-data-controller-direct-prerequisites.md)
+- [Create a data controller in indirect connectivity mode with CLI](create-data-controller-indirect-cli.md)
+- [Create a data controller in indirect connectivity mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)
+- [Create a data controller in indirect connectivity mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md)
+- [Create a data controller in indirect connectivity mode with Kubernetes tools such as `kubectl` or `oc`](create-data-controller-using-kubernetes-native-tools.md)
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use Cluster Connect to connect to Azure Arc-enabled Kubernetes clusters"
+ Title: "Use the cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters"
Previously updated : 06/03/2022-
-description: "Use Cluster Connect to securely connect to Azure Arc-enabled Kubernetes clusters"
Last updated : 07/22/2022+
+description: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters"
-# Use Cluster Connect to connect to Azure Arc-enabled Kubernetes clusters
+# Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters
-With Cluster Connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall.
+With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall.
Access to the `apiserver` of the Azure Arc-enabled Kubernetes cluster enables the following scenarios:
A conceptual overview of this feature is available in [Cluster connect - Azure A
kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
```
-1. Create a service account token by :
+1. Create a service account token by:
```console
kubectl apply -f demo-user-secret.yaml
A conceptual overview of this feature is available in [Cluster connect - Azure A
## Access your cluster
-1. Set up the Cluster Connect based kubeconfig needed to access your cluster based on the authentication option used:
+1. Set up the cluster connect `kubeconfig` needed to access your cluster based on the authentication option used:
- - If using Azure Active Directory authentication option, after logging into Azure CLI using the Azure AD entity of interest, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere (from even outside the firewall surrounding the cluster):
+ - If using Azure AD authentication, after signing in to Azure CLI using the Azure AD entity of interest, get the cluster connect `kubeconfig` needed to communicate with the cluster from anywhere (even from outside the firewall surrounding the cluster):
```azurecli
az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP
```
- - If using the service account authentication option, get the Cluster Connect `kubeconfig` needed to communicate with the cluster from anywhere:
+ - If using service account authentication, get the cluster connect `kubeconfig` needed to communicate with the cluster from anywhere:
```azurecli
az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP --token $TOKEN
You should now see a response from the cluster containing the list of all pods u
## Known limitations
-When making requests to the Kubernetes cluster, if the Azure AD entity used is a part of more than 200 groups, the following error is observed as this is a known limitation:
+When making requests to the Kubernetes cluster, if the Azure AD entity used is a part of more than 200 groups, you may see the following error:
`You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.`
-To get past this error:
+This is a known limitation. To get past this error:
1. Create a [service principal](/cli/azure/create-an-azure-service-principal-azure-cli), which is less likely to be a member of more than 200 groups.
1. [Sign in](/cli/azure/create-an-azure-service-principal-azure-cli#sign-in-using-a-service-principal) to Azure CLI with the service principal before running the `az connectedk8s proxy` command.
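For example, a minimal sketch of creating a service principal and signing in with it; the service principal name and the tenant, app ID, and password values are placeholders, and you should store the generated credentials securely:

```azurecli
# Create a service principal (placeholder name)
az ad sp create-for-rbac --name cluster-connect-sp

# Sign in to Azure CLI with the service principal before running az connectedk8s proxy
az login --service-principal -u <appId> -p <password> --tenant <tenant-id>
```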
azure-arc Conceptual Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-cluster-connect.md
Title: "Access Azure Arc-enabled Kubernetes cluster from anywhere using Cluster Connect"
+ Title: "Access Azure Arc-enabled Kubernetes clusters from anywhere using cluster connect"
Previously updated : 04/05/2021 Last updated : 07/22/2022
-description: "This article provides a conceptual overview of Cluster Connect capability of Azure Arc-enabled Kubernetes"
+description: "This article provides a conceptual overview of cluster connect capability of Azure Arc-enabled Kubernetes."
-# Access Azure Arc-enabled Kubernetes cluster from anywhere using Cluster Connect
+# Access Azure Arc-enabled Kubernetes clusters from anywhere using cluster connect
-The Azure Arc-enabled Kubernetes *cluster connect* feature provides connectivity to the `apiserver` of the cluster without requiring any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner.
+The Azure Arc-enabled Kubernetes *cluster connect* feature provides connectivity to the `apiserver` of the cluster without requiring any inbound port to be enabled on the firewall. A reverse proxy agent running on the cluster can securely start a session with the Azure Arc service in an outbound manner.
-Cluster connect allows developers to access their clusters from anywhere for interactive development and debugging. It also lets cluster users and administrators access or manage their clusters from anywhere. You can even use hosted agents/runners of Azure Pipelines, GitHub Actions, or any other hosted CI/CD service to deploy applications to on-prem clusters, without requiring self-hosted agents.
-
+Cluster connect allows developers to access their clusters from anywhere for interactive development and debugging. It also lets cluster users and administrators access or manage their clusters from anywhere. You can even use hosted agents/runners of Azure Pipelines, GitHub Actions, or any other hosted CI/CD service to deploy applications to on-premises clusters, without requiring self-hosted agents.
## Architecture [ ![Cluster connect architecture](./media/conceptual-cluster-connect.png) ](./media/conceptual-cluster-connect.png#lightbox)
-On the cluster side, a reverse proxy agent called `clusterconnect-agent` deployed as part of agent helm chart, makes outbound calls to Azure Arc service to establish the session.
+On the cluster side, a reverse proxy agent called `clusterconnect-agent`, deployed as part of the agent Helm chart, makes outbound calls to the Azure Arc service to establish the session.
When the user calls `az connectedk8s proxy`:
-1. Azure Arc proxy binary is downloaded and spun up as a process on the client machine.
-1. Azure Arc proxy fetches a `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster on which the `az connectedk8s proxy` is invoked.
- * Azure Arc proxy uses the caller's Azure access token and the Azure Resource Manager ID name.
-1. The `kubeconfig` file, saved on the machine by Azure Arc proxy, points the server URL to an endpoint on the Azure Arc proxy process.
+
+1. The Azure Arc proxy binary is downloaded and spun up as a process on the client machine.
+1. The Azure Arc proxy fetches a `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster on which the `az connectedk8s proxy` is invoked.
+ * The Azure Arc proxy uses the caller's Azure access token and the Azure Resource Manager ID name.
+1. The `kubeconfig` file, saved on the machine by the Azure Arc proxy, points the server URL to an endpoint on the Azure Arc proxy process.
When a user sends a request using this `kubeconfig` file:
-1. Azure Arc proxy maps the endpoint receiving the request to the Azure Arc service.
-1. Azure Arc service then forwards the request to the `clusterconnect-agent` running on the cluster.
-1. The `clusterconnect-agent` passes on the request to the `kube-aad-proxy` component, which performs Azure AD authentication on the calling entity.
-1. After Azure AD authentication, `kube-aad-proxy` uses Kubernetes [user impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) feature to forward the request to the cluster's `apiserver`.
+
+1. The Azure Arc proxy maps the endpoint receiving the request to the Azure Arc service.
+1. The Azure Arc service then forwards the request to the `clusterconnect-agent` running on the cluster.
+1. The `clusterconnect-agent` passes on the request to the `kube-aad-proxy` component, which performs Azure Active Directory (Azure AD) authentication on the calling entity.
+1. After Azure AD authentication, `kube-aad-proxy` uses Kubernetes [user impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) to forward the request to the cluster's `apiserver`.
## Next steps * Use our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
-* [Access your cluster](./cluster-connect.md) securely from anywhere using Cluster connect.
+* [Access your cluster](./cluster-connect.md) securely from anywhere using cluster connect.
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-custom-locations.md
Title: "Custom Locations - Azure Arc-enabled Kubernetes" Previously updated : 05/25/2021 Last updated : 07/21/2022
-description: "This article provides a conceptual overview of Custom Locations capability of Azure Arc-enabled Kubernetes"
+description: "This article provides a conceptual overview of the custom locations capability of Azure Arc-enabled Kubernetes"
# Custom locations on top of Azure Arc-enabled Kubernetes
-As an extension of the Azure location construct, *Custom Locations* provides a way for tenant administrators to use their Azure Arc-enabled Kubernetes clusters as target locations for deploying Azure services instances. Azure resources examples include Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale.
+As an extension of the Azure location construct, the *custom locations* feature provides a way for tenant administrators to use their Azure Arc-enabled Kubernetes clusters as target locations for deploying Azure service instances. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale.
-Similar to Azure locations, end users within the tenant with access to Custom Locations can deploy resources there using their company's private compute.
+Similar to Azure locations, end users within the tenant who have access to Custom Locations can deploy resources there using their company's private compute.
[ ![Arc platform layers](./media/conceptual-arc-platform-layers.png) ](./media/conceptual-arc-platform-layers.png#lightbox)
-You can visualize Custom Locations as an abstraction layer on top of Azure Arc-enabled Kubernetes cluster, cluster connect, and cluster extensions. Custom Locations creates the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster. These other Azure services require cluster access to manage resources the customer wants to deploy on their clusters.
-
+You can visualize custom locations as an abstraction layer on top of Azure Arc-enabled Kubernetes cluster, cluster connect, and cluster extensions. Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster. These other Azure services require cluster access to manage resources that the customer wants to deploy on their clusters.
## Architecture
-When the admin enables the Custom Locations feature on the cluster, a ClusterRoleBinding is created on the cluster, authorizing the Azure AD application used by the Custom Locations Resource Provider (RP). Once authorized, Custom Locations RP can create ClusterRoleBindings or RoleBindings needed by other Azure RPs to create custom resources on this cluster. The cluster extensions installed on the cluster determines the list of RPs to authorize.
+When the admin [enables the custom locations feature on the cluster](custom-locations.md), a ClusterRoleBinding is created on the cluster, authorizing the Azure AD application used by the custom locations resource provider. Once authorized, the custom locations resource provider can create ClusterRoleBindings or RoleBindings needed by other Azure resource providers to create custom resources on this cluster. The cluster extensions installed on the cluster determine the list of resource providers to authorize.
[ ![Use custom locations](./media/conceptual-custom-locations-usage.png) ](./media/conceptual-custom-locations-usage.png#lightbox)
-When the user creates a data service instance on the cluster:
+When the user creates a data service instance on the cluster:
+ 1. The PUT request is sent to Azure Resource Manager.
-1. The PUT request is forwarded to the Azure Arc-enabled Data Services RP.
-1. The RP fetches the `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster, on which the Custom Location exists.
- * Custom Location is referenced as `extendedLocation` in the original PUT request.
-1. Azure Arc-enabled Data Services RP uses the `kubeconfig` to communicate with the cluster to create a custom resource of the Azure Arc-enabled Data Services type on the namespace mapped to the Custom Location.
- * The Azure Arc-enabled Data Services operator was deployed via cluster extension creation before the Custom Location existed.
-1. The Azure Arc-enabled Data Services operator reads the new custom resource created on the cluster and creates the data controller, translating into realization of the desired state on the cluster.
+1. The PUT request is forwarded to the Azure Arc-enabled Data Services RP.
+1. The RP fetches the `kubeconfig` file associated with the Azure Arc-enabled Kubernetes cluster on which the custom location exists.
+ * Custom location is referenced as `extendedLocation` in the original PUT request.
+1. The Azure Arc-enabled Data Services resource provider uses the `kubeconfig` to communicate with the cluster to create a custom resource of the Azure Arc-enabled Data Services type on the namespace mapped to the custom location.
+ * The Azure Arc-enabled Data Services operator was deployed via cluster extension creation before the custom location existed.
+1. The Azure Arc-enabled Data Services operator reads the new custom resource created on the cluster and creates the data controller, translating into realization of the desired state on the cluster.
The sequence of steps to create the SQL managed instance and PostgreSQL instance are identical to the sequence of steps described above.
azure-arc Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/custom-locations.md
Title: "Create and manage custom locations on Azure Arc-enabled Kubernetes" Previously updated : 10/19/2021- Last updated : 07/21/2022+ description: "Use custom locations to deploy Azure PaaS services on Azure Arc-enabled Kubernetes clusters" # Create and manage custom locations on Azure Arc-enabled Kubernetes
- *Custom Locations* provides a way for tenant or cluster administrators to configure their Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. Examples of Azure offerings that can be deployed on top of custom locations include databases like Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale or application instances like App Services, Functions, Event Grid, Logic Apps, and API Management. A custom location has a one-to-one mapping to a namespace within the Azure Arc-enabled Kubernetes cluster. The custom location Azure resource combined with Azure RBAC can be used to grant application developers or database admins granular permissions to deploy different resources like databases or application instances on top of the Arc-enabled Kubernetes cluster in a multi-tenant manner.
-
-A conceptual overview of this feature is available in [Custom locations - Azure Arc-enabled Kubernetes](conceptual-custom-locations.md) article.
+ The *Custom locations* feature provides a way for tenant or cluster administrators to configure their Azure Arc-enabled Kubernetes clusters as target locations for deploying instances of Azure offerings. Examples of Azure offerings that can be deployed on top of custom locations include databases, such as Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale, or application instances, such as App Services, Functions, Event Grid, Logic Apps, and API Management.
+
+A custom location has a one-to-one mapping to a namespace within the Azure Arc-enabled Kubernetes cluster. The custom location Azure resource combined with Azure RBAC can be used to grant granular permissions to application developers or database admins, enabling them to deploy resources such as databases or application instances on top of Arc-enabled Kubernetes clusters in a multi-tenant manner.
+
+A conceptual overview of this feature is available in [Custom locations - Azure Arc-enabled Kubernetes](conceptual-custom-locations.md).
In this article, you learn how to: > [!div class="checklist"]
-> * Enable custom locations on your Azure Arc-enabled Kubernetes cluster.
-> * Create a custom location.
-
+> - Enable custom locations on your Azure Arc-enabled Kubernetes cluster.
+> - Create a custom location.
## Prerequisites

- [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0.
- Install the following Azure CLI extensions:
- - `connectedk8s` (version 1.2.0 or later)
- - `k8s-extension` (version 1.0.0 or later)
- - `customlocation` (version 0.1.3 or later)
+ - `connectedk8s` (version 1.2.0 or later)
+ - `k8s-extension` (version 1.0.0 or later)
+ - `customlocation` (version 0.1.3 or later)
```azurecli
az extension add --name connectedk8s
az extension add --name k8s-extension
az extension add --name customlocation
```
-
+ If you have already installed the `connectedk8s`, `k8s-extension`, and `customlocation` extensions, update to the **latest version** using the following command: ```azurecli
In this article, you learn how to:
- Verify completed provider registration for `Microsoft.ExtendedLocation`.
1. Enter the following commands:
-
+ ```azurecli az provider register --namespace Microsoft.ExtendedLocation ``` 2. Monitor the registration process. Registration may take up to 10 minutes.
-
+ ```azurecli az provider show -n Microsoft.ExtendedLocation -o table ```
In this article, you learn how to:
Once registered, the `RegistrationState` state will have the `Registered` value. - Verify you have an existing [Azure Arc-enabled Kubernetes connected cluster](quickstart-connect-cluster.md).
- - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version 1.5.3 or later.
+ - [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to version 1.5.3 or later.
-## Enable custom locations on cluster
+## Enable custom locations on your cluster
-If you are logged into Azure CLI as an Azure AD user, to enable this feature on your cluster, execute the following command:
+If you are signed in to Azure CLI as an Azure AD user, to enable this feature on your cluster, execute the following command:
```azurecli
az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features cluster-connect custom-locations
```
-If you run the above command while being logged into Azure CLI using a service principal, you may observe the following warning:
+If you run the above command while signed in to Azure CLI using a service principal, you may observe the following warning:
```console
Unable to fetch oid of 'custom-locations' app. Proceeding without enabling the feature. Insufficient privileges to complete the operation.
```
-This is because a service principal doesn't have permissions to get information of the application used by Azure Arc service. To avoid this error, execute the following steps:
+This is because a service principal doesn't have permissions to get information of the application used by the Azure Arc service. To avoid this error, execute the following steps:
-1. Login into Azure CLI using your user account. Fetch the Object ID of the Azure AD application used by Azure Arc service:
+1. Sign in to Azure CLI using your user account. Fetch the Object ID of the Azure AD application used by Azure Arc service:
```azurecli
az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv
```
-1. Login into Azure CLI using the service principal. Use the `<objectId>` value from above step to enable custom locations feature on the cluster:
+1. Sign in to Azure CLI using the service principal. Use the `<objectId>` value from above step to enable custom locations feature on the cluster:
```azurecli
az connectedk8s enable-features -n <cluster-name> -g <resource-group-name> --custom-locations-oid <objectId> --features cluster-connect custom-locations
```

> [!NOTE]
-> 1. Custom Locations feature is dependent on the Cluster Connect feature. So both features have to be enabled for custom locations to work.
-> 2. `az connectedk8s enable-features` needs to be run on a machine where the `kubeconfig` file is pointing to the cluster on which the features are to be enabled.
+> The custom locations feature is dependent on the [Cluster Connect](cluster-connect.md) feature. Both features have to be enabled for custom locations to work.
+>
+> `az connectedk8s enable-features` must be run on a machine where the `kubeconfig` file is pointing to the cluster on which the features are to be enabled.
## Create custom location 1. Deploy the Azure service cluster extension of the Azure service instance you want to install on your cluster:
- * [Azure Arc-enabled Data Services](../dat)
+ - [Azure Arc-enabled Data Services](../dat)
- > [!NOTE]
- > Outbound proxy without authentication and outbound proxy with basic authentication are supported by the Azure Arc-enabled Data Services cluster extension. Outbound proxy that expects trusted certificates is currently not supported.
+ > [!NOTE]
+ > Outbound proxy without authentication and outbound proxy with basic authentication are supported by the Azure Arc-enabled Data Services cluster extension. Outbound proxy that expects trusted certificates is currently not supported.
+ - [Azure App Service on Azure Arc](../../app-service/manage-create-arc-environment.md#install-the-app-service-extension)
- * [Azure App Service on Azure Arc](../../app-service/manage-create-arc-environment.md#install-the-app-service-extension)
+ - [Event Grid on Kubernetes](../../event-grid/kubernetes/install-k8s-extension.md)
- * [Event Grid on Kubernetes](../../event-grid/kubernetes/install-k8s-extension.md)
-
-2. Get the Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster, referenced in later steps as `connectedClusterId`:
+1. Get the Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster, referenced in later steps as `connectedClusterId`:
```azurecli
az connectedk8s show -n <clusterName> -g <resourceGroupName> --query id -o tsv
```
-3. Get the Azure Resource Manager identifier of the cluster extension deployed on top of Azure Arc-enabled Kubernetes cluster, referenced in later steps as `extensionId`:
+1. Get the Azure Resource Manager identifier of the cluster extension deployed on top of Azure Arc-enabled Kubernetes cluster, referenced in later steps as `extensionId`:
```azurecli
az k8s-extension show --name <extensionInstanceName> --cluster-type connectedClusters -c <clusterName> -g <resourceGroupName> --query id -o tsv
```
-4. Create custom location by referencing the Azure Arc-enabled Kubernetes cluster and the extension:
+1. Create the custom location by referencing the Azure Arc-enabled Kubernetes cluster and the extension:
```azurecli
az customlocation create -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds>
```
-**Required parameters**
+ - Required parameters:
-| Parameter name | Description |
-|-||
-| `--name, --n` | Name of the custom location |
-| `--resource-group, --g` | Resource group of the custom location |
-| `--namespace` | Namespace in the cluster bound to the custom location being created |
-| `--host-resource-id` | Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster (connected cluster) |
-| `--cluster-extension-ids` | Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-separated list of the cluster extension IDs |
-
-**Optional parameters**
+ | Parameter name | Description |
+ |-||
+ | `--name, -n` | Name of the custom location |
+ | `--resource-group, -g` | Resource group of the custom location |
+ | `--namespace` | Namespace in the cluster bound to the custom location being created |
+ | `--host-resource-id` | Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster (connected cluster) |
+ | `--cluster-extension-ids` | Azure Resource Manager identifiers of the cluster extension instances installed on the connected cluster. Provide a space-separated list of the cluster extension IDs |
-| Parameter name | Description |
-|--||
-| `--location, --l` | Location of the custom location Azure Resource Manager resource in Azure. By default it will be set to the location of the connected cluster |
-| `--tags` | Space-separated list of tags: key[=value] [key[=value] ...]. Use '' to clear existing tags |
-| `--kubeconfig` | Admin `kubeconfig` of cluster |
+ - Optional parameters:
+ | Parameter name | Description |
+ |--||
+ | `--location, -l` | Location of the custom location Azure Resource Manager resource in Azure. By default, it's set to the location of the connected cluster |
+ | `--tags` | Space-separated list of tags: key[=value] [key[=value] ...]. Use '' to clear existing tags |
+ | `--kubeconfig` | Admin `kubeconfig` of cluster |
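For illustration, a filled-in create command might look like the following sketch; the names, namespace, location, tags, and extension IDs are placeholders, and the space-separated list shows how to pass more than one extension ID:

```azurecli
az customlocation create -n my-custom-location -g my-resource-group \
  --namespace arc \
  --host-resource-id <connectedClusterId> \
  --cluster-extension-ids <extensionId1> <extensionId2> \
  --location eastus \
  --tags env=dev owner=contoso
```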
## Show details of a custom location
-Show details of a custom location
+To show the details of a custom location, use the following command:
```azurecli az customlocation show -n <customLocationName> -g <resourceGroupName> ```
-**Required parameters**
+Required parameters:
| Parameter name | Description | |-|| | `--name, -n` | Name of the custom location |
-| `--resource-group, --g` | Resource group of the custom location |
+| `--resource-group, -g` | Resource group of the custom location |
## List custom locations
-Lists all custom locations in a resource group
+To list all custom locations in a resource group, use the following command:
```azurecli az customlocation list -g <resourceGroupName> ```
-**Required parameters**
+Required parameters:
| Parameter name | Description | |-||
-| `--resource-group, --g` | Resource group of the custom location |
-
+| `--resource-group, -g` | Resource group of the custom location |
## Update a custom location
-Use `update` command when you want to add new tags, associate new cluster extension IDs to the custom location while retaining existing tags and associated cluster extensions. `--cluster-extension-ids`, `--tags`, `assign-identity` can be updated.
+Use the `update` command to add new tags or associate new cluster extension IDs with the custom location while retaining existing tags and associated cluster extensions. `--cluster-extension-ids`, `--tags`, and `assign-identity` can be updated. A filled-in example follows the parameter tables.
```azurecli az customlocation update -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds> ```
-**Required parameters**
+
+Required parameters:
| Parameter name | Description | |-||
az customlocation update -n <customLocationName> -g <resourceGroupName> --namesp
| `--namespace` | Namespace in the cluster bound to the custom location being created | | `--host-resource-id` | Azure Resource Manager identifier of the Azure Arc-enabled Kubernetes cluster (connected cluster) |
-**Optional parameters**
+Optional parameters:
| Parameter name | Description | |--||
az customlocation update -n <customLocationName> -g <resourceGroupName> --namesp
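As a sketch, an update that adds a tag while keeping the existing namespace, cluster, and extension associations might look like this (all values are placeholders):

```azurecli
az customlocation update -n my-custom-location -g my-resource-group \
  --namespace arc \
  --host-resource-id <connectedClusterId> \
  --cluster-extension-ids <extensionIds> \
  --tags env=dev
```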
## Patch a custom location
-Use `patch` command when you want to replace existing tags, cluster extension IDs with new tags, cluster extension IDs. `--cluster-extension-ids`, `assign-identity`, `--tags` can be patched.
+Use the `patch` command to replace existing tags and cluster extension IDs with new tags and cluster extension IDs. `--cluster-extension-ids`, `assign-identity`, and `--tags` can be patched.
```azurecli az customlocation patch -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds> ```
-**Required parameters**
+Required parameters:
| Parameter name | Description | |-|| | `--name, -n` | Name of the custom location | | `--resource-group, -g` | Resource group of the custom location |
-**Optional parameters**
+Optional parameters:
| Parameter name | Description | |--||
az customlocation patch -n <customLocationName> -g <resourceGroupName> --namespa
## Delete a custom location
+To delete a custom location, use the following command:
+ ```azurecli az customlocation delete -n <customLocationName> -g <resourceGroupName> --namespace <name of namespace> --host-resource-id <connectedClusterId> --cluster-extension-ids <extensionIds> ```
az customlocation delete -n <customLocationName> -g <resourceGroupName> --namesp
## Next steps - Securely connect to the cluster using [Cluster Connect](cluster-connect.md).-- Continue with [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) for end-to-end instructions on installing extensions, creating custom locations, and creating the App Service Kubernetes environment. -- Create an event grid topic and an event subscription for [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md).
+- Continue with [Azure App Service on Azure Arc](../../app-service/overview-arc-integration.md) for end-to-end instructions on installing extensions, creating custom locations, and creating the App Service Kubernetes environment.
+- Create an Event Grid topic and an event subscription for [Event Grid on Kubernetes](../../event-grid/kubernetes/overview.md).
- Learn more about currently available [Azure Arc-enabled Kubernetes extensions](extensions.md#currently-available-extensions).
azure-arc Kubernetes Resource View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/kubernetes-resource-view.md
Title: Access Kubernetes resources from Azure portal Previously updated : 10/31/2021- Last updated : 07/22/2022+ description: Learn how to interact with Kubernetes resources to manage an Azure Arc-enabled Kubernetes cluster from the Azure portal. # Access Kubernetes resources from Azure portal
-The Azure portal includes a Kubernetes resource view for easy access to the Kubernetes resources in your Azure Arc-enabled Kubernetes cluster. Viewing Kubernetes resources from the Azure portal reduces context switching between the Azure portal and the `kubectl` command-line tool, streamlining the experience for viewing and editing your Kubernetes resources. The resource viewer currently includes multiple resource types, such as deployments, pods, and replica sets.
-
+The Azure portal includes a Kubernetes resource view for easy access to the Kubernetes resources in your Azure Arc-enabled Kubernetes cluster. Viewing Kubernetes resources from the Azure portal reduces context switching between the Azure portal and the `kubectl` command-line tool, streamlining the experience for viewing and editing your Kubernetes resources. The resource viewer currently shows multiple resource types, including deployments, pods, and replica sets.
## Prerequisites
The Azure portal includes a Kubernetes resource view for easy access to the Kube
## View Kubernetes resources
-To see the Kubernetes resources, navigate to your AKS cluster in the Azure portal. The navigation pane on the left is used to access your resources. The resources include:
+To see the Kubernetes resources, navigate to your cluster in the Azure portal. The navigation pane on the left is used to access your resources:
- **Namespaces** displays the namespaces of your cluster. The filter at the top of the namespace list provides a quick way to filter and display your namespace resources. - **Workloads** shows information about deployments, pods, replica sets, stateful sets, daemon sets, jobs, and cron jobs deployed to your cluster.
To see the Kubernetes resources, navigate to your AKS cluster in the Azure porta
The Kubernetes resource view also includes a YAML editor. A built-in YAML editor means you can update Kubernetes objects from within the portal and apply changes immediately.
-After editing the YAML, changes are applied by selecting **Review + save**, confirming the changes, and then saving again.
+After you edit the YAML, select **Review + save**, confirm the changes, and then save again.
[ ![YAML editor for Kubernetes objects displayed in the Azure portal](media/kubernetes-resource-view/yaml-editor.png) ](media/kubernetes-resource-view/yaml-editor.png#lightbox) >[!WARNING]
-> Performing direct production changes via UI or CLI is not recommended and you should consider using [Configurations (GitOps)](tutorial-use-gitops-connected-cluster.md) for production environments. The Azure portal Kubernetes management capabilities and the YAML editor are built for learning and flighting new deployments in a development and testing setting.
+> The Azure portal Kubernetes management capabilities and the YAML editor are built for learning and flighting new deployments in a development and testing setting. Performing direct production changes via UI or CLI is not recommended. For production environments, consider using [Configurations (GitOps)](tutorial-use-gitops-flux2.md).
## Next steps
-Azure Monitor for containers provides more in-depth information about nodes and containers of the cluster when compared to the logical view of the Kubernetes resources available with Kubernetes resources view described in this article. Learn how to [deploy Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json) on your cluster.
+Azure Monitor for containers provides more in-depth information about nodes and containers of the cluster when compared to the Kubernetes resource view described in this article. Learn how to [deploy Azure Monitor for containers](../../azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md?toc=/azure/azure-arc/kubernetes/toc.json) on your cluster.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
To deliver this experience, you need to deploy the [Azure Arc resource bridge](.
## Supported VMware vSphere versions
-Azure Arc-enabled VMware vSphere (preview) works with VMware vSphere version 6.7.
+Azure Arc-enabled VMware vSphere (preview) works with VMware vSphere versions 6.7 and 7.
> [!NOTE] > Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 2,500 VMs. If your vCenter has more than 2,500 VMs, we don't recommend using Arc-enabled VMware vSphere with it at this point.
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
First, the script deploys a virtual appliance called [Azure Arc resource bridge
### vCenter Server -- vCenter Server version 6.7.
+- vCenter Server version 6.7 or 7.
- A virtual network that can provide internet access, directly or through a proxy. It must also be possible for VMs on this network to communicate with the vCenter server on a TCP port (usually 443).
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
# Install Log Analytics agent on Windows computers
-This article provides details on installing the Log Analytics agent on Windows computers using the following methods:
+This article provides information on how to install the Log Analytics agent on Windows computers by using the following methods:
-* Manual installation using the [setup wizard](#install-agent-using-setup-wizard) or [command line](#install-agent-using-command-line).
-* [Azure Automation Desired State Configuration (DSC)](#install-agent-using-dsc-in-azure-automation).
+* Manual installation using the [setup wizard](#install-agent-using-setup-wizard) or [command line](#install-agent-using-command-line)
+* [Azure Automation Desired State Configuration (DSC)](#install-agent-using-dsc-in-azure-automation)
-The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See [Installation options](./log-analytics-agent.md#installation-options) for more efficient options you can use for Azure virtual machines.
+The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. For more efficient options that you can use for Azure virtual machines, see [Installation options](./log-analytics-agent.md#installation-options).
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)] > [!NOTE]
-> Installing the Log Analytics agent will typically not require you to restart the machine.
+> Installing the Log Analytics agent typically won't require you to restart the machine.
## Supported operating systems
-See [Overview of Azure Monitor agents](agents-overview.md#supported-operating-systems) for a list of Windows versions supported by the Log Analytics agent.
+For a list of Windows versions supported by the Log Analytics agent, see [Overview of Azure Monitor agents](agents-overview.md#supported-operating-systems).
-### SHA-2 Code Signing Support Requirement
-The Windows agent will begin to exclusively use SHA-2 signing on August 17, 2020. This change will impact customers using the Log Analytics agent on a legacy OS as part of any Azure service (Azure Monitor, Azure Automation, Azure Update Management, Azure Change Tracking, Microsoft Defender for Cloud, Microsoft Sentinel, Windows Defender ATP). The change does not require any customer action unless you are running the agent on a legacy OS version (Windows 7, Windows Server 2008 R2 and Windows Server 2008). Customers running on a legacy OS version are required to take the following actions on their machines before August 17, 2020 or their agents will stop sending data to their Log Analytics workspaces:
+### SHA-2 code signing support requirement
+
+The Windows agent began to exclusively use SHA-2 signing on August 17, 2020. This change affected customers using the Log Analytics agent on a legacy OS as part of any Azure service, such as Azure Monitor, Azure Automation, Azure Update Management, Azure Change Tracking, Microsoft Defender for Cloud, Microsoft Sentinel, and Windows Defender Advanced Threat Protection.
+
+The change doesn't require any customer action unless you're running the agent on a legacy OS version, such as Windows 7, Windows Server 2008 R2, and Windows Server 2008. Customers running on a legacy OS version were required to take the following actions on their machines before August 17, 2020, or their agents stopped sending data to their Log Analytics workspaces:
+
+1. Install the latest service pack for your OS. The required service pack versions are:
-1. Install the latest Service Pack for your OS. The required service pack versions are:
- Windows 7 SP1 - Windows Server 2008 SP2 - Windows Server 2008 R2 SP1
-2. Install the SHA-2 signing Windows updates for your OS as described in [2019 SHA-2 Code Signing Support requirement for Windows and WSUS](https://support.microsoft.com/help/4472027/2019-sha-2-code-signing-support-requirement-for-windows-and-wsus)
-3. Update to the latest version of the Windows agent (version 10.20.18029).
-4. Recommended to configure the agent to [use TLS 1.2](agent-windows.md#configure-agent-to-use-tls-12).
+1. Install the SHA-2 signing Windows updates for your OS as described in [2019 SHA-2 code signing support requirement for Windows and WSUS](https://support.microsoft.com/help/4472027/2019-sha-2-code-signing-support-requirement-for-windows-and-wsus).
+1. Update to the latest version of the Windows agent (version 10.20.18029).
+1. We recommend that you configure the agent to [use TLS 1.2](agent-windows.md#configure-agent-to-use-tls-12).
## Network requirements
-See [Log Analytics agent overview](./log-analytics-agent.md#network-requirements) for the network requirements for the Windows agent.
+For the network requirements for the Windows agent, see [Log Analytics agent overview](./log-analytics-agent.md#network-requirements).
+
+## Configure agent to use TLS 1.2
-
-## Configure Agent to use TLS 1.2
-[TLS 1.2](/windows-server/security/tls/tls-registry-settings#tls-12) protocol ensures the security of data in transit for communication between the Windows agent and the Log Analytics service. If you're installing on an [operating system without TLS 1.2 enabled by default](../logs/data-security.md#sending-data-securely-using-tls-12), then you should configure TLS 1.2 using the steps below.
+The [TLS 1.2](/windows-server/security/tls/tls-registry-settings#tls-12) protocol ensures the security of data in transit for communication between the Windows agent and the Log Analytics service. If you're installing on an [operating system without TLS 1.2 enabled by default](../logs/data-security.md#sending-data-securely-using-tls-12), configure TLS 1.2 by following these steps:
-1. Locate the following registry subkey: **HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols**
-2. Create a subkey under **Protocols** for TLS 1.2 **HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2**
-3. Create a **Client** subkey under the TLS 1.2 protocol version subkey you created earlier. For example, **HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client**.
-4. Create the following DWORD values under **HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client**:
+1. Locate the following registry subkey: **HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols**.
+1. Create a subkey under **Protocols** for TLS 1.2: **HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2**.
+1. Create a **Client** subkey under the TLS 1.2 protocol version subkey you created earlier. For example, **HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client**.
+1. Create the following DWORD values under **HKLM\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client**:
* **Enabled** [Value = 1] * **DisabledByDefault** [Value = 0]
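As a minimal sketch, the same subkey and DWORD values can also be created with PowerShell from an elevated prompt; the path follows the steps above:

```powershell
# Create the TLS 1.2 Client subkey and the DWORD values described in the preceding steps.
$path = 'HKLM:\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'
New-Item -Path $path -Force | Out-Null
New-ItemProperty -Path $path -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $path -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
```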
-Configure .NET Framework 4.6 or later to support secure cryptography, as by default it is disabled. The [strong cryptography](/dotnet/framework/network-programming/tls#schusestrongcrypto) uses more secure network protocols like TLS 1.2, and blocks protocols that are not secure.
+Configure .NET Framework 4.6 or later to support secure cryptography, because it's disabled by default. The [strong cryptography](/dotnet/framework/network-programming/tls#schusestrongcrypto) setting uses more secure network protocols like TLS 1.2 and blocks protocols that aren't secure.
-1. Locate the following registry subkey: **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\\.NETFramework\v4.0.30319**.
-2. Create the DWORD value **SchUseStrongCrypto** under this subkey with a value of **1**.
-3. Locate the following registry subkey: **HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\\.NETFramework\v4.0.30319**.
-4. Create the DWORD value **SchUseStrongCrypto** under this subkey with a value of **1**.
-5. Restart the system for the settings to take effect.
+1. Locate the following registry subkey: **HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\\.NETFramework\v4.0.30319**.
+1. Create the DWORD value **SchUseStrongCrypto** under this subkey with a value of **1**.
+1. Locate the following registry subkey: **HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\\.NETFramework\v4.0.30319**.
+1. Create the DWORD value **SchUseStrongCrypto** under this subkey with a value of **1**.
+1. Restart the system for the settings to take effect.
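A similar PowerShell sketch sets **SchUseStrongCrypto** for both registry views described above; a restart is still required afterward:

```powershell
# Enable strong cryptography for .NET Framework 4.6 or later (64-bit and 32-bit registry views).
$paths = @(
    'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
    'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319'
)
foreach ($p in $paths) {
    New-ItemProperty -Path $p -Name 'SchUseStrongCrypto' -Value 1 -PropertyType DWord -Force | Out-Null
}
```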
## Workspace ID and key
-Regardless of the installation method used, you will require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then select **Agents management** in the **Settings** section.
+Regardless of the installation method used, you'll require the workspace ID and key for the Log Analytics workspace that the agent will connect to. Select the workspace from the **Log Analytics workspaces** menu in the Azure portal. Then in the **Settings** section, select **Agents management**.
-[![Workspace details](media/log-analytics-agent/workspace-details.png)](media/log-analytics-agent/workspace-details.png#lightbox)
+[![Screenshot that shows workspace details.](media/log-analytics-agent/workspace-details.png)](media/log-analytics-agent/workspace-details.png#lightbox)
> [!NOTE]
-> You can't configure the agent to report to more than one workspace during initial setup. [Add or remove a workspace](agent-manage.md#adding-or-removing-a-workspace) afer installation by updating the settings from Control Panel or PowerShell.
+> You can't configure the agent to report to more than one workspace during initial setup. [Add or remove a workspace](agent-manage.md#adding-or-removing-a-workspace) after installation by updating the settings from Control Panel or PowerShell.
## Install agent using setup wizard
-The following steps install and configure the Log Analytics agent in Azure and Azure Government cloud by using the setup wizard for the agent on your computer. If you want to learn how to configure the agent to also report to a System Center Operations Manager management group, see [deploy the Operations Manager agent with the Agent Setup Wizard](/system-center/scom/manage-deploy-windows-agent-manually#to-deploy-the-operations-manager-agent-with-the-agent-setup-wizard).
-
-1. In your Log Analytics workspace, from the **Windows Servers** page you navigated to earlier, select the appropriate **Download Windows Agent** version to download depending on the processor architecture of the Windows operating system.
-2. Run Setup to install the agent on your computer.
-2. On the **Welcome** page, click **Next**.
-3. On the **License Terms** page, read the license and then click **I Agree**.
-4. On the **Destination Folder** page, change or keep the default installation folder and then click **Next**.
-5. On the **Agent Setup Options** page, choose to connect the agent to Azure Log Analytics and then click **Next**.
-6. On the **Azure Log Analytics** page, perform the following:
- 1. Paste the **Workspace ID** and **Workspace Key (Primary Key)** that you copied earlier. If the computer should report to a Log Analytics workspace in Azure Government cloud, select **Azure US Government** from the **Azure Cloud** drop-down list.
- 2. If the computer needs to communicate through a proxy server to the Log Analytics service, click **Advanced** and provide the URL and port number of the proxy server. If your proxy server requires authentication, type the username and password to authenticate with the proxy server and then click **Next**.
-7. Click **Next** once you have completed providing the necessary configuration settings.<br><br> ![paste Workspace ID and Primary Key](media/agent-windows/log-analytics-mma-setup-laworkspace.png)<br><br>
-8. On the **Ready to Install** page, review your choices and then click **Install**.
-9. On the **Configuration completed successfully** page, click **Finish**.
-
-When complete, the **Microsoft Monitoring Agent** appears in **Control Panel**. To confirm it is reporting to Log Analytics, review [Verify agent connectivity to Log Analytics](#verify-agent-connectivity-to-azure-monitor).
+
+The following steps install and configure the Log Analytics agent in Azure and Azure Government cloud by using the setup wizard for the agent on your computer. If you want to learn how to configure the agent to also report to a System Center Operations Manager management group, see [Deploy the Operations Manager agent with the Agent Setup Wizard](/system-center/scom/manage-deploy-windows-agent-manually#to-deploy-the-operations-manager-agent-with-the-agent-setup-wizard).
+
+1. In your Log Analytics workspace, from the **Windows Servers** page you navigated to earlier, select the appropriate **Download Windows Agent** version to download depending on the processor architecture of the Windows operating system.
+1. Run setup to install the agent on your computer.
+1. On the **Welcome** page, select **Next**.
+1. On the **License Terms** page, read the license and then select **I Agree**.
+1. On the **Destination Folder** page, change or keep the default installation folder and then select **Next**.
+1. On the **Agent Setup Options** page, choose to connect the agent to Azure Log Analytics and then select **Next**.
+1. On the **Azure Log Analytics** page:
+ 1. Paste the **Workspace ID** and **Workspace Key (Primary Key)** that you copied earlier. If the computer should report to a Log Analytics workspace in Azure Government cloud, select **Azure US Government** from the **Azure Cloud** dropdown list.
+ 1. If the computer needs to communicate through a proxy server to the Log Analytics service, select **Advanced** and provide the URL and port number of the proxy server. If your proxy server requires authentication, enter the username and password to authenticate with the proxy server and then select **Next**.
+1. Select **Next** after you've finished providing the necessary configuration settings.<br><br> ![Screenshot that shows pasting Workspace ID and Primary Key.](media/agent-windows/log-analytics-mma-setup-laworkspace.png)<br><br>
+1. On the **Ready to Install** page, review your choices and then select **Install**.
+1. On the **Configuration completed successfully** page, select **Finish**.
+
+When setup is finished, the **Microsoft Monitoring Agent** appears in **Control Panel**. To confirm it's reporting to Log Analytics, review [Verify agent connectivity to Log Analytics](#verify-agent-connectivity-to-azure-monitor).
## Install agent using command line
-The downloaded file for the agent is a self-contained installation package. The setup program for the agent and supporting files are contained in the package and need to be extracted in order to properly install using the command line shown in the following examples.
+
+The downloaded file for the agent is a self-contained installation package. The setup program for the agent and supporting files are contained in the package and must be extracted before you can install the agent by using the command line shown in the following examples.
>[!NOTE]
->If you want to upgrade an agent, you need to use the Log Analytics scripting API. See the topic [Managing and maintaining the Log Analytics agent for Windows and Linux](agent-manage.md) for further information.
+>If you want to upgrade an agent, you need to use the Log Analytics scripting API. For more information, see [Managing and maintaining the Log Analytics agent for Windows and Linux](agent-manage.md).
-The following table highlights the specific parameters supported by setup for the agent, including when deployed using Automation DSC.
+The following table highlights the specific parameters supported by setup for the agent, including when deployed by using Automation DSC.
|MMA-specific options |Notes | ||--| | NOAPM=1 | Optional parameter. Installs the agent without .NET Application Performance Monitoring.|
-|ADD_OPINSIGHTS_WORKSPACE | 1 = Configure the agent to report to a workspace |
-|OPINSIGHTS_WORKSPACE_ID | Workspace ID (guid) for the workspace to add |
-|OPINSIGHTS_WORKSPACE_KEY | Workspace key used to initially authenticate with the workspace |
-|OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE | Specify the cloud environment where the workspace is located <br> 0 = Azure commercial cloud (default) <br> 1 = Azure Government |
+|ADD_OPINSIGHTS_WORKSPACE | 1 = Configure the agent to report to a workspace. |
+|OPINSIGHTS_WORKSPACE_ID | Workspace ID (guid) for the workspace to add. |
+|OPINSIGHTS_WORKSPACE_KEY | Workspace key used to initially authenticate with the workspace. |
+|OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE | Specify the cloud environment where the workspace is located. <br> 0 = Azure commercial cloud (default). <br> 1 = Azure Government. |
|OPINSIGHTS_PROXY_URL | URI for the proxy to use. Example: OPINSIGHTS_PROXY_URL=IPAddress:Port or OPINSIGHTS_PROXY_URL=FQDN:Port |
-|OPINSIGHTS_PROXY_USERNAME | Username to access an authenticated proxy |
-|OPINSIGHTS_PROXY_PASSWORD | Password to access an authenticated proxy |
+|OPINSIGHTS_PROXY_USERNAME | Username to access an authenticated proxy. |
+|OPINSIGHTS_PROXY_PASSWORD | Password to access an authenticated proxy. |
+
+1. To extract the agent installation files, from an elevated command prompt, run `MMASetup-<platform>.exe /c`. You're prompted for the path to extract files to. Alternatively, you can specify the path by passing the arguments `MMASetup-<platform>.exe /c /t:<Full Path>`.
+1. To silently install the agent and configure it to report to a workspace in Azure commercial cloud, from the folder you extracted the setup files to, enter:
-1. To extract the agent installation files, from an elevated command prompt run `MMASetup-<platform>.exe /c` and it will prompt you for the path to extract files to. Alternatively, you can specify the path by passing the arguments `MMASetup-<platform>.exe /c /t:<Full Path>`.
-2. To silently install the agent and configure it to report to a workspace in Azure commercial cloud, from the folder you extracted the setup files to type:
-
```shell setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE=0 OPINSIGHTS_WORKSPACE_ID="<your workspace ID>" OPINSIGHTS_WORKSPACE_KEY="<your workspace key>" AcceptEndUserLicenseAgreement=1 ```
- or to configure the agent to report to Azure US Government cloud, type:
+ Or to configure the agent to report to Azure US Government cloud, enter:
```shell setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE=1 OPINSIGHTS_WORKSPACE_ID="<your workspace ID>" OPINSIGHTS_WORKSPACE_KEY="<your workspace key>" AcceptEndUserLicenseAgreement=1 ```+ >[!NOTE]
- >The string values for the parameters *OPINSIGHTS_WORKSPACE_ID* and *OPINSIGHTS_WORKSPACE_KEY* need to be encapsulated in double-quotes to instruct Windows Installer to interprit as valid options for the package.
+ >The string values for the parameters *OPINSIGHTS_WORKSPACE_ID* and *OPINSIGHTS_WORKSPACE_KEY* need to be enclosed in double quotation marks to instruct Windows Installer to interpret as valid options for the package.
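If the machine connects through an authenticated proxy, a hypothetical command combining the proxy parameters from the preceding table might look like the following; the proxy address and credentials are placeholders:

```shell
setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE=0 OPINSIGHTS_WORKSPACE_ID="<your workspace ID>" OPINSIGHTS_WORKSPACE_KEY="<your workspace key>" OPINSIGHTS_PROXY_URL="proxy.contoso.com:8080" OPINSIGHTS_PROXY_USERNAME="<proxy username>" OPINSIGHTS_PROXY_PASSWORD="<proxy password>" AcceptEndUserLicenseAgreement=1
```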
## Install agent using DSC in Azure Automation
-You can use the following script example to install the agent using Azure Automation DSC. If you do not have an Automation account, see [Get started with Azure Automation](../../automation/index.yml) to understand requirements and steps for creating an Automation account required before using Automation DSC. If you are not familiar with Automation DSC, review [Getting started with Automation DSC](../../automation/automation-dsc-getting-started.md).
+You can use the following script example to install the agent by using Azure Automation DSC. If you don't have an Automation account, see [Get started with Azure Automation](../../automation/index.yml) to understand requirements and steps for creating an Automation account required before you use Automation DSC. If you aren't familiar with Automation DSC, see [Getting started with Automation DSC](../../automation/automation-dsc-getting-started.md).
The following example installs the 64-bit agent, identified by the `URI` value. You can also use the 32-bit version by replacing the URI value. The URIs for both versions are: -- Windows 64-bit agent - https://go.microsoft.com/fwlink/?LinkId=828603-- Windows 32-bit agent - https://go.microsoft.com/fwlink/?LinkId=828604-
+- **Windows 64-bit agent:** https://go.microsoft.com/fwlink/?LinkId=828603
+- **Windows 32-bit agent:** https://go.microsoft.com/fwlink/?LinkId=828604
>[!NOTE]
->This procedure and script example does not support upgrading the agent already deployed to a Windows computer.
-
-The 32-bit and 64-bit versions of the agent package have different product codes and new versions released also have a unique value. The product code is a GUID that is the principal identification of an application or product and is represented by the Windows Installer **ProductCode** property. The `ProductId` value in the **MMAgent.ps1** script has to match the product code from the 32-bit or 64-bit agent installer package.
-
-To retrieve the product code from the agent install package directly, you can use Orca.exe from the [Windows SDK Components for Windows Installer Developers](/windows/win32/msi/platform-sdk-components-for-windows-installer-developers) that is a component of the Windows Software Development Kit or using PowerShell following an [example script](https://www.scconfigmgr.com/2014/08/22/how-to-get-msi-file-information-with-powershell/) written by a Microsoft Valuable Professional (MVP). For either approach, you first need to extract the **MOMagent.msi** file from the MMASetup installation package. This is shown earlier in the first step under the section [Install the agent using the command line](#install-agent-using-command-line).
-
-1. Import the xPSDesiredStateConfiguration DSC Module from [https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration](https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration) into Azure Automation.
-2. Create Azure Automation variable assets for *OPSINSIGHTS_WS_ID* and *OPSINSIGHTS_WS_KEY*. Set *OPSINSIGHTS_WS_ID* to your Log Analytics workspace ID and set *OPSINSIGHTS_WS_KEY* to the primary key of your workspace.
-3. Copy the script and save it as MMAgent.ps1.
-
-```powershell
-Configuration MMAgent
-{
- $OIPackageLocalPath = "C:\Deploy\MMASetup-AMD64.exe"
- $OPSINSIGHTS_WS_ID = Get-AutomationVariable -Name "OPSINSIGHTS_WS_ID"
- $OPSINSIGHTS_WS_KEY = Get-AutomationVariable -Name "OPSINSIGHTS_WS_KEY"
-
- Import-DscResource -ModuleName xPSDesiredStateConfiguration
- Import-DscResource -ModuleName PSDesiredStateConfiguration
-
- Node OMSnode {
- Service OIService
- {
- Name = "HealthService"
- State = "Running"
- DependsOn = "[Package]OI"
- }
-
- xRemoteFile OIPackage {
- Uri = "https://go.microsoft.com/fwlink/?LinkId=828603"
- DestinationPath = $OIPackageLocalPath
- }
-
- Package OI {
- Ensure = "Present"
- Path = $OIPackageLocalPath
- Name = "Microsoft Monitoring Agent"
- ProductId = "8A7F2C51-4C7D-4BFD-9014-91D11F24AAE2"
- Arguments = '/C:"setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID=' + $OPSINSIGHTS_WS_ID + ' OPINSIGHTS_WORKSPACE_KEY=' + $OPSINSIGHTS_WS_KEY + ' AcceptEndUserLicenseAgreement=1"'
- DependsOn = "[xRemoteFile]OIPackage"
+>This procedure and script example doesn't support upgrading the agent already deployed to a Windows computer.
+
+The 32-bit and 64-bit versions of the agent package have different product codes, and new versions released also have a unique value. The product code is a GUID that's the principal identification of an application or product and is represented by the Windows Installer **ProductCode** property. The `ProductId` value in the **MMAgent.ps1** script has to match the product code from the 32-bit or 64-bit agent installer package.
+
+To retrieve the product code from the agent installer package directly, you can use Orca.exe from the [Windows SDK Components for Windows Installer Developers](/windows/win32/msi/platform-sdk-components-for-windows-installer-developers), which is a component of the Windows Software Development Kit. Or you can use PowerShell by following an [example script](https://www.scconfigmgr.com/2014/08/22/how-to-get-msi-file-information-with-powershell/) written by a Microsoft Most Valuable Professional (MVP). For either approach, you first need to extract the **MOMagent.msi** file from the MMASetup installation package. For instructions, see the first step in the section [Install agent using command line](#install-agent-using-command-line).
+
+1. Import the xPSDesiredStateConfiguration DSC Module from [https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration](https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration) into Azure Automation.
+1. Create Azure Automation variable assets for *OPSINSIGHTS_WS_ID* and *OPSINSIGHTS_WS_KEY*. Set *OPSINSIGHTS_WS_ID* to your Log Analytics workspace ID. Set *OPSINSIGHTS_WS_KEY* to the primary key of your workspace.
+1. Copy the script and save it as **MMAgent.ps1**.
+
+ ```powershell
+ Configuration MMAgent
+ {
+ $OIPackageLocalPath = "C:\Deploy\MMASetup-AMD64.exe"
+ $OPSINSIGHTS_WS_ID = Get-AutomationVariable -Name "OPSINSIGHTS_WS_ID"
+ $OPSINSIGHTS_WS_KEY = Get-AutomationVariable -Name "OPSINSIGHTS_WS_KEY"
+
+ Import-DscResource -ModuleName xPSDesiredStateConfiguration
+ Import-DscResource -ModuleName PSDesiredStateConfiguration
+
+ Node OMSnode {
+ Service OIService
+ {
+ Name = "HealthService"
+ State = "Running"
+ DependsOn = "[Package]OI"
+ }
+
+ xRemoteFile OIPackage {
+ Uri = "https://go.microsoft.com/fwlink/?LinkId=828603"
+ DestinationPath = $OIPackageLocalPath
+ }
+
+ Package OI {
+ Ensure = "Present"
+ Path = $OIPackageLocalPath
+ Name = "Microsoft Monitoring Agent"
+ ProductId = "8A7F2C51-4C7D-4BFD-9014-91D11F24AAE2"
+ Arguments = '/C:"setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID=' + $OPSINSIGHTS_WS_ID + ' OPINSIGHTS_WORKSPACE_KEY=' + $OPSINSIGHTS_WS_KEY + ' AcceptEndUserLicenseAgreement=1"'
+ DependsOn = "[xRemoteFile]OIPackage"
+ }
} }
-}
-
-```
+
+ ```
-4. Update the `ProductId` value in the script with the product code extracted from the latest version of the agent install package using the methods recommended earlier.
-5. [Import the MMAgent.ps1 configuration script](../../automation/automation-dsc-getting-started.md#import-a-configuration-into-azure-automation) into your Automation account.
-6. [Assign a Windows computer or node](../../automation/automation-dsc-getting-started.md#enable-an-azure-resource-manager-vm-for-management-with-state-configuration) to the configuration. Within 15 minutes, the node checks its configuration and the agent is pushed to the node.
+1. Update the `ProductId` value in the script with the product code extracted from the latest version of the agent installation package by using the methods recommended earlier.
+1. [Import the MMAgent.ps1 configuration script](../../automation/automation-dsc-getting-started.md#import-a-configuration-into-azure-automation) into your Automation account.
+1. [Assign a Windows computer or node](../../automation/automation-dsc-getting-started.md#enable-an-azure-resource-manager-vm-for-management-with-state-configuration) to the configuration. Within 15 minutes, the node checks its configuration and the agent is pushed to the node.
## Verify agent connectivity to Azure Monitor
-Once installation of the agent is complete, verifying it is successfully connected and reporting can be accomplished in two ways.
+After installation of the agent is finished, you can verify that it's successfully connected and reporting in two ways.
-From the computer in **Control Panel**, find the item **Microsoft Monitoring Agent**. Select it and on the **Azure Log Analytics** tab, the agent should display a message stating: **The Microsoft Monitoring Agent has successfully connected to the Microsoft Operations Management Suite service.**<br><br> ![MMA connection status to Log Analytics](media/agent-windows/log-analytics-mma-laworkspace-status.png)
+From the computer in **Control Panel**, find the item **Microsoft Monitoring Agent**. Select it, and on the **Azure Log Analytics** tab, the agent should display a message stating *The Microsoft Monitoring Agent has successfully connected to the Microsoft Operations Management Suite service.*<br><br> ![Screenshot that shows the MMA connection status to Log Analytics message.](media/agent-windows/log-analytics-mma-laworkspace-status.png)
-You can also perform a simple log query in the Azure portal.
+You can also perform a log query in the Azure portal:
1. In the Azure portal, search for and select **Monitor**.
-1. Select **Logs** in the menu.
-1. On the **Logs** pane, in the query field type:
+1. Select **Logs** on the menu.
+1. On the **Logs** pane, in the query field, enter:
``` Heartbeat
You can also perform a simple log query in the Azure portal.
| where TimeGenerated > ago(30m) ```
-In the search results returned, you should see heartbeat records for the computer indicating it is connected and reporting to the service.
+In the search results that are returned, you should see heartbeat records for the computer that indicate it's connected and reporting to the service.
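To narrow the results to a single machine, the heartbeat query can be scoped by computer name; the name below is a placeholder:

```
Heartbeat
| where Computer == "<computer name>"
| where TimeGenerated > ago(30m)
```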
## Cache information Data from the Log Analytics agent is cached on the local machine at *C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State* before it's sent to Azure Monitor. The agent attempts to upload every 20 seconds. If it fails, it will wait an exponentially increasing length of time until it succeeds. It will wait 30 seconds before the second attempt, 60 seconds before the next, 120 seconds, and so on to a maximum of 8.5 hours between retries until it successfully connects again. This wait time is slightly randomized to avoid all agents simultaneously attempting connection. Oldest data is discarded when the maximum buffer is reached.
-The default cache size is 50 MB but can be configured between a minimum of 5 MB and maximum of 1.5 GB. It's stored in the registry key *HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Persistence Cache Maximum*. The value represents the number of pages, with 8 KB per page.
-
+The default cache size is 50 MB, but it can be configured between a minimum of 5 MB and maximum of 1.5 GB. It's stored in the registry key *HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Persistence Cache Maximum*. The value represents the number of pages, with 8 KB per page.
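Because the value is a page count at 8 KB per page, the 50-MB default corresponds to 6,400 pages (50 * 1024 / 8). A hypothetical PowerShell sketch that raises the cache to roughly 100 MB (12,800 pages), assuming the cache maximum is a value under the **Parameters** key as the registry path above suggests, would be:

```powershell
# Set the agent cache maximum to about 100 MB; the value is a page count, with 8 KB per page.
$pages = (100 * 1024) / 8   # 12800 pages
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\HealthService\Parameters' `
    -Name 'Persistence Cache Maximum' -Value $pages -Type DWord
```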
## Next steps - Review [Managing and maintaining the Log Analytics agent for Windows and Linux](agent-manage.md) to learn about how to reconfigure, upgrade, or remove the agent from the virtual machine.--- Review [Troubleshooting the Windows agent](agent-windows-troubleshoot.md) if you encounter issues while installing or managing the agent.
+- Review [Troubleshooting the Windows agent](agent-windows-troubleshoot.md) if you encounter issues while you install or manage the agent.
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Title: Overview of the Azure monitoring agents| Microsoft Docs
-description: This article provides a detailed overview of the Azure agents available which support monitoring virtual machines hosted in Azure or hybrid environment.
+description: This article provides a detailed overview of the Azure agents that are available and support monitoring virtual machines hosted in an Azure or hybrid environment.
# Overview of Azure Monitor agents
-Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. There are many legacy agents that exist today for this purpose, that will all be eventually replaced by the new consolidated [Azure Monitor agent](./azure-monitor-agent-overview.md). This article describes both the legacy agents as well as the new Azure Monitor agent.
+Virtual machines and other compute resources require an agent to collect monitoring data required to measure the performance and availability of their guest operating system and workloads. Many legacy agents exist today for this purpose. Eventually, they'll all be replaced by the new consolidated [Azure Monitor agent](./azure-monitor-agent-overview.md). This article describes the legacy agents and the new Azure Monitor agent.
-The general recommendation is to use the Azure Monitor agent if you are not bound by [these limitations](./azure-monitor-agent-overview.md#current-limitations), as it consolidates the features of all the legacy agents listed below and provides these [additional benefits](#azure-monitor-agent).
-If you do require the limitations today, you may continue using the other legacy agents listed below until **August 2024**. [Learn more](./azure-monitor-agent-overview.md)
+The general recommendation is to use the Azure Monitor agent if you aren't bound by [these limitations](./azure-monitor-agent-overview.md#current-limitations) because it consolidates the features of all the legacy agents listed here and provides [other benefits](#azure-monitor-agent).
+If these limitations affect you today, you may continue to use the other legacy agents listed here until **August 2024**. [Learn more](./azure-monitor-agent-overview.md).
## Summary of agents
-The following tables provide a quick comparison of the telemetry agents for Windows and Linux. Further detail on each is provided in the section below.
+The following tables provide a quick comparison of the telemetry agents for Windows and Linux. More information on each agent is provided in the following sections.
### Windows agents | | Azure Monitor agent | Diagnostics<br>extension (WAD) | Log Analytics<br>agent | Dependency<br>agent | |:|:-|:|:|:|
-| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc)<br>[Windows Client OS (preview)](./azure-monitor-agent-windows-client.md) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
+| **Environments supported** | Azure<br><br>Other cloud (Azure Arc)<br><br>On-premises (Azure Arc)<br><br>[Windows Client OS (preview)](./azure-monitor-agent-windows-client.md) | Azure | Azure<br><br>Other cloud<br><br>On-premises | Azure<br><br>Other cloud<br><br>On-premises |
| **Agent requirements** | None | None | None | Requires Log Analytics agent |
-| **Data collected** | Event Logs<br>Performance<br>File based logs (preview)<br> | Event Logs<br>ETW events<br>Performance<br>File based logs<br>IIS logs<br>.NET app logs<br>Crash dumps<br>Agent diagnostics logs | Event Logs<br>Performance<br>File based logs<br>IIS logs<br>Insights and solutions<br>Other services | Process dependencies<br>Network connection metrics |
-| **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br>Azure Monitor Metrics<br>Event Hub | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
-| **Services and**<br>**features**<br>**supported** | Log Analytics<br>Metrics explorer<br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | Metrics explorer | VM insights<br>Log Analytics<br>Azure Automation<br>Microsoft Defender for Cloud<br>Microsoft Sentinel | VM insights<br>Service Map |
+| **Data collected** | Event logs<br><br>Performance<br><br>File-based logs (preview)<br> | Event logs<br><br>ETW events<br><br>Performance<br><br>File-based logs<br><br>IIS logs<br><br>.NET app logs<br><br>Crash dumps<br><br>Agent diagnostics logs | Event logs<br><br>Performance<br><br>File-based logs<br><br>IIS logs<br><br>Insights and solutions<br><br>Other services | Process dependencies<br><br>Network connection metrics |
+| **Data sent to** | Azure Monitor Logs<br><br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br><br>Azure Monitor Metrics<br><br>Event hub | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
+| **Services and**<br>**features**<br>**supported** | Log Analytics<br><br>Metrics Explorer<br><br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | Metrics Explorer | VM insights<br><br>Log Analytics<br><br>Azure Automation<br><br>Microsoft Defender for Cloud<br><br>Microsoft Sentinel | VM insights<br><br>Service Map |
### Linux agents | | Azure Monitor agent | Diagnostics<br>extension (LAD) | Telegraf<br>agent | Log Analytics<br>agent | Dependency<br>agent | |:|:-|:|:|:|:|
-| **Environments supported** | Azure<br>Other cloud (Azure Arc)<br>On-premises (Azure Arc) | Azure | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises | Azure<br>Other cloud<br>On-premises |
+| **Environments supported** | Azure<br><br>Other cloud (Azure Arc)<br><br>On-premises (Azure Arc) | Azure | Azure<br><br>Other cloud<br><br>On-premises | Azure<br><br>Other cloud<br><br>On-premises | Azure<br><br>Other cloud<br><br>On-premises |
| **Agent requirements** | None | None | None | None | Requires Log Analytics agent |
-| **Data collected** | Syslog<br>Performance<br>File based logs (preview)<br> | Syslog<br>Performance | Performance | Syslog<br>Performance| Process dependencies<br>Network connection metrics |
-| **Data sent to** | Azure Monitor Logs<br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br>Event Hub | Azure Monitor Metrics | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
-| **Services and**<br>**features**<br>**supported** | Log Analytics<br>Metrics explorer<br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | | Metrics explorer | VM insights<br>Log Analytics<br>Azure Automation<br>Microsoft Defender for Cloud<br>Microsoft Sentinel | VM insights<br>Service Map |
+| **Data collected** | Syslog<br><br>Performance<br><br>File-based logs (preview)<br> | Syslog<br><br>Performance | Performance | Syslog<br><br>Performance| Process dependencies<br><br>Network connection metrics |
+| **Data sent to** | Azure Monitor Logs<br><br>Azure Monitor Metrics<sup>1</sup> | Azure Storage<br><br>Event hub | Azure Monitor Metrics | Azure Monitor Logs | Azure Monitor Logs<br>(through Log Analytics agent) |
+| **Services and**<br>**features**<br>**supported** | Log Analytics<br><br>Metrics Explorer<br><br>Microsoft Sentinel ([view scope](./azure-monitor-agent-overview.md#supported-services-and-features)) | | Metrics Explorer | VM insights<br><br>Log Analytics<br><br>Azure Automation<br><br>Microsoft Defender for Cloud<br><br>Microsoft Sentinel | VM insights<br><br>Service Map |
-<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
+<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
## Azure Monitor agent
-The [Azure Monitor agent](azure-monitor-agent-overview.md) is meant to replace the Log Analytics agent, Azure Diagnostic extension and Telegraf agent for both Windows and Linux machines. It can send data to both Azure Monitor Logs and Azure Monitor Metrics and uses [Data Collection Rules (DCR)](../essentials/data-collection-rule-overview.md) which provide a more scalable method of configuring data collection and destinations for each agent.
+The [Azure Monitor agent](azure-monitor-agent-overview.md) is meant to replace the Log Analytics agent, Azure Diagnostics extension, and Telegraf agent for Windows and Linux machines. It can send data to Azure Monitor Logs and Azure Monitor Metrics and uses [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md). DCRs provide a more scalable method of configuring data collection and destinations for each agent.
Use the Azure Monitor agent to gain these benefits: -- Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises. ([Azure Arc-enabled servers](../../azure-arc/servers/overview.md) required for machines outside of Azure.) -- **Cost savings:**
- - Granular targeting via [Data Collection Rules](../essentials/data-collection-rule-overview.md) to collect specific data types from specific machines, as compared to the "all or nothing" mode that Log Analytics agent supports
- - Use XPath queries to filter Windows events that get collected. This helps further reduce ingestion and storage costs.
-- **Centrally configure** collection for different sets of data from different sets of VMs.-- **Simplified management of data collection:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (i.e. "multi-homing") and/or other [supported destinations](./azure-monitor-agent-overview.md#data-sources-and-destinations). Additionally, every action across the data collection lifecycle, from onboarding to deployment to updates, is significantly easier, scalable, and centralized (in Azure) using data collection rules-- **Management of dependent solutions or -- **Security and performance** - For authentication and security, it uses Managed Identity (for virtual machines) and AAD device tokens (for clients) which are both much more secure and ΓÇÿhack proofΓÇÖ than certificates or workspace keys that legacy agents use. This agent performs better at higher EPS (events per second upload rate) compared to legacy agents.-- Manage data collection configuration centrally, using [data collection rules](../essentials/data-collection-rule-overview.md) and use Azure Resource Manager (ARM) templates or policies for management overall.-- Send data to Azure Monitor Logs and Azure Monitor Metrics (preview) for analysis with Azure Monitor. -- Use Windows event filtering or multi-homing for logs on Windows and Linux.
+- Collect guest logs and metrics from any machine in Azure, in other clouds, or on-premises. ([Azure Arc-enabled servers](../../azure-arc/servers/overview.md) are required for machines outside of Azure.)
+- Collect specific data types from specific machines with granular targeting via [DCRs](../essentials/data-collection-rule-overview.md) as compared to the "all or nothing" mode that the Log Analytics agent supports.
+- Use XPath queries to filter Windows events that get collected, which helps to further reduce ingestion and storage costs.
+- Centrally configure collection for different sets of data from different sets of VMs.
+- Simplify management of data collection. Send data from Windows and Linux VMs to multiple Log Analytics workspaces (that is, "multihoming") and/or other [supported destinations](./azure-monitor-agent-overview.md#data-sources-and-destinations). Every action across the data collection lifecycle, from onboarding to deployment to updates, is easier, scalable, and centralized (in Azure) by using DCRs.
+- Manage dependent solutions or services. The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the legacy Log Analytics agents. This management experience is identical for machines in Azure or on-premises/other clouds via Azure Arc, at no added cost.
+- Use Managed Identity (for virtual machines) and Azure Active Directory device tokens (for clients), which are much more secure and "hack proof" than certificates or workspace keys that legacy agents use. This agent performs better at higher events-per-second upload rates compared to legacy agents.
+- Manage data collection configuration centrally by using [DCRs](../essentials/data-collection-rule-overview.md), and use Azure Resource Manager templates or policies for management overall.
+- Send data to Azure Monitor Logs and Azure Monitor Metrics (preview) for analysis with Azure Monitor.
+- Use Windows event filtering or multihoming for logs on Windows and Linux.
<! Send data to Azure Storage for archiving.-- Send data to third-party tools using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).-- Manage the security of your machines using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md). (Available in private preview.)-- Use [VM insights](../vm/vminsights-overview.md) which allows you to monitor your machines at scale and monitors their processes and dependencies on other resources and external processes.. -- Manage the security of your machines using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
+- Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).
+- Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md). (Available in private preview.)
+- Use [VM insights](../vm/vminsights-overview.md), which allows you to monitor your machines at scale and monitors their processes and dependencies on other resources and external processes.
+- Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
- Use different [solutions](../monitor-reference.md#insights-and-curated-visualizations) to monitor a particular service or application. */ -->
-When compared with the legacy agents, the Azure Monitor Agent has [these limitations currently](./azure-monitor-agent-overview.md#current-limitations).
+When compared with the legacy agents, the Azure Monitor agent has [these limitations currently](./azure-monitor-agent-overview.md#current-limitations).
## Log Analytics agent > [!WARNING] > The Log Analytics agents are on a deprecation path and will no longer be supported after August 31, 2024.
-The legacy [Log Analytics agent](./log-analytics-agent.md) collects monitoring data from the guest operating system and workloads of virtual machines in Azure, other cloud providers, and on-premises machines. It sends data to a Log Analytics workspace. The Log Analytics agent is the same agent used by System Center Operations Manager, and you can multihome agent computers to communicate with your management group and Azure Monitor simultaneously. This agent is also required by certain insights in Azure Monitor and other services in Azure.
+The legacy [Log Analytics agent](./log-analytics-agent.md) collects monitoring data from the guest operating system and workloads of virtual machines in Azure, other cloud providers, and on-premises machines. It sends data to a Log Analytics workspace. The Log Analytics agent is the same agent used by System Center Operations Manager. You can multihome agent computers to communicate with your management group and Azure Monitor simultaneously. This agent is also required by certain insights in Azure Monitor and other services in Azure.
> [!NOTE] > The Log Analytics agent for Windows is often referred to as Microsoft Monitoring Agent (MMA). The Log Analytics agent for Linux is often referred to as OMS agent.
The legacy [Log Analytics agent](./log-analytics-agent.md) collects monitoring d
Use the Log Analytics agent if you need to: * Collect logs and performance data from Azure virtual machines or hybrid machines hosted outside of Azure.
-* Send data to a Log Analytics workspace to take advantage of features supported by [Azure Monitor Logs](../logs/data-platform-logs.md) such as [log queries](../logs/log-query-overview.md).
-* Use [VM insights](../vm/vminsights-overview.md) which allows you to monitor your machines at scale and monitors their processes and dependencies on other resources and external processes..
-* Manage the security of your machines using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
+* Send data to a Log Analytics workspace to take advantage of features supported by [Azure Monitor Logs](../logs/data-platform-logs.md), such as [log queries](../logs/log-query-overview.md).
+* Use [VM insights](../vm/vminsights-overview.md), which allows you to monitor your machines at scale and monitor their processes and dependencies on other resources and external processes.
+* Manage the security of your machines by using [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) or [Microsoft Sentinel](../../sentinel/overview.md).
* Use [Azure Automation Update Management](../../automation/update-management/overview.md), [Azure Automation State Configuration](../../automation/automation-dsc-overview.md), or [Azure Automation Change Tracking and Inventory](../../automation/change-tracking/overview.md) to deliver comprehensive management of your Azure and non-Azure machines. * Use different [solutions](../monitor-reference.md#insights-and-curated-visualizations) to monitor a particular service or application.
-Limitations of the Log Analytics agent include:
+Limitations of the Log Analytics agent:
-- Cannot send data to Azure Monitor Metrics, Azure Storage, or Azure Event Hubs.-- Difficult to configure unique monitoring definitions for individual agents.-- Difficult to manage at scale since each virtual machine has a unique configuration.
+- Can't send data to Azure Monitor Metrics, Azure Storage, or Azure Event Hubs
+- Difficult to configure unique monitoring definitions for individual agents
+- Difficult to manage at scale because each virtual machine has a unique configuration
-## Azure diagnostics extension
+## Azure Diagnostics extension
-The [Azure Diagnostics extension](./diagnostics-extension-overview.md) collects monitoring data from the guest operating system and workloads of Azure virtual machines and other compute resources. It primarily collects data into Azure Storage but also allows you to define data sinks to also send data to other destinations such as Azure Monitor Metrics and Azure Event Hubs.
+The [Azure Diagnostics extension](./diagnostics-extension-overview.md) collects monitoring data from the guest operating system and workloads of Azure virtual machines and other compute resources. It primarily collects data into Azure Storage. It also allows you to define data sinks to send data to other destinations, such as Azure Monitor Metrics and Azure Event Hubs.
-Use Azure diagnostic extension if you need to:
+Use the Azure Diagnostics extension if you need to:
- Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md).-- Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [metrics explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).-- Send data to third-party tools using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).
+- Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [Metrics Explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).
+- Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).
- Collect [Boot Diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
-Limitations of Azure diagnostics extension include:
+Limitations of the Azure Diagnostics extension:
-- Can only be used with Azure resources.-- Limited ability to send data to Azure Monitor Logs.
+- Can only be used with Azure resources
+- Limited ability to send data to Azure Monitor Logs
## Telegraf agent
-The [InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md) is used to collect performance data from Linux computers to Azure Monitor Metrics.
+The [InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md) is used to collect performance data from Linux computers to send to Azure Monitor Metrics.
-Use Telegraf agent if you need to:
+Use the Telegraf agent if you need to:
-* Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [metrics explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Linux only).
+* Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [Metrics Explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Linux only).
## Dependency agent
-The Dependency agent collects discovered data about processes running on the machine and external process dependencies.
+The Dependency agent collects discovered data about processes running on the machine and external process dependencies.
Use the Dependency agent if you need to: * Use the Map feature of [VM insights](../vm/vminsights-overview.md) or the [Service Map](../vm/service-map.md) solution.
-Consider the following when using the Dependency agent:
+Consider the following factors when you use the Dependency agent:
- The Dependency agent requires the Log Analytics agent to be installed on the same machine.-- On Linux computers, the Log Analytics agent must be installed before the Azure Diagnostic Extension.-- On both the Windows and Linux versions of the Dependency Agent, data collection is done using a user-space service and a kernel driver.
+- On Linux computers, the Log Analytics agent must be installed before the Azure Diagnostics extension.
+- On both the Windows and Linux versions of the Dependency agent, data collection is done by using a user-space service and a kernel driver.
## Virtual machine extensions
-The [Azure Monitor agent](./azure-monitor-agent-manage.md#virtual-machine-extension-details) is only available as a virtual machine extension. The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) install the Log Analytics agent on Azure virtual machines. The Azure Monitor Dependency extension for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md) install the Dependency agent on Azure virtual machines. These are the same agents described above but allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
+The [Azure Monitor agent](./azure-monitor-agent-manage.md#virtual-machine-extension-details) is only available as a virtual machine extension. The Log Analytics extension for [Windows](../../virtual-machines/extensions/oms-windows.md) and [Linux](../../virtual-machines/extensions/oms-linux.md) installs the Log Analytics agent on Azure virtual machines. The Azure Monitor Dependency extension for [Windows](../../virtual-machines/extensions/agent-dependency-windows.md) and [Linux](../../virtual-machines/extensions/agent-dependency-linux.md) installs the Dependency agent on Azure virtual machines. These are the same agents previously described, but they allow you to manage them through [virtual machine extensions](../../virtual-machines/extensions/overview.md). You should use extensions to install and manage the agents whenever possible.
-On hybrid machines, use [Azure Arc-enabled servers](../../azure-arc/servers/manage-vm-extensions.md) to deploy the Azure Monitor agent, Log Analytics and Azure Monitor Dependency VM extensions.
+On hybrid machines, use [Azure Arc-enabled servers](../../azure-arc/servers/manage-vm-extensions.md) to deploy the Azure Monitor agent, Log Analytics, and Azure Monitor Dependency VM extensions.
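For illustration, here's a minimal sketch of installing the Azure Monitor agent extension on an Azure Windows VM with PowerShell. It assumes the Az PowerShell module; the angle-bracket values are placeholders for your own resource names.

```powershell
# Minimal sketch: install the Azure Monitor agent extension on an existing Azure Windows VM.
# Replace the angle-bracket values with your resource group, VM name, and region.
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent `
    -ExtensionType AzureMonitorWindowsAgent `
    -Publisher Microsoft.Azure.Monitor `
    -ResourceGroupName <resource-group-name> `
    -VMName <virtual-machine-name> `
    -Location <location>
```

For Azure Arc-enabled servers, the equivalent sketch uses `New-AzConnectedMachineExtension` instead, as shown in the proxy configuration examples later in this article.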
## Supported operating systems
The following tables list the operating systems that are supported by the Azure
| Windows 7 SP1<br>(Server scenarios only<sup>1</sup>) | | X | X | | | Azure Stack HCI | | X | | |
-<sup>1</sup> Running the OS on server hardware, i.e. machines that are always connected, always turned on, and not running other workloads (PC, office, browser, etc.)
+<sup>1</sup> Running the OS on server hardware, that is, machines that are always connected, always turned on, and not running other workloads (PC, office, browser)<br>
<sup>2</sup> Using the Azure Monitor agent [client installer (preview)](./azure-monitor-agent-windows-client.md)+ ### Linux > [!NOTE]
-> For Dependency Agent, please additionally check for supported kernel versions. See "Dependency agent Linux kernel support" table below for details
-
+> For the Dependency agent, check for supported kernel versions. For more information, see the "Dependency agent Linux kernel support" table.
-| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Dependency agent | Diagnostics extension <sup>2</sup>|
+| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent <sup>1</sup> | Dependency agent | Diagnostics extension |
|:|::|::|::|:: | AlmaLinux | X | X | | | | Amazon Linux 2017.09 | | X | | | | Amazon Linux 2 | | X | | |
-| CentOS Linux 8 | X <sup>3</sup> | X | X | |
+| CentOS Linux 8 | X <sup>2</sup> | X | X | |
| CentOS Linux 7 | X | X | X | X | | CentOS Linux 6 | | X | | | | CentOS Linux 6.5+ | | X | X | X | | Debian 11 <sup>1</sup> | X | | | | | Debian 10 <sup>1</sup> | X | | | |
-| Debian 9 | X | X | x | X |
+| Debian 9 | X | X | X | X |
| Debian 8 | | X | X | | | Debian 7 | | | | X | | OpenSUSE 13.1+ | | | | X |
-| Oracle Linux 8 | X <sup>3</sup> | X | | |
+| Oracle Linux 8 | X <sup>2</sup> | X | | |
| Oracle Linux 7 | X | X | | X | | Oracle Linux 6 | | X | | | | Oracle Linux 6.4+ | | X | | X | | Red Hat Enterprise Linux Server 8.5, 8.6 | X | X | | |
-| Red Hat Enterprise Linux Server 8, 8.1, 8.2, 8.3, 8.4 | X <sup>3</sup> | X | X | |
+| Red Hat Enterprise Linux Server 8, 8.1, 8.2, 8.3, 8.4 | X <sup>2</sup> | X | X | |
| Red Hat Enterprise Linux Server 7 | X | X | X | X | | Red Hat Enterprise Linux Server 6 | | X | X | | | Red Hat Enterprise Linux Server 6.7+ | | X | X | X | | Rocky Linux | X | X | | |
-| SUSE Linux Enterprise Server 15.2 | X <sup>3</sup> | | | |
-| SUSE Linux Enterprise Server 15.1 | X <sup>3</sup> | X | | |
+| SUSE Linux Enterprise Server 15.2 | X <sup>2</sup> | | | |
+| SUSE Linux Enterprise Server 15.1 | X <sup>2</sup> | X | | |
| SUSE Linux Enterprise Server 15 SP1 | X | X | X | | | SUSE Linux Enterprise Server 15 | X | X | X | | | SUSE Linux Enterprise Server 12 SP5 | X | X | X | X | | SUSE Linux Enterprise Server 12 | X | X | X | X | | Ubuntu 22.04 LTS | X | | | |
-| Ubuntu 20.04 LTS | X | X | X | X <sup>4</sup> |
+| Ubuntu 20.04 LTS | X | X | X | X <sup>3</sup> |
| Ubuntu 18.04 LTS | X | X | X | X | | Ubuntu 16.04 LTS | X | X | X | X | | Ubuntu 14.04 LTS | | X | | X |
-<sup>1</sup> Requires Python (2 or 3) to be installed on the machine.
-
-<sup>3</sup> Known issue collecting Syslog events in versions prior to 1.9.0.
-
-<sup>4</sup> Not all kernel versions are supported, check supported kernel versions below.
+<sup>1</sup> Requires Python (2 or 3) to be installed on the machine.<br>
+<sup>2</sup> Known issue collecting Syslog events in versions prior to 1.9.0.<br>
+<sup>3</sup> Not all kernel versions are supported. Check the supported kernel versions in the following table.
#### Dependency agent Linux kernel support
-Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.* the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
+Because the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.*, the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent.
| Distribution | OS version | Kernel version | |:|:|:|
Since the Dependency agent works at the kernel level, support is also dependent
## Next steps
-Get more details on each of the agents at the following:
+For more information on each of the agents, see the following articles:
- [Overview of the Azure Monitor agent](./azure-monitor-agent-overview.md) - [Overview of the Log Analytics agent](./log-analytics-agent.md)
azure-monitor Azure Monitor Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-overview.md
description: Overview of the Azure Monitor agent, which collects monitoring data
Previously updated : 5/19/2022 Last updated : 7/21/2022 # Azure Monitor agent overview
-The Azure Monitor agent (AMA) collects monitoring data from the guest operating system of [supported infrastucture](#supported-resource-types) and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and how to configure data collection.
-Here's an **introductory video** explaining all about this new agent, including a quick demo of how to set things up using the Azure portal: [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs)
+The Azure Monitor agent collects monitoring data from the guest operating system of [supported infrastructure](#supported-resource-types) and delivers it to Azure Monitor. This article provides an overview of the Azure Monitor agent and includes information on how to install it and configure data collection.
+If you're new to Azure Monitor, we recommend that you use the Azure Monitor agent.
+
+For an introductory video that explains this new agent and includes a quick demo of how to set things up by using the Azure portal, see [ITOps Talk: Azure Monitor Agent](https://www.youtube.com/watch?v=f8bIrFU8tCs).
+ ## Relationship to other agents
-Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor.
+
+Eventually, the Azure Monitor agent will replace the following legacy monitoring agents that are currently used by Azure Monitor:
- [Log Analytics agent](./log-analytics-agent.md): Sends data to a Log Analytics workspace and supports VM insights and monitoring solutions. - [Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md): Sends data to Azure Monitor Metrics (Linux only). - [Diagnostics extension](./diagnostics-extension-overview.md): Sends data to Azure Monitor Metrics (Windows only), Azure Event Hubs, and Azure Storage.
-**Currently**, the Azure Monitor agent consolidates features from the Telegraf agent and Log Analytics agent, with [a few limitations](#current-limitations).
-In future, it will also consolidate features from the Diagnostic extensions.
+Currently, the Azure Monitor agent consolidates features from the Telegraf agent and Log Analytics agent, with [a few limitations](#current-limitations). See the [migration guidance](azure-monitor-agent-migration.md).
+In the future, it will also consolidate features from the Diagnostics extension.
In addition to consolidating this functionality into a single agent, the Azure Monitor agent provides the following benefits over the existing agents: - **Cost savings:**
- - Granular targeting via [Data Collection Rules](../essentials/data-collection-rule-overview.md) to collect specific data types from specific machines, as compared to the "all or nothing" mode that Log Analytics agent supports
- - Use XPath queries to filter Windows events that get collected. This helps further reduce ingestion and storage costs.
-- **Simplified management of data collection:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (i.e. "multi-homing") and/or other [supported destinations](#data-sources-and-destinations). Additionally, every action across the data collection lifecycle, from onboarding to deployment to updates, is significantly easier, scalable, and centralized (in Azure) using data collection rules-- **Management of dependent solutions or -- **Security and performance** - For authentication and security, it uses Managed Identity (for virtual machines) and AAD device tokens (for clients) which are both much more secure and 'hack proof' than certificates or workspace keys that legacy agents use. This agent performs better at higher EPS (events per second upload rate) compared to legacy agents.-
+ - Granular targeting via [data collection rules](../essentials/data-collection-rule-overview.md) to collect specific data types from specific machines, as compared to the "all or nothing" mode that Log Analytics agent supports.
+  - XPath queries to filter the Windows events that get collected, which helps to further reduce ingestion and storage costs.
+- **Simplified management of data collection:** Send data from Windows and Linux VMs to multiple Log Analytics workspaces (for example, "multihoming") or other [supported destinations](#data-sources-and-destinations). Every action across the data collection lifecycle, from onboarding to deployment to updates, is easier, scalable, and centralized in Azure by using data collection rules.
+- **Management of dependent solutions or services:** The Azure Monitor agent uses a new method of handling extensibility that's more transparent and controllable than management packs and Linux plug-ins in the legacy Log Analytics agents. This management experience is identical for machines in Azure or on-premises/other clouds via Azure Arc, at no added cost.
+- **Security and performance:** For authentication and security, the Azure Monitor agent uses Managed Identity for virtual machines and Azure Active Directory device tokens for clients. Both technologies are much more secure and "hack proof" than certificates or workspace keys that legacy agents use. This agent performs better at higher events-per-second upload rates compared to legacy agents.
### Current limitations+ Not all Log Analytics solutions are supported yet. [View supported features and services](#supported-services-and-features). ### Changes in data collection
-The methods for defining data collection for the existing agents are distinctly different from each other. Each method has challenges that are addressed with the Azure Monitor agent.
+
+The methods for defining data collection for the existing agents are distinctly different from each other. Each method has challenges that are addressed with the Azure Monitor agent:
- The Log Analytics agent gets its configuration from a Log Analytics workspace. It's easy to centrally configure but difficult to define independent definitions for different virtual machines. It can only send data to a Log Analytics workspace. - Diagnostic extension has a configuration for each virtual machine. It's easy to define independent definitions for different virtual machines but difficult to centrally manage. It can only send data to Azure Monitor Metrics, Azure Event Hubs, or Azure Storage. For Linux agents, the open-source Telegraf agent is required to send data to Azure Monitor Metrics.
-The Azure Monitor agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They're independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments. See [Configure data collection for the Azure Monitor agent](data-collection-rule-azure-monitor-agent.md).
+The Azure Monitor agent uses [data collection rules](../essentials/data-collection-rule-overview.md) to configure data to collect from each agent. Data collection rules enable manageability of collection settings at scale while still enabling unique, scoped configurations for subsets of machines. They're independent of the workspace and independent of the virtual machine, which allows them to be defined once and reused across machines and environments.
-## Should I switch to the Azure Monitor agent?
-To start transitioning your VMs off the current agents to the new agent, consider the following factors:
+For more information, see [Configure data collection for the Azure Monitor agent](data-collection-rule-azure-monitor-agent.md).
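To make data collection rules concrete, here's a hypothetical sketch that collects only Windows security logon events (event ID 4624), filtered with an XPath query, and sends them to a single workspace. The rule name, file name, event ID, and resource IDs are placeholders, and the exact JSON shape and `New-AzDataCollectionRule` parameters can vary by Az.Monitor module version, so treat this as an outline rather than a definitive template.

```powershell
# Hypothetical DCR: collect Windows security logon events (ID 4624) and send them to one workspace.
# All angle-bracket values are placeholders.
$dcrJson = @'
{
  "location": "<region>",
  "properties": {
    "dataSources": {
      "windowsEventLogs": [
        {
          "name": "securityLogonEvents",
          "streams": [ "Microsoft-Event" ],
          "xPathQueries": [ "Security!*[System[(EventID=4624)]]" ]
        }
      ]
    },
    "destinations": {
      "logAnalytics": [
        {
          "name": "centralWorkspace",
          "workspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft-Event" ],
        "destinations": [ "centralWorkspace" ]
      }
    ]
  }
}
'@

# Write the definition to a file and create the rule with the Az.Monitor module.
Set-Content -Path .\dcr-security-logons.json -Value $dcrJson
New-AzDataCollectionRule -ResourceGroupName <resource-group-name> -RuleName dcr-security-logons -Location <region> -RuleFile .\dcr-security-logons.json
```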
-- **Environment requirements:** The Azure Monitor agent supports [these operating systems](./agents-overview.md#supported-operating-systems) today. Support for future operating system versions, environment support, and networking requirements will only be provided in this new agent. If the Azure Monitor agent supports your current environment, start transitioning to it.--- **Current and new feature requirements:** The Azure Monitor agent introduces several new capabilities, such as filtering, scoping, and multi-homing. But it isn't at parity yet with the current agents for other functionality. View [current limitations](#current-limitations) and [supported solutions](#supported-services-and-features).
+## Coexistence with other agents
- That said, most new capabilities in Azure Monitor will be made available only with the Azure Monitor agent. Review whether the Azure Monitor agent has the features you require and if there are some features that you can temporarily do without to get other important features in the new agent.
+The Azure Monitor agent can coexist (run side by side on the same machine) with the legacy Log Analytics agents so that you can continue to use their existing functionality during evaluation or migration. This means you can begin the transition even with the limitations, but you must review the following points carefully:
- If the Azure Monitor agent has all the core capabilities you require, start transitioning to it. If there are critical features that you require, continue with the current agent until the Azure Monitor agent reaches parity.
-- **Tolerance for rework:** If you're setting up a new environment with resources such as deployment scripts and onboarding templates, assess the effort involved. If the setup will take a significant amount of work, consider setting up your new environment with the new agent as it's now generally available.
+- Be careful in collecting duplicate data because it could skew query results and affect downstream features like alerts, dashboards, or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents.
- Azure Monitor's Log Analytics agent is retiring on 31 August 2024. The current agents will be supported until the retirement date.
+ If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data. As a result, ensure you're not collecting the same data from both agents. If you are, ensure they're *collecting from different machines* or *going to separate destinations*.
+- Besides data duplication, this scenario would also generate more charges for data ingestion and retention.
+- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space, and network bandwidth.
-## Coexistence with other agents
-The Azure Monitor agent can coexist (run side by side on the same machine) with the legacy Log Analytics agents so that you can continue to use their existing functionality during evaluation or migration. While this allows you to begin transition given the limitations, you must review the below points carefully:
-- Be careful in collecting duplicate data because it could skew query results and affect downstream features like alerts, dashboards or workbooks. For example, VM insights uses the Log Analytics agent to send performance data to a Log Analytics workspace. You might also have configured the workspace to collect Windows events and Syslog events from agents. If you install the Azure Monitor agent and create a data collection rule for these same events and performance data, it will result in duplicate data. As such, ensure you're not collecting the same data from both agents. If you are, ensure they're **collecting from different machines** or **going to separate destinations**.-- Besides data duplication, this would also generate more charges for data ingestion and retention.-- Running two telemetry agents on the same machine would result in double the resource consumption, including but not limited to CPU, memory, storage space and network bandwidth. > [!NOTE]
-> When using both agents during evaluation or migration, you can use the **'Category'** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for 'Azure Monitor Agent'.
+> When you use both agents during evaluation or migration, you can use the **Category** column of the [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat) table in your Log Analytics workspace, and filter for **Azure Monitor Agent**.
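For example, here's a short sketch that lists machines already reporting through the Azure Monitor agent. It assumes the Az.OperationalInsights module; the workspace ID is a placeholder.

```powershell
# List computers whose heartbeats are sent by the Azure Monitor agent, based on the Category column.
$query = 'Heartbeat | where Category == "Azure Monitor Agent" | distinct Computer'
Invoke-AzOperationalInsightsQuery -WorkspaceId "<log-analytics-workspace-id>" -Query $query
```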
## Supported resource types
-| Resource type | Installation method | Additional information |
+| Resource type | Installation method | More information |
|:|:|:|
-| Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent using Azure extension framework |
-| On-premises servers (Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing [Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent using Azure extension framework, provided for on-premises by first installing [Arc agent](../../azure-arc/servers/deployment-options.md) |
-| Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer |
-| Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent using a Windows MSI installer. The installs works on laptops but the agent is **not optimized yet** for battery, network consumption |
+| Virtual machines, scale sets | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) | Installs the agent by using the Azure extension framework. |
+| On-premises servers (Azure Arc-enabled servers) | [Virtual machine extension](./azure-monitor-agent-manage.md#virtual-machine-extension-details) (after installing the [Azure Arc agent](../../azure-arc/servers/deployment-options.md)) | Installs the agent by using the Azure extension framework. For on-premises servers, install the [Azure Arc agent](../../azure-arc/servers/deployment-options.md) first. |
+| Windows 10, 11 desktops, workstations | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. |
+| Windows 10, 11 laptops | [Client installer (preview)](./azure-monitor-agent-windows-client.md) | Installs the agent by using a Windows MSI installer. The installer works on laptops, but the agent is *not optimized yet* for battery or network consumption. |
## Supported regions
-Azure Monitor agent is available in all public regions and Azure Government clouds. It is not yet supported in Air-gapped clouds. See here for [product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
+
+Azure Monitor agent is available in all public regions and Azure Government clouds. It's not yet supported in air-gapped clouds. For more information, see [Product availability by region](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&rar=true&regions=all).
## Supported operating systems+ For a list of the Windows and Linux operating system versions that are currently supported by the Azure Monitor agent, see [Supported operating systems](agents-overview.md#supported-operating-systems). ## Data sources and destinations+ The following table lists the types of data you can currently collect with the Azure Monitor agent by using data collection rules and where you can send that data. For a list of insights, solutions, and other solutions that use the Azure Monitor agent to collect other kinds of data, see [What is monitored by Azure Monitor?](../monitor-reference.md). The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log Analytics workspace supporting Azure Monitor Logs.
The Azure Monitor agent sends data to Azure Monitor Metrics (preview) or a Log A
| Performance | Azure Monitor Metrics (preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads | | Windows event logs | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system | | Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system |
-| Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine. |
+| Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine |
-<sup>1</sup> [Click here](../essentials/metrics-custom-overview.md#quotas-and-limits) to review other limitations of using Azure Monitor Metrics. On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.
-<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including **Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee and CEF (Common Event Format)**.
+<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [Quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v1.10.9.0 or higher.<br>
+<sup>2</sup> Azure Monitor Linux Agent v1.15.2 or higher supports syslog RFC formats including Cisco Meraki, Cisco ASA, Cisco FTD, Sophos XG, Juniper Networks, Corelight Zeek, CipherTrust, NXLog, McAfee, and Common Event Format (CEF).
## Supported services and features+ The following table shows the current support for the Azure Monitor agent with other Azure services. | Azure service | Current support | More information | |:|:|:| | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Private preview | [Sign-up link](https://aka.ms/AMAgent) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows DNS logs: Private preview</li><li>Linux Syslog CEF (Common Event Format): Private preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
+| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows DNS logs: Private preview</li><li>Linux Syslog CEF: Private preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | <ul><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>[Sign-up link](https://aka.ms/AMAgent)</li><li>No sign-up needed </li><li>No sign-up needed</li></ul> |
The following table shows the current support for the Azure Monitor agent with Azure Monitor features.
The following table shows the current support for the Azure Monitor agent with A
| Solution | Current support | More information | |:|:|:| | [Change Tracking](../../automation/change-tracking/overview.md) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud Private Preview. | [Sign-up link](https://aka.ms/AMAgent) |
-| [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 (Private Preview) that doesn't require an agent. | [Sign-up link](https://www.yammer.com/azureadvisors/threads/1064001355087872) |
+| [Update Management](../../automation/update-management/overview.md) | Use Update Management v2 - Public preview | [Update management center (preview) documentation](/azure/update-center/) |
## Costs
-There's no cost for the Azure Monitor agent, but you might incur charges for the data ingested. For details on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+There's no cost for the Azure Monitor agent, but you might incur charges for the data ingested. For information on Log Analytics data collection and retention and for customer metrics, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
## Security+ The Azure Monitor agent doesn't require any keys but instead requires a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). You must have a system-assigned managed identity enabled on each virtual machine before you deploy the agent. ## Networking
-The Azure Monitor agent supports Azure service tags (both *AzureMonitor* and *AzureResourceManager* tags are required). It supports connecting via **direct proxies, Log Analytics gateway, and private links** as described below.
+
+The Azure Monitor agent supports Azure service tags. Both *AzureMonitor* and *AzureResourceManager* tags are required. It supports connecting via *direct proxies, Log Analytics gateway, and private links* as described in the following sections.
### Firewall requirements+ | Cloud |Endpoint |Purpose |Port |Direction |Bypass HTTPS inspection| |||||--|--| | Azure Commercial |global.handler.control.monitor.azure.com |Access control service|Port 443 |Outbound|Yes | | Azure Commercial |`<virtual-machine-region-name>`.handler.control.monitor.azure.com |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes | | Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure Government |global.handler.control.monitor.azure.us |Access control service|Port 443 |Outbound|Yes |
-| Azure Government |`<virtual-machine-region-name>`.handler.control.monitor.azure.us |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure Government |`<log-analytics-workspace-id>`.ods.opinsights.azure.us |Ingest logs data |Port 443 |Outbound|Yes |
-| Azure China |global.handler.control.monitor.azure.cn |Access control service|Port 443 |Outbound|Yes |
-| Azure China |`<virtual-machine-region-name>`.handler.control.monitor.azure.cn |Fetch data collection rules for specific machine |Port 443 |Outbound|Yes |
-| Azure China |`<log-analytics-workspace-id>`.ods.opinsights.azure.cn |Ingest logs data |Port 443 |Outbound|Yes |
-
+| Azure Commercial | management.azure.com | Only needed if sending timeseries data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes |
+| Azure Government | Replace '.com' above with '.us' | Same as above | Same as above | Same as above| Same as above |
+| Azure China | Replace '.com' above with '.cn' | Same as above | Same as above | Same as above| Same as above |
-If using private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint)
+If you use private links on the agent, you must also add the [DCE endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).
### Proxy configuration
-If the machine connects through a proxy server to communicate over the internet, review requirements below to understand the network configuration required.
+
+If the machine connects through a proxy server to communicate over the internet, review the following requirements to understand the network configuration required.
The Azure Monitor agent extensions for Windows and Linux can communicate either through a proxy server or a [Log Analytics gateway](./gateway.md) to Azure Monitor by using the HTTPS protocol. Use it for Azure virtual machines, Azure virtual machine scale sets, and Azure Arc for servers. Use the extensions settings for configuration as described in the following steps. Both anonymous and basic authentication by using a username and password are supported. > [!IMPORTANT]
-> Proxy configuration is not supported for [Azure Monitor Metrics (preview)](../essentials/metrics-custom-overview.md) as a destination. As such, if you are sending metrics to this destination, it will use the public internet without any proxy.
+> Proxy configuration is not supported for [Azure Monitor Metrics (preview)](../essentials/metrics-custom-overview.md) as a destination. If you're sending metrics to this destination, it will use the public internet without any proxy.
1. Use this flowchart to determine the values of the *settings* and *protectedSettings* parameters first.
- ![Flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
+ ![Diagram that shows a flowchart to determine the values of settings and protectedSettings parameters when you enable the extension.](media/azure-monitor-agent-overview/proxy-flowchart.png)
-2. After the values for the *settings* and *protectedSettings* parameters are determined, **provide these additional parameters** when you deploy the Azure Monitor agent by using PowerShell commands. Refer to the following examples.
+1. After the values for the *settings* and *protectedSettings* parameters are determined, *provide these additional parameters* when you deploy the Azure Monitor agent by using PowerShell commands. Refer to the following examples.
# [Windows VM](#tab/PowerShellWindows)
Set-AzVMExtension -ExtensionName AzureMonitorWindowsAgent -ExtensionType AzureMo
``` # [Linux VM](#tab/PowerShellLinux)+ ```powershell $settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}'; $protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[pass
Set-AzVMExtension -ExtensionName AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion 1.5 -SettingString $settingsString -ProtectedSettingString $protectedSettingsString ```
-# [Windows Arc enabled server](#tab/PowerShellWindowsArc)
+# [Windows Arc-enabled server](#tab/PowerShellWindowsArc)
+ ```powershell $settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}'; $protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
$protectedSettingsString = '{"proxy":{"username":"[username]","password": "[pass
New-AzConnectedMachineExtension -Name AzureMonitorWindowsAgent -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location> -Setting $settingsString -ProtectedSetting $protectedSettingsString ```
-# [Linux Arc enabled server](#tab/PowerShellLinuxArc)
+# [Linux Arc-enabled server](#tab/PowerShellLinuxArc)
+ ```powershell $settingsString = '{"proxy":{"mode":"application","address":"http://[address]:[port]","auth": true}}'; $protectedSettingsString = '{"proxy":{"username":"[username]","password": "[password]"}}';
New-AzConnectedMachineExtension -Name AzureMonitorLinuxAgent -ExtensionType Azur
### Log Analytics gateway configuration
-1. Follow the instructions above to configure proxy settings on the agent and provide the IP address and port number corresponding to the gateway server. If you have deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
-2. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
+
+1. Follow the preceding instructions to configure proxy settings on the agent and provide the IP address and port number that corresponds to the gateway server. If you've deployed multiple gateway servers behind a load balancer, the agent proxy configuration is the virtual IP address of the load balancer instead.
+1. Add the **configuration endpoint URL** to fetch data collection rules to the allowlist for the gateway
`Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com`
- `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`
- (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
-3. Add the **data ingestion endpoint URL** to the allowlist for the gateway
- `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`
-3. Restart the **OMS Gateway** service to apply the changes
+ `Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com`.
+ (If you're using private links on the agent, you must also add the [data collection endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint).)
+1. Add the **data ingestion endpoint URL** to the allowlist for the gateway
+ `Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com`.
+1. Restart the **OMS Gateway** service to apply the changes
`Stop-Service -Name <gateway-name>`
- `Start-Service -Name <gateway-name>`
+ `Start-Service -Name <gateway-name>`.
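Taken together, the gateway configuration amounts to a short sequence like the following sketch. All angle-bracket values are placeholders.

```powershell
# Allow the Azure Monitor agent control and ingestion endpoints through the Log Analytics gateway,
# then restart the gateway service so the changes take effect.
Add-OMSGatewayAllowedHost -Host global.handler.control.monitor.azure.com
Add-OMSGatewayAllowedHost -Host <gateway-server-region-name>.handler.control.monitor.azure.com
Add-OMSGatewayAllowedHost -Host <log-analytics-workspace-id>.ods.opinsights.azure.com
Stop-Service -Name <gateway-name>
Start-Service -Name <gateway-name>
```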
### Private link configuration
-To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
+
+To configure the agent to use private links for network communications with Azure Monitor, follow instructions to [enable network isolation](./azure-monitor-agent-data-collection-endpoint.md#enable-network-isolation-for-the-azure-monitor-agent) by using [data collection endpoints](azure-monitor-agent-data-collection-endpoint.md).
## Next steps
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md
Title: Collect text logs with Log Analytics agent in Azure Monitor
-description: Azure Monitor can collect events from text files on both Windows and Linux computers. This article describes how to define a new custom log and details of the records they create in Azure Monitor.
+ Title: Collect text logs with the Log Analytics agent in Azure Monitor
+description: Azure Monitor can collect events from text files on both Windows and Linux computers. This article describes how to define a new custom log and details of the records they create in Azure Monitor.
-# Collect text logs with Log Analytics agent in Azure Monitor
+# Collect text logs with the Log Analytics agent in Azure Monitor
-The Custom Logs data source for the Log Analytics agent in Azure Monitor allows you to collect events from text files on both Windows and Linux computers. Many applications log information to text files instead of standard logging services such as Windows Event log or Syslog. Once collected, you can either parse the data into individual fields in your queries or extract the data during collection to individual fields.
+The Custom Logs data source for the Log Analytics agent in Azure Monitor allows you to collect events from text files on both Windows and Linux computers. Many applications log information to text files instead of standard logging services, such as Windows Event log or Syslog. After the data is collected, you can either parse it into individual fields in your queries or extract it during collection to individual fields.
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
-![Custom log collection](media/data-sources-custom-logs/overview.png)
+![Diagram that shows custom log collection.](media/data-sources-custom-logs/overview.png)
-The log files to be collected must match the following criteria.
+The log files to be collected must match the following criteria:
-- The log must either have a single entry per line or use a timestamp matching one of the following formats at the start of each entry.
+- The log must either have a single entry per line or use a timestamp matching one of the following formats at the start of each entry:
YYYY-MM-DD HH:MM:SS<br>M/D/YYYY HH:MM:SS AM/PM<br>Mon DD, YYYY HH:MM:SS<br />yyMMdd HH:mm:ss<br />ddMMyy HH:mm:ss<br />MMM d hh:mm:ss<br />dd/MMM/yyyy:HH:mm:ss zzz<br />yyyy-MM-ddTHH:mm:ssK -- The log file must not allow circular logging, log rotation where the file is overwritten with new entries, or the file is renamed and the same file name is reused for continued logging. -- The log file must use ASCII or UTF-8 encoding. Other formats such as UTF-16 are not supported.-- For Linux, time zone conversion is not supported for time stamps in the logs.-- As a best practice, the log file should include the date time that it was created to prevent log rotation overwriting or renaming.
+- The log file must not allow circular logging. This behavior is log rotation where the file is overwritten with new entries or the file is renamed and the same file name is reused for continued logging.
+- The log file must use ASCII or UTF-8 encoding. Other formats such as UTF-16 aren't supported.
+- For Linux, time zone conversion isn't supported for time stamps in the logs.
+- As a best practice, the log file should include the date and time that it was created to prevent log rotation overwriting or renaming.
>[!NOTE]
-> If there are duplicate entries in the log file, Azure Monitor will collect them. However, the query results will be inconsistent where the filter results show more events than the result count. It will be important that you validate the log to determine if the application that creates it is causing this behavior and address it if possible before creating the custom log collection definition.
->
+> If there are duplicate entries in the log file, Azure Monitor will collect them. The query results that are generated will be inconsistent. The filter results will show more events than the result count. You must validate the log to determine if the application that creates it is causing this behavior. Address the issue, if possible, before you create the custom log collection definition.
->[!NOTE]
-> A Log Analytics workspace supports the following limits:
->
-> * Only 500 custom logs can be created.
-> * A table only supports up to 500 columns.
-> * The maximum number of characters for the column name is 500.
->
+A Log Analytics workspace supports the following limits:
+
+* Only 500 custom logs can be created.
+* A table only supports up to 500 columns.
+* The maximum number of characters for the column name is 500.
>[!IMPORTANT] >Custom log collection requires that the application writing the log file flushes the log content to the disk periodically. This is because the custom log collection relies on filesystem change notifications for the log file being tracked.
-## Defining a custom log
-Use the following procedure to define a custom log file. Scroll to the end of this article for a walkthrough of a sample of adding a custom log.
+## Define a custom log
+
+Use the following procedure to define a custom log file. Scroll to the end of this article for a walkthrough of a sample of adding a custom log.
+
+### Open the Custom Log wizard
-### Step 1. Open the Custom Log Wizard
-The Custom Log Wizard runs in the Azure portal and allows you to define a new custom log to collect.
+The Custom Log wizard runs in the Azure portal and allows you to define a new custom log to collect.
1. In the Azure portal, select **Log Analytics workspaces** > your workspace > **Settings**.
-2. Click on **Custom logs**.
-3. By default, all configuration changes are automatically pushed to all agents. For Linux agents, a configuration file is sent to the Fluentd data collector.
-4. Click **Add+** to open the Custom Log Wizard.
+1. Select **Custom logs**.
+1. By default, all configuration changes are automatically pushed to all agents. For Linux agents, a configuration file is sent to the Fluentd data collector.
+1. Select **Add** to open the Custom Log wizard.
-### Step 2. Upload and parse a sample log
-You start by uploading a sample of the custom log. The wizard will parse and display the entries in this file for you to validate. Azure Monitor will use the delimiter that you specify to identify each record.
+### Upload and parse a sample log
+
+To start, upload a sample of the custom log. The wizard will parse and display the entries in this file for you to validate. Azure Monitor will use the delimiter that you specify to identify each record.
-**New Line** is the default delimiter and is used for log files that have a single entry per line. If the line starts with a date and time in one of the available formats, then you can specify a **Timestamp** delimiter which supports entries that span more than one line.
+**New Line** is the default delimiter and is used for log files that have a single entry per line. If the line starts with a date and time in one of the available formats, you can specify a **Timestamp** delimiter, which supports entries that span more than one line.
-If a timestamp delimiter is used, then the TimeGenerated property of each record stored in Azure Monitor will be populated with the date/time specified for that entry in the log file. If a new line delimiter is used, then TimeGenerated is populated with date and time that Azure Monitor collected the entry.
+If a timestamp delimiter is used, the TimeGenerated property of each record stored in Azure Monitor will be populated with the date and time specified for that entry in the log file. If a new line delimiter is used, TimeGenerated is populated with the date and time when Azure Monitor collected the entry.
-1. Click **Browse** and browse to a sample file. Note that this may button may be labeled **Choose File** in some browsers.
-2. Click **Next**.
-3. The Custom Log Wizard will upload the file and list the records that it identifies.
-4. Change the delimiter that is used to identify a new record and select the delimiter that best identifies the records in your log file.
-5. Click **Next**.
+1. Select **Browse** and browse to a sample file. This button might be labeled **Choose File** in some browsers.
+1. Select **Next**.
+1. The Custom Log wizard uploads the file and lists the records that it identifies.
+1. Change the delimiter that's used to identify a new record. Select the delimiter that best identifies the records in your log file.
+1. Select **Next**.
-### Step 3. Add log collection paths
-You must define one or more paths on the agent where it can locate the custom log. You can either provide a specific path and name for the log file, or you can specify a path with a wildcard for the name. This supports applications that create a new file each day or when one file reaches a certain size. You can also provide multiple paths for a single log file.
+### Add log collection paths
-For example, an application might create a log file each day with the date included in the name as in log20100316.txt. A pattern for such a log might be *log\*.txt* which would apply to any log file following the applicationΓÇÖs naming scheme.
+You must define one or more paths on the agent where it can locate the custom log. You can either provide a specific path and name for the log file or you can specify a path with a wildcard for the name. This step supports applications that create a new file each day or when one file reaches a certain size. You can also provide multiple paths for a single log file.
+
+For example, an application might create a log file each day with the date included in the name as in log20100316.txt. A pattern for such a log might be *log\*.txt*, which would apply to any log file following the application's naming scheme.
The following table provides examples of valid patterns to specify different log files.

| Description | Path |
|: |: |
-| All files in *C:\Logs* with .txt extension on Windows agent |C:\Logs\\\*.txt |
-| All files in *C:\Logs* with a name starting with log and a .txt extension on Windows agent |C:\Logs\log\*.txt |
-| All files in */var/log/audit* with .txt extension on Linux agent |/var/log/audit/*.txt |
-| All files in */var/log/audit* with a name starting with log and a .txt extension on Linux agent |/var/log/audit/log\*.txt |
+| All files in *C:\Logs* with .txt extension on the Windows agent |C:\Logs\\\*.txt |
+| All files in *C:\Logs* with a name starting with log and a .txt extension on the Windows agent |C:\Logs\log\*.txt |
+| All files in */var/log/audit* with .txt extension on the Linux agent |/var/log/audit/*.txt |
+| All files in */var/log/audit* with a name starting with log and a .txt extension on the Linux agent |/var/log/audit/log\*.txt |
+
+1. Select Windows or Linux to specify which path format you're adding.
+1. Enter the path and select the **+** button.
+1. Repeat the process for any more paths.
+
+### Provide a name and description for the log
-1. Select Windows or Linux to specify which path format you are adding.
-2. Type in the path and click the **+** button.
-3. Repeat the process for any additional paths.
+The name that you specify will be used for the log type as described earlier. It will always end with _CL to distinguish it as a custom log.
-### Step 4. Provide a name and description for the log
-The name that you specify will be used for the log type as described above. It will always end with _CL to distinguish it as a custom log.
+1. Enter a name for the log. The **\_CL** suffix is automatically provided.
+1. Add an optional **Description**.
+1. Select **Next** to save the custom log definition.
-1. Type in a name for the log. The **\_CL** suffix is automatically provided.
-2. Add an optional **Description**.
-3. Click **Next** to save the custom log definition.
+### Validate that the custom logs are being collected
-### Step 5. Validate that the custom logs are being collected
-It may take up to an hour for the initial data from a new custom log to appear in Azure Monitor. It will start collecting entries from the logs found in the path you specified from the point that you defined the custom log. It will not retain the entries that you uploaded during the custom log creation, but it will collect already existing entries in the log files that it locates.
+It might take up to an hour for the initial data from a new custom log to appear in Azure Monitor. Azure Monitor will start collecting entries from the logs found in the path you specified from the point that you defined the custom log. It won't retain the entries that you uploaded during the custom log creation. It will collect already existing entries in the log files that it locates.
-Once Azure Monitor starts collecting from the custom log, its records will be available with a log query. Use the name that you gave the custom log as the **Type** in your query.
+After Azure Monitor starts collecting from the custom log, its records will be available with a log query. Use the name that you gave the custom log as the **Type** in your query.
> [!NOTE]
-> If the RawData property is missing from the query, you may need to close and reopen your browser.
+> If the RawData property is missing from the query, you might need to close and reopen your browser.
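As a minimal sketch (not part of the original article), the following PowerShell uses the Az.OperationalInsights module to run a log query against a custom log. The workspace ID, the *MyApp_CL* type name, and the `take 10` clause are placeholders for illustration only.

```powershell
# Requires the Az.OperationalInsights module and an authenticated session (Connect-AzAccount).
$workspaceId = "00000000-0000-0000-0000-000000000000"   # placeholder workspace (customer) ID

# Query the custom log by the type name defined in the wizard; the _CL suffix is added automatically.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query "MyApp_CL | take 10"
$result.Results | Format-Table TimeGenerated, RawData
```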
+
+### Parse the custom log entries
-### Step 6. Parse the custom log entries
-The entire log entry will be stored in a single property called **RawData**. You will most likely want to separate the different pieces of information in each entry into individual properties for each record. Refer to [Parse text data in Azure Monitor](../logs/parse-text.md) for options on parsing **RawData** into multiple properties.
+The entire log entry will be stored in a single property called **RawData**. You'll most likely want to separate the different pieces of information in each entry into individual properties for each record. For options on parsing **RawData** into multiple properties, see [Parse text data in Azure Monitor](../logs/parse-text.md).
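As an illustrative sketch only, the following query string shows one way to split comma-delimited entries (like the sample entries later in this article) into separate columns. The column names, the 20-character timestamp offset, and the workspace ID are assumptions made for the example, not part of the article.

```powershell
$workspaceId = "00000000-0000-0000-0000-000000000000"   # placeholder workspace (customer) ID

# Split entries such as "2019-08-27 01:34:36 207,Success,..." into Code, Status, and Message columns.
$query = @"
MyApp_CL
| extend Fields = split(substring(RawData, 20), ',')
| extend Code = toint(Fields[0]), Status = tostring(Fields[1]), Message = tostring(Fields[2])
| project TimeGenerated, Code, Status, Message
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
```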
+
+## Remove a custom log
-## Removing a custom log
Use the following process in the Azure portal to remove a custom log that you previously defined.

1. From the **Data** menu in the **Advanced Settings** for your workspace, select **Custom Logs** to list all your custom logs.
-2. Click **Remove** next to the custom log to remove.
+1. Select **Remove** next to the custom log to remove the log.
## Data collection
-Azure Monitor will collect new entries from each custom log approximately every 5 minutes. The agent will record its place in each log file that it collects from. If the agent goes offline for a period of time, then Azure Monitor will collect entries from where it last left off, even if those entries were created while the agent was offline.
-The entire contents of the log entry are written to a single property called **RawData**. See [Parse text data in Azure Monitor](../logs/parse-text.md) for methods to parse each imported log entry into multiple properties.
+Azure Monitor collects new entries from each custom log approximately every 5 minutes. The agent records its place in each log file that it collects from. If the agent goes offline for a period of time, Azure Monitor collects entries from where it last left off, even if those entries were created while the agent was offline.
+
+The entire contents of the log entry are written to a single property called **RawData**. For methods to parse each imported log entry into multiple properties, see [Parse text data in Azure Monitor](../logs/parse-text.md).
## Custom log record properties
+
Custom log records have a type with the log name that you provide and the properties in the following table.

| Property | Description |
|: |: |
-| TimeGenerated |Date and time that the record was collected by Azure Monitor. If the log uses a time-based delimiter then this is the time collected from the entry. |
+| TimeGenerated |Date and time that the record was collected by Azure Monitor. If the log uses a time-based delimiter, this is the time collected from the entry. |
| SourceSystem |Type of agent the record was collected from. <br> OpsManager – Windows agent, either direct connect or System Center Operations Manager <br> Linux – All Linux agents |
-| RawData |Full text of the collected entry. You will most likely want to [parse this data into individual properties](../logs/parse-text.md). |
-| ManagementGroupName |Name of the management group for System Center Operations Manage agents. For other agents, this is AOI-\<workspace ID\> |
-
+| RawData |Full text of the collected entry. You'll most likely want to [parse this data into individual properties](../logs/parse-text.md). |
+| ManagementGroupName |Name of the management group for System Center Operations Manager agents. For other agents, this name is AOI-\<workspace ID\>. |
## Sample walkthrough of adding a custom log
-The following section walks through an example of creating a custom log. The sample log being collected has a single entry on each line starting with a date and time and then comma-delimited fields for code, status, and message. Several sample entries are shown below.
+
+The following section walks through an example of creating a custom log. The sample log being collected has a single entry on each line starting with a date and time and then comma-delimited fields for code, status, and message. Several sample entries are shown.
```output
2019-08-27 01:34:36 207,Success,Client 05a26a97-272a-4bc9-8f64-269d154b0e39 connected
```

### Upload and parse a sample log
-We provide one of the log files and can see the events that it will be collecting. In this case New Line is a sufficient delimiter. If a single entry in the log could span multiple lines though, then a timestamp delimiter would need to be used.
-![Upload and parse a sample log](media/data-sources-custom-logs/delimiter.png)
+We provide one of the log files and can see the events that it will be collecting. In this case, **New line** is a sufficient delimiter. If a single entry in the log could span multiple lines though, a timestamp delimiter would need to be used.
+
+![Screenshot that shows uploading and parsing a sample log.](media/data-sources-custom-logs/delimiter.png)
### Add log collection paths
-The log files will be located in *C:\MyApp\Logs*. A new file will be created each day with a name that includes the date in the pattern *appYYYYMMDD.log*. A sufficient pattern for this log would be *C:\MyApp\Logs\\\*.log*.
-![Log collection path](media/data-sources-custom-logs/collection-path.png)
+The log files will be located in *C:\MyApp\Logs*. A new file will be created each day with a name that includes the date in the pattern *appYYYYMMDD.log*. A sufficient pattern for this log would be *C:\MyApp\Logs\\\*.log*.
+
+![Screenshot that shows adding a log collection path.](media/data-sources-custom-logs/collection-path.png)
### Provide a name and description for the log
+
We use a name of *MyApp_CL* and type in a **Description**.
-![Log name](media/data-sources-custom-logs/log-name.png)
+![Screenshot that shows adding a log name.](media/data-sources-custom-logs/log-name.png)
### Validate that the custom logs are being collected
-We use a simple query of *MyApp_CL* to return all records from the collected log.
-![Log query with no custom fields](media/data-sources-custom-logs/query-01.png)
+We use a simple query of *MyApp_CL* to return all records from the collected log.
+![Screenshot that shows a log query with no custom fields.](media/data-sources-custom-logs/query-01.png)
## Alternatives to custom logs
-While custom logs are useful if your data fits the criteria listed above, there are cases such as the following where you need another strategy:
-- The data doesn't fit the required structure such as having the timestamp in a different format.
+While custom logs are useful if your data fits the criteria listed, there are cases where you need another strategy:
+
+- The data doesn't fit the required structure, such as having the timestamp in a different format.
- The log file doesn't adhere to requirements such as file encoding or an unsupported folder structure.
-- The data requires preprocessing or filtering before collection.
+- The data requires preprocessing or filtering before collection.
In the cases where your data can't be collected with custom logs, consider the following alternate strategies: 
-- Use a custom script or other method to write data to [Windows Events](data-sources-windows-events.md) or [Syslog](data-sources-syslog.md) which are collected by Azure Monitor. 
-- Send the data directly to Azure Monitor using [HTTP Data Collector API](../logs/data-collector-api.md).
+- Use a custom script or other method to write data to [Windows Events](data-sources-windows-events.md) or [Syslog](data-sources-syslog.md), which are collected by Azure Monitor.
+- Send the data directly to Azure Monitor by using [HTTP Data Collector API](../logs/data-collector-api.md) (a hedged sketch follows this list).
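As a hedged sketch of the second alternative, the following PowerShell posts a JSON record to the HTTP Data Collector API described in the linked article. The endpoint, header names, and signature format follow that documentation; the workspace ID, shared key, and record contents are placeholders.

```powershell
$customerId = "00000000-0000-0000-0000-000000000000"   # placeholder workspace (customer) ID
$sharedKey  = "<primary-key>"                          # placeholder workspace primary key
$logType    = "MyApp"                                  # becomes MyApp_CL in the workspace
$body       = '[{"Code":207,"Status":"Success","Message":"Client connected"}]'

# Build the HMAC-SHA256 signature required by the Data Collector API.
$method      = "POST"
$contentType = "application/json"
$resource    = "/api/logs"
$rfc1123date = [DateTime]::UtcNow.ToString("r")
$stringToHash = "$method`n$($body.Length)`n$contentType`nx-ms-date:$rfc1123date`n$resource"
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($sharedKey)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToHash)))

$headers = @{
    "Authorization" = "SharedKey ${customerId}:${signature}"
    "Log-Type"      = $logType
    "x-ms-date"     = $rfc1123date
}
$uri = "https://$customerId.ods.opinsights.azure.com$resource?api-version=2016-04-01"
Invoke-WebRequest -Uri $uri -Method Post -ContentType $contentType -Headers $headers -Body $body -UseBasicParsing
```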
## Next steps
+
* See [Parse text data in Azure Monitor](../logs/parse-text.md) for methods to parse each imported log entry into multiple properties.
* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
Title: Log Analytics agent overview
-description: This topic helps you understand how to collect data and monitor computers hosted in Azure, on-premises, or other cloud environment with Log Analytics.
+description: This article helps you understand how to collect data and monitor computers hosted in Azure, on-premises, or other cloud environments with Log Analytics.
# Log Analytics agent overview
-The Azure Log Analytics agent collects telemetry from Windows and Linux virtual machines in any cloud, on-premises machines, and those monitored by [System Center Operations Manager](/system-center/scom/) and sends collected data to your Log Analytics workspace in Azure Monitor. The Log Analytics agent also supports insights and other services in Azure Monitor such as [VM insights](../vm/vminsights-enable-overview.md), [Microsoft Defender for Cloud](../../security-center/index.yml), and [Azure Automation](../../automation/automation-intro.md). This article provides a detailed overview of the agent, system and network requirements, and deployment methods.
+The Azure Log Analytics agent collects telemetry from Windows and Linux virtual machines in any cloud, on-premises machines, and machines monitored by [System Center Operations Manager](/system-center/scom/). Collected data is sent to your Log Analytics workspace in Azure Monitor.
+
+The Log Analytics agent also supports insights and other services in Azure Monitor, such as [VM insights](../vm/vminsights-enable-overview.md), [Microsoft Defender for Cloud](../../security-center/index.yml), and [Azure Automation](../../automation/automation-intro.md). This article provides a detailed overview of the agent, system and network requirements, and deployment methods.
>[!IMPORTANT]
->The Log Analytics agent is on a **deprecation path** and won't be supported after **August 31, 2024**. If you use the Log Analytics agent to ingest data to Azure Monitor, make sure to [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
+>The Log Analytics agent is on a **deprecation path** and won't be supported after **August 31, 2024**. If you use the Log Analytics agent to ingest data to Azure Monitor, [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) prior to that date.
-> [!NOTE]
-> You may also see the Log Analytics agent referred to as the Microsoft Monitoring Agent (MMA).
+You might also see the Log Analytics agent referred to as Microsoft Monitoring Agent (MMA).
## Comparison to other agents
-See [Overview of Azure Monitor agents](agents-overview.md) for a comparison between the Log Analytics and other agents in Azure Monitor.
+
+For a comparison between the Log Analytics and other agents in Azure Monitor, see [Overview of Azure Monitor agents](agents-overview.md).
## Supported operating systems
- See [Supported operating systems](../agents/agents-overview.md#supported-operating-systems) for a list of the Windows and Linux operating system versions that are supported by the Log Analytics agent.
+ For a list of the Windows and Linux operating system versions that are supported by the Log Analytics agent, see [Supported operating systems](../agents/agents-overview.md#supported-operating-systems).
## Installation options

There are multiple methods to install the Log Analytics agent and connect your machine to Azure Monitor depending on your requirements. The following sections list the possible methods for different types of virtual machine.

> [!NOTE]
-> It is not supported to clone a machine with the Log Analytics Agent already configured. If the agent has already been associated with a workspace this will not work for 'golden images'.
+> Cloning a machine with the Log Analytics Agent already configured is *not* supported. If the agent is already associated with a workspace, cloning won't work for "golden images."
### Azure virtual machine

-- Use [VM insights](../vm/vminsights-enable-overview.md) to install the agent for a [single machine using the Azure portal](../vm/vminsights-enable-portal.md) or for [multiple machines at scale](../vm/vminsights-enable-policy.md). This will install the Log Analytics agent and [Dependency agent](agents-overview.md#dependency-agent). 
-- Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) can be installed with the Azure portal, Azure CLI, Azure PowerShell, or a Azure Resource Manager template.
-- [Microsoft Defender for Cloud can provision the Log Analytics agent](../../security-center/security-center-enable-data-collection.md) on all supported Azure VMs and any new ones that are created if you enable it to monitor for security vulnerabilities and threats.
-- Install for individual Azure virtual machines [manually from the Azure portal](../vm/monitor-virtual-machine.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
+- Use [VM insights](../vm/vminsights-enable-overview.md) to install the agent for a [single machine by using the Azure portal](../vm/vminsights-enable-portal.md) or for [multiple machines at scale](../vm/vminsights-enable-policy.md). This process will install the Log Analytics agent and [Dependency agent](agents-overview.md#dependency-agent).
+- Install the Log Analytics VM extension for [Windows](../../virtual-machines/extensions/oms-windows.md) or [Linux](../../virtual-machines/extensions/oms-linux.md) with the Azure portal, the Azure CLI, Azure PowerShell, or an Azure Resource Manager template.
+- Use [Microsoft Defender for Cloud to provision the Log Analytics agent](../../security-center/security-center-enable-data-collection.md) on all supported Azure VMs and any new ones that are created if you've enabled it to monitor for security vulnerabilities and threats.
+- Install individual Azure virtual machines [manually from the Azure portal](../vm/monitor-virtual-machine.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
- Connect the machine to a workspace from the **Virtual machines** option in the **Log Analytics workspaces** menu in the Azure portal.

### Windows virtual machine on-premises or in another cloud

-- Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension. Review the [deployment options](../../azure-arc/servers/concept-log-analytics-extension-deployment.md) to understand the different deployment methods available for the extension on machines registered with Arc-enabled servers.
+- Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension. Review the [deployment options](../../azure-arc/servers/concept-log-analytics-extension-deployment.md) to understand the different deployment methods available for the extension on machines registered with Azure Arc-enabled servers.
- [Manually install](../agents/agent-windows.md) the agent from the command line (a hedged sketch follows this list).
- Automate the installation with [Azure Automation DSC](../agents/agent-windows.md#install-agent-using-dsc-in-azure-automation).
-- Use a [Resource Manager template with Azure Stack](https://github.com/Azure/AzureStack-QuickStart-Templates/tree/master/MicrosoftMonitoringAgent-ext-win)
+- Use a [Resource Manager template with Azure Stack](https://github.com/Azure/AzureStack-QuickStart-Templates/tree/master/MicrosoftMonitoringAgent-ext-win).
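As a hedged sketch of the command-line option, the following PowerShell extracts the downloaded agent bundle and runs a silent install. The property names follow the linked agent installation article; the extraction folder, workspace ID, and key are placeholders you'd replace with your own values.

```powershell
# Extract the downloaded bundle (64-bit example) to a working folder.
.\MMASetup-AMD64.exe /c /t:C:\AgentInstall

# Silent install that attaches the agent to a Log Analytics workspace.
C:\AgentInstall\setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 `
    OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE=0 `
    OPINSIGHTS_WORKSPACE_ID="<workspace-id>" `
    OPINSIGHTS_WORKSPACE_KEY="<workspace-key>" `
    AcceptEndUserLicenseAgreement=1
```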
### Linux virtual machine on-premises or in another cloud

-- Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension. Review the [deployment options](../../azure-arc/servers/concept-log-analytics-extension-deployment.md) to understand the different deployment methods available for the extension on machines registered with Arc-enabled servers.
+- Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) to deploy and manage the Log Analytics VM extension. Review the [deployment options](../../azure-arc/servers/concept-log-analytics-extension-deployment.md) to understand the different deployment methods available for the extension on machines registered with Azure Arc-enabled servers.
- [Manually install](../vm/monitor-virtual-machine.md) the agent by calling a wrapper script hosted on GitHub.
- Integrate [System Center Operations Manager](./om-agents.md) with Azure Monitor to forward collected data from Windows computers reporting to a management group.

## Data collected
-The following table lists the types of data you can configure a Log Analytics workspace to collect from all connected agents. See [What is monitored by Azure Monitor?](../monitor-reference.md) for a list of insights, solutions, and other solutions that use the Log Analytics agent to collect other kinds of data.
+The following table lists the types of data you can configure a Log Analytics workspace to collect from all connected agents. For a list of insights and solutions that use the Log Analytics agent to collect other kinds of data, see [What is monitored by Azure Monitor?](../monitor-reference.md).
| Data Source | Description |
| | |
-| [Windows Event logs](../agents/data-sources-windows-events.md) | Information sent to the Windows event logging system. |
-| [Syslog](../agents/data-sources-syslog.md) | Information sent to the Linux event logging system. |
-| [Performance](../agents/data-sources-performance-counters.md) | Numerical values measuring performance of different aspects of operating system and workloads. |
-| [IIS logs](../agents/data-sources-iis-logs.md) | Usage information for IIS web sites running on the guest operating system. |
-| [Custom logs](../agents/data-sources-custom-logs.md) | Events from text files on both Windows and Linux computers. |
-
+| [Windows Event logs](../agents/data-sources-windows-events.md) | Information sent to the Windows event logging system |
+| [Syslog](../agents/data-sources-syslog.md) | Information sent to the Linux event logging system |
+| [Performance](../agents/data-sources-performance-counters.md) | Numerical values measuring performance of different aspects of operating system and workloads |
+| [IIS logs](../agents/data-sources-iis-logs.md) | Usage information for IIS websites running on the guest operating system |
+| [Custom logs](../agents/data-sources-custom-logs.md) | Events from text files on both Windows and Linux computers |
## Other services
-The agent for Linux and Windows isn't only for connecting to Azure Monitor. Other services such as Microsoft Defender for Cloud and Microsoft Sentinel rely on the agent and its connected Log Analytics workspace. The agent also supports Azure Automation to host the Hybrid Runbook worker role and other services such as [Change Tracking](../../automation/change-tracking/overview.md), [Update Management](../../automation/update-management/overview.md), and [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md). For more information about the Hybrid Runbook Worker role, see [Azure Automation Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md).
+The agent for Linux and Windows isn't only for connecting to Azure Monitor. Other services such as Microsoft Defender for Cloud and Microsoft Sentinel rely on the agent and its connected Log Analytics workspace. The agent also supports Azure Automation to host the Hybrid Runbook Worker role and other services such as [Change Tracking](../../automation/change-tracking/overview.md), [Update Management](../../automation/update-management/overview.md), and [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md). For more information about the Hybrid Runbook Worker role, see [Azure Automation Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md).
## Workspace and management group limitations
-See [Configure agent to report to an Operations Manager management group](../agents/agent-manage.md#configure-agent-to-report-to-an-operations-manager-management-group) for details on connecting an agent to an Operations Manager management group.
+For details on connecting an agent to an Operations Manager management group, see [Configure agent to report to an Operations Manager management group](../agents/agent-manage.md#configure-agent-to-report-to-an-operations-manager-management-group).
-* Windows agents can connect to up to four workspaces, even if they are connected to a System Center Operations Manager management group.
-* The Linux agent does not support multi-homing and can only connect to a single workspace or management group.
+* Windows agents can connect to up to four workspaces, even if they're connected to a System Center Operations Manager management group (see the sketch after this list).
+* The Linux agent doesn't support multi-homing and can only connect to a single workspace or management group.
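As a minimal sketch of multi-homing a Windows agent, the following PowerShell uses the agent's local `AgentConfigManager.MgmtSvcCfg` COM interface to attach an additional workspace. The method names reflect the agent management documentation and the IDs are placeholders; verify them against your installed agent version.

```powershell
# Run locally on a machine where the Windows (MMA) agent is already installed.
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'

# Attach a second workspace (multi-homing); the ID and key are placeholders.
$mma.AddCloudWorkspace('<workspace-id>', '<workspace-key>')
$mma.ReloadConfiguration()
```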
## Security limitations
-* The Windows and Linux agents support the [FIPS 140 standard](/windows/security/threat-protection/fips-140-validation), but [other types of hardening may not be supported](../agents/agent-linux.md#supported-linux-hardening).
+The Windows and Linux agents support the [FIPS 140 standard](/windows/security/threat-protection/fips-140-validation), but [other types of hardening might not be supported](../agents/agent-linux.md#supported-linux-hardening).
## TLS 1.2 protocol
-To ensure the security of data in transit to Azure Monitor logs, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**. For additional information, review [Sending data securely using TLS 1.2](../logs/data-security.md#sending-data-securely-using-tls-12).
+To ensure the security of data in transit to Azure Monitor logs, we strongly encourage you to configure the agent to use at least Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable. Although they still currently work to allow backward compatibility, they are *not recommended*. For more information, see [Sending data securely using TLS 1.2](../logs/data-security.md#sending-data-securely-using-tls-12).
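As a hedged sketch only, one common way to opt the agent's .NET-based components into TLS 1.2 on older Windows systems is to enable strong cryptography for the .NET Framework. These registry values follow general .NET guidance and are an assumption here; confirm them against the linked data security article before applying.

```powershell
# Enable strong cryptography (TLS 1.2) for 64-bit and 32-bit .NET Framework applications.
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
```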
## Network requirements
-The agent for Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. If the machine connects through a firewall or proxy server to communicate over the Internet, review requirements below to understand the network configuration required. If your IT security policies do not allow computers on the network to connect to the Internet, you can set up a [Log Analytics gateway](gateway.md) and then configure the agent to connect through the gateway to Azure Monitor. The agent can then receive configuration information and send data collected.
+The agent for Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. If the machine connects through a firewall or proxy server to communicate over the internet, review the following requirements to understand the network configuration required. If your IT security policies do not allow computers on the network to connect to the internet, set up a [Log Analytics gateway](gateway.md) and configure the agent to connect through the gateway to Azure Monitor. The agent can then receive configuration information and send data collected.
-![Log Analytics agent communication diagram](./media/log-analytics-agent/log-analytics-agent-01.png)
+![Diagram that shows Log Analytics agent communication.](./media/log-analytics-agent/log-analytics-agent-01.png)
The following table lists the proxy and firewall configuration information required for the Linux and Windows agents to communicate with Azure Monitor logs.
The following table lists the proxy and firewall configuration information requi
|*.blob.core.windows.net |Port 443 |Outbound|Yes |
|*.azure-automation.net |Port 443 |Outbound|Yes |
-For firewall information required for Azure Government, see [Azure Government management](../../azure-government/compare-azure-government-global-azure.md#azure-monitor).
+For firewall information required for Azure Government, see [Azure Government management](../../azure-government/compare-azure-government-global-azure.md#azure-monitor).
> [!IMPORTANT]
> If your firewall is doing CNAME inspections, you need to configure it to allow all domains in the CNAME.
If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and
### Proxy configuration
-The Windows and Linux agent supports communicating either through a proxy server or Log Analytics gateway to Azure Monitor using the HTTPS protocol. Both anonymous and basic authentication (username/password) are supported. For the Windows agent connected directly to the service, the proxy configuration is specified during installation or [after deployment](../agents/agent-manage.md#update-proxy-settings) from Control Panel or with PowerShell. Log Analytics Agent (MMA) does not use the system proxy settings. Hence, user has to pass proxy setting while installing MMA and these settings will be stored under MMA configuration(registry) on VM.
+The Windows and Linux agent supports communicating either through a proxy server or Log Analytics gateway to Azure Monitor by using the HTTPS protocol. Both anonymous and basic authentication (username/password) are supported.
+
+For the Windows agent connected directly to the service, the proxy configuration is specified during installation or [after deployment](../agents/agent-manage.md#update-proxy-settings) from Control Panel or with PowerShell. Log Analytics Agent (MMA) doesn't use the system proxy settings. As a result, the user has to pass the proxy setting while installing MMA. These settings will be stored under MMA configuration (registry) on the virtual machine.
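As a sketch only, the following PowerShell sets an authenticated proxy on an already-installed Windows agent through its local COM configuration object. The `SetProxyInfo` method name is taken from the agent management article referenced in the preceding paragraph and should be treated as an assumption; the proxy URL and credentials are placeholders.

```powershell
# Run locally on the machine where the Windows (MMA) agent is installed.
$proxyUrl = 'https://proxy.contoso.com:30443'   # placeholder proxy address
$cred     = Get-Credential                      # proxy username/password, if required

$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.SetProxyInfo($proxyUrl, $cred.UserName, $cred.GetNetworkCredential().Password)
```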
For the Linux agent, the proxy server is specified during installation or [after installation](../agents/agent-manage.md#update-proxy-settings) by modifying the proxy.conf configuration file. The Linux agent proxy configuration value has the following syntax:
For example:
`https://user01:password@proxy01.contoso.com:30443`

> [!NOTE]
-> If you use special characters such as "\@" in your password, you receive a proxy connection error because value is parsed incorrectly. To work around this issue, encode the password in the URL using a tool such as [URLDecode](https://www.urldecoder.org/).
+> If you use special characters such as "\@" in your password, you'll receive a proxy connection error because the value is parsed incorrectly. To work around this issue, encode the password in the URL by using a tool like [URLDecode](https://www.urldecoder.org/).
## Next steps

* Review [data sources](../agents/agent-data-sources.md) to understand the data sources available to collect data from your Windows or Linux system.
-* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
+* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
* Learn about [monitoring solutions](../insights/solutions.md) that add functionality to Azure Monitor and also collect data into the Log Analytics workspace.
azure-monitor Java In Process Agent Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent-redirect.md
+
+ Title: Azure Monitor Application Insights Java (redirect to OpenTelemetry)
+description: Redirect to OpenTelemetry agent
+ Last updated : 07/22/2022
+ms.devlang: java
+++++
+# Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications (redirect to OpenTelemetry)
+
+Whether you are deploying on-premises or in the cloud, you can use Microsoft's OpenTelemetry-based Java Auto-Instrumentation agent.
+
+For more information, see [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md#azure-monitor-opentelemetry-based-auto-instrumentation-for-java-applications).
+
+## Next steps
+
+- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](java-in-process-agent.md#azure-monitor-opentelemetry-based-auto-instrumentation-for-java-applications)
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Title: Azure Monitor Application Insights Java description: Application performance monitoring for Java applications running in any environment without requiring code modification. Distributed tracing and application map. Previously updated : 05/02/2022 Last updated : 07/22/2022 ms.devlang: java + # Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Connection strings provide a single configuration setting and eliminate the need
## Supported SDK Versions

-- .NET and .NET Core [LTS](https://dotnet.microsoft.com/download/visual-studio-sdks)
+- .NET and .NET Core v2.12.0+
- Java v2.5.1 and Java 3.0+
- JavaScript v2.3.0+
- NodeJS v1.5.0+
Billing isn't impacted.
### Microsoft Q&A
-Post questions to the [answers forum](/answers/topics/24223/azure-monitor.html).
+Post questions to the [answers forum](/answers/topics/24223/azure-monitor.html).
azure-monitor Mobile Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/mobile-center-quickstart.md
Title: Monitor mobile apps with Azure Monitor Application Insights
-description: Provides instructions to quickly set up a mobile app for monitoring with Azure Monitor Application Insights and App Center
+ Title: Monitor mobile or universal Windows apps with Azure Monitor Application Insights
+description: Provides instructions to quickly set up a mobile or universal Windows app for monitoring with Azure Monitor Application Insights and App Center
Previously updated : 06/26/2019 Last updated : 07/21/2022 ms.devlang: java, swift
-# Start analyzing your mobile app with App Center and Application Insights
+# Start analyzing your mobile or UWP app with App Center and Application Insights
This tutorial guides you through connecting your app's App Center instance to Application Insights. With Application Insights, you can query, segment, filter, and analyze your telemetry with more powerful tools than are available from the [Analytics](/mobile-center/analytics/) service of App Center.
azure-monitor Change Analysis Custom Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-custom-filters.md
Browsing through a long list of changes in the entire subscription is time consu
The search bar filters the changes according to the input keywords. Search bar results apply only to the changes loaded by the page already and don't pull in results from the server side.

## Next steps
-- Use [Change Analysis with the Az.ChangeAnalysis PowerShell module](./change-analysis-powershell.md) to determine changes made to resources in your Azure subscription.
-- [Troubleshoot Change Analysis](./change-analysis-troubleshoot.md).
+[Troubleshoot Change Analysis](./change-analysis-troubleshoot.md).
azure-monitor Change Analysis Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-powershell.md
- Title: Azure PowerShell for Change Analysis in Azure Monitor
-description: Learn how to use Azure PowerShell in Azure Monitor's Change Analysis to determine changes to resources in your subscription
---
-ms.contributor: cawa
Previously updated : 04/11/2022--
-ms.reviwer: cawa
--
-# Azure PowerShell for Change Analysis in Azure Monitor (preview)
-
-This article describes how you can use Change Analysis with the
-[Az.ChangeAnalysis PowerShell module](/powershell/module/az.changeanalysis/) to determine changes
-made to resources in your Azure subscription.
-
-> [!CAUTION]
-> Change analysis is currently in public preview. This preview version is provided without a
-> service level agreement. It's not recommended for production workloads. Some features might not be
-> supported or might have constrained capabilities. For more information, see
-> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Prerequisites
-
-If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account
-before you begin.
--
-> [!IMPORTANT]
-> While the **Az.ChangeAnalysis** PowerShell module is in preview, you must install it separately using
-> the `Install-Module` cmdlet.
-
-```azurepowershell-interactive
-Install-Module -Name Az.ChangeAnalysis -Scope CurrentUser -Repository PSGallery
-```
-
-If you have multiple Azure subscriptions, choose the appropriate subscription. Select a specific
-subscription using the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
-
-```azurepowershell-interactive
-Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
-```
-
-## View Azure subscription changes
-
-To view changes made to all resources in your Azure subscription, you use the `Get-AzChangeAnalysis`
-command. You specify the time range for events in UTC date format using the `StartTime` and
-`EndTime` parameters.
-
-```azurepowershell-interactive
-$startDate = Get-Date -Date '2022-04-07T12:09:03.141Z' -AsUTC
-$endDate = Get-Date -Date '2022-04-10T12:09:03.141Z' -AsUTC
-Get-AzChangeAnalysis -StartTime $startDate -EndTime $endDate
-```
-
-## View Azure resource group changes
-
-To view changes made to all resources in a resource group, you use the `Get-AzChangeAnalysis`
-command and specify the `ResourceGroupName` parameter. The following example returns a list of
-changes made within the last 12 hours. Specify `StartTime` and `EndTime` in UTC date formats.
-
-```azurepowershell-interactive
-$startDate = (Get-Date -AsUTC).AddHours(-12)
-$endDate = Get-Date -AsUTC
-Get-AzChangeAnalysis -ResourceGroupName <myResourceGroup> -StartTime $startDate -EndTime $endDate
-```
-
-## View Azure resource changes
-
-To view changes made to a resource, you use the `Get-AzChangeAnalysis` command and specify the
-`ResourceId` parameter. The following example uses PowerShell splatting to return a list of the
-changes made within the last day. Specify `StartTime` and `EndTime` in UTC date formats.
-
-```azurepowershell-interactive
-$Params = @{
- StartTime = (Get-Date -AsUTC).AddDays(-1)
- EndTime = Get-Date -AsUTC
- ResourceId = '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/<myResourceGroup>/providers/Microsoft.Network/networkInterfaces/<myNetworkInterface>'
-}
-Get-AzChangeAnalysis @Params
-```
-
-> [!NOTE]
-> A resource not found message is returned if the specified resource has been removed or deleted.
-> Use Change Analysis at the resource group or subscription level to determine changes for resources
-> that have been removed or deleted.
-
-## View detailed information
-
-You can view more properties for any of the commands shown in this article by piping the results to
-`Select-Object -Property *`.
-
-```azurepowershell-interactive
-$startDate = (Get-Date -AsUTC).AddHours(-12)
-$endDate = Get-Date -AsUTC
-Get-AzChangeAnalysis -ResourceGroupName <myResourceGroup> -StartTime $startDate -EndTime $endDate |
- Select-Object -Property *
-```
-
-The `PropertyChange` property is a complex object that has additional nested properties. Pipe the
-`PropertyChange` property to `Select-Object -Property *` to see the nested properties.
-
-```azurepowershell-interactive
-$startDate = (Get-Date -AsUTC).AddHours(-12)
-$endDate = Get-Date -AsUTC
-(Get-AzChangeAnalysis -ResourceGroupName <myResourceGroup> -StartTime $startDate -EndTime $endDate |
- Select-Object -First 1).PropertyChange | Select-object -Property *
-```
-
-## Next steps
-- Learn how to use [Get-AzChangeAnalysis](/powershell/module/az.changeanalysis/get-azchangeanalysis/)
-- Learn how to [use Change Analysis in Azure Monitor](change-analysis.md)
-- Learn about [visualizations in Change Analysis](change-analysis-visualizations.md)
-- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
Title: Troubleshoot problems with Azure Application Insights Profiler
-description: This article presents troubleshooting steps and information to help developers enable and use Application Insights Profiler.
+ Title: Troubleshoot the Application Insights Profiler
+description: Walk through troubleshooting steps and information to enable and use Azure Application Insights Profiler.
Previously updated : 08/06/2018- Last updated : 07/21/2022+
-# Troubleshoot problems enabling or viewing Application Insights Profiler
+# Troubleshoot the Application Insights Profiler
-## <a id="troubleshooting"></a>General troubleshooting
+## Make sure you're using the appropriate Profiler Endpoint
-### Make sure you're using the appropriate Profiler Endpoint
-
-Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+Currently, the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
|App Setting | US Government Cloud | China Cloud |
|||-|
|ApplicationInsightsProfilerEndpoint | `https://profiler.monitor.azure.us` | `https://profiler.monitor.azure.cn` |
|ApplicationInsightsEndpoint | `https://dc.applicationinsights.us` | `https://dc.applicationinsights.azure.cn` |
-### Profiles are uploaded only if there are requests to your application while Profiler is running
+## Make sure your app is running on the right versions
-Azure Application Insights Profiler collects data for two minutes each hour. It can also collect data when you select the **Profile Now** button in the **Configure Application Insights Profiler** pane.
+Profiler is supported on the [.NET Framework later than 4.6.2](https://dotnet.microsoft.com/download/dotnet-framework).
-> [!NOTE]
-> The profiling data is uploaded only when it can be attached to a request that happened while Profiler was running.
+If your web app is an ASP.NET Core application, it must be running on the [latest supported ASP.NET Core runtime](https://dotnet.microsoft.com/en-us/download/dotnet/6.0).
-Profiler writes trace messages and custom events to your Application Insights resource. You can use these events to see how Profiler is running:
+## Make sure you're using the right Azure service plan
-1. Search for trace messages and custom events sent by Profiler to your Application Insights resource. You can use this search string to find the relevant data:
+Profiler isn't currently supported on free or shared app service plans. Upgrade to one of the basic plans for Profiler to start working.
- ```
- stopprofiler OR startprofiler OR upload OR ServiceProfilerSample
- ```
- The following image displays two examples of searches from two AI resources:
-
- * At the left, the application isn't receiving requests while Profiler is running. The message explains that the upload was canceled because of no activity.
+## Make sure you're searching for Profiler data within the right timeframe
+
+If the data you're trying to view is older than a couple of weeks, try limiting your time filter and try again. Traces are deleted after seven days.
+
+## Make sure you can access the gateway
+
+Check that proxies or a firewall isn't blocking your access to https://gateway.azureserviceprofiler.net.
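As a quick, hedged check (using the built-in Test-NetConnection cmdlet on Windows), you can confirm that the gateway endpoint is reachable over HTTPS from the machine in question:

```powershell
# TcpTestSucceeded should be True if nothing is blocking outbound HTTPS to the Profiler gateway.
Test-NetConnection -ComputerName gateway.azureserviceprofiler.net -Port 443
```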
- * At the right, Profiler started and sent custom events when it detected requests that happened while Profiler was running. If the `ServiceProfilerSample` custom event is displayed, it means that a profile was captured and its available in the **Application Insights Performance** pane.
+## Make sure the Profiler is running
- If no records are displayed, Profiler isn't running. To troubleshoot, see the troubleshooting sections for your specific app type later in this article.
+Profiling data is uploaded only when it can be attached to a request that happened while Profiler was running. The Profiler collects data for two minutes each hour. You can also trigger the Profiler by [starting a profiling session](./profiler-settings.md#profile-now).
- ![Search Profiler telemetry][profiler-search-telemetry]
+Profiler writes trace messages and custom events to your Application Insights resource. You can use these events to see how Profiler is running.
-### Other things to check
-* Make sure that your app is running on .NET Framework 4.6.
-* If your web app is an ASP.NET Core application, it must be running at least ASP.NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-* If the data you're trying to view is older than a couple of weeks, try limiting your time filter and try again. Traces are deleted after seven days.
-* Make sure that proxies or a firewall haven't blocked access to https://gateway.azureserviceprofiler.net.
-* Profiler isn't supported on free or shared app service plans. If you're using one of those plans, try scaling up to one of the basic plans and Profiler should start working.
+Search for trace messages and custom events sent by Profiler to your Application Insights resource.
-### <a id="double-counting"></a>Double counting in parallel threads
+1. In your Application Insights resource, select **Search** from the top menu bar.
-In some cases, the total time metric in the stack viewer is more than the duration of the request.
+ :::image type="content" source="./media/profiler-troubleshooting/search-trace-messages.png" alt-text="Screenshot of selecting the search button from the Application Insights resource.":::
-This situation might occur when two or more parallel threads are associated with a request. In that case, the total thread time is more than the elapsed time.
+1. Use the following search string to find the relevant data:
-One thread might be waiting on the other to be completed. The viewer tries to detect this situation and omits the uninteresting wait. In doing so, it errs on the side of displaying too much information rather than omit what might be critical information.
+ ```
+ stopprofiler OR startprofiler OR upload OR ServiceProfilerSample
+ ```
-When you see parallel threads in your traces, determine which threads are waiting so that you can identify the hot path for the request.
+ :::image type="content" source="./media/profiler-troubleshooting/search-results.png" alt-text="Screenshot of the search results from aforementioned search string.":::
+
+ The search results above include two examples of searches from two AI resources:
+
+ - If the application isn't receiving requests while Profiler is running, the message explains that the upload was canceled because of no activity.
+
+ - Profiler started and sent custom events when it detected requests that happened while Profiler was running. If the `ServiceProfilerSample` custom event is displayed, it means that a profile was captured and is available in the **Application Insights Performance** pane.
+
+ If no records are displayed, Profiler isn't running. Make sure you've [enabled Profiler on your Azure service](./profiler.md).
-Usually, the thread that quickly goes into a wait state is simply waiting on the other threads. Concentrate on the other threads, and ignore the time in the waiting threads.
+## Double counting in parallel threads
-### Error report in the profile viewer
-Submit a support ticket in the portal. Be sure to include the correlation ID from the error message.
+When two or more parallel threads are associated with a request, the total time metric in the stack viewer may be more than the duration of the request. In that case, the total thread time is more than the actual elapsed time.
-## Troubleshoot Profiler on Azure App Service
+For example, one thread might be waiting on the other to be completed. The viewer tries to detect this situation and omits the uninteresting wait. In doing so, it errs on the side of displaying too much information, rather than omitting what might be critical information.
-For Profiler to work properly:
-* Your web app service plan must be Basic tier or higher.
-* Your web app must have Application Insights enabled.
-* Your web app must have the following app settings:
+When you see parallel threads in your traces, determine which threads are waiting so that you can identify the hot path for the request. Usually, the thread that quickly goes into a wait state is simply waiting on the other threads. Concentrate on the other threads, and ignore the time in the waiting threads.
- |App Setting | Value |
- ||-|
- |APPINSIGHTS_INSTRUMENTATIONKEY | iKey for your Application Insights resource |
- |APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
- |DiagnosticServices_EXTENSION_VERSION | ~3 |
+## Troubleshoot Profiler on your specific Azure service
-* The **ApplicationInsightsProfiler3** webjob must be running. To check the webjob:
- 1. Go to [Kudu](/archive/blogs/cdndevs/the-kudu-debug-console-azure-websites-best-kept-secret).
- 1. In the **Tools** menu, select **WebJobs Dashboard**.
+### Azure App Service
+
+For Profiler to work properly, make sure:
+
+- Your web app has [Application Insights enabled](./profiler.md) with the [right settings](./profiler.md#for-application-insights-and-app-service-in-different-subscriptions)
+
+- The **ApplicationInsightsProfiler3** WebJob is running. To check the webjob:
+ 1. Go to [Kudu](https://github.com/projectkudu/kudu/wiki/Accessing-the-kudu-service). From the Azure portal:
+ 1. In your App Service, select **Advanced Tools** from the left side menu.
+ 1. Select **Go**.
+ 1. In the top menu, select **Tools** > **WebJobs dashboard**.
The **WebJobs** pane opens.
- ![Screenshot shows the WebJobs pane, which displays the name, status, and last run time of jobs.][profiler-webjob]
-
+ :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job.png" alt-text="Screenshot of the WebJobs pane, which displays the name, status, and last run time of jobs.":::
+ 1. To view the details of the webjob, including the log, select the **ApplicationInsightsProfiler3** link. The **Continuous WebJob Details** pane opens.
- ![Screenshot shows the Continuous WebJob Details pane.][profiler-webjob-log]
-
-If Profiler isn't working for you, you can download the log and send it to our team for assistance, serviceprofilerhelp@microsoft.com.
+ :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job-log.png" alt-text="Screenshot of the Continuous WebJob Details pane.":::
-### Check the Diagnostic Services site extension' Status Page
-If Profiler was enabled through the [Application Insights pane](profiler.md) in the portal, it was enabled by the Diagnostic Services site extension.
+If Profiler still isn't working for you, you can download the log and [send it to our team](mailto:serviceprofilerhelp@microsoft.com).
-> [!NOTE]
-> Codeless installation of Application Insights Profiler follows the .NET Core support policy.
-> For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+#### Check the Diagnostic Services site extension's status page
-You can check the Status Page of this extension by going to the following url:
+If Profiler was enabled through the [Application Insights pane](profiler.md) in the portal, it was enabled by the Diagnostic Services site extension. You can check the status page of this extension by going to the following url:
`https://{site-name}.scm.azurewebsites.net/DiagnosticServices`

> [!NOTE]
-> The domain of the Status Page link will vary depending on the cloud.
-This domain will be the same as the Kudu management site for App Service.
+> The domain of the status page link will vary depending on the cloud. This domain will be the same as the Kudu management site for App Service.
-This Status Page shows the installation state of the Profiler and Snapshot Collector agents. If there was an unexpected error, it will be displayed and show how to fix it.
+The status page shows the installation state of the Profiler and [Snapshot Debugger](../snapshot-debugger/snapshot-debugger.md) agents. If there was an unexpected error, it will be displayed and show how to fix it.
-You can use the Kudu management site for App Service to get the base url of this Status Page:
+You can use the Kudu management site for App Service to get the base url of this status page:
1. Open your App Service application in the Azure portal.
-2. Select **Advanced Tools**, or search for **Kudu**.
+2. Select **Advanced Tools**.
3. Select **Go**.
-4. Once you are on the Kudu management site, in the URL, **append the following `/DiagnosticServices` and press enter**.
- It will end like this: `https://<kudu-url>/DiagnosticServices`
+4. Once you are on the Kudu management site:
+ 1. Append `/DiagnosticServices` to the URL.
+ 1. Press enter.
+
+It will end like this: `https://<kudu-url>/DiagnosticServices`.
+
+It will display a status page similar to:
-It will display a Status Page similar like the below:
![Diagnostic Services Status Page](../app/media/diagnostic-services-site-extension/status-page.png)
-
-### Manual installation
-When you configure Profiler, updates are made to the web app's settings. If your environment requires it, you can apply the updates manually. An example might be that your application is running in a Web Apps environment for Power Apps. To apply updates manually:
+> [!NOTE]
+> Codeless installation of Application Insights Profiler follows the .NET Core support policy. For more information about supported runtimes, see [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
-1. In the **Web App Control** pane, open **Settings**.
+#### Manual installation
-1. Set **.NET Framework version** to **v4.6**.
+When you configure Profiler, updates are made to the web app's settings. If necessary, you can [apply the updates manually](./profiler.md#verify-always-on-setting-is-enabled).
-1. Set **Always On** to **On**.
-1. Create these app settings:
+#### Too many active profiling sessions
- |App Setting | Value |
- ||-|
- |APPINSIGHTS_INSTRUMENTATIONKEY | iKey for your Application Insights resource |
- |APPINSIGHTS_PROFILERFEATURE_VERSION | 1.0.0 |
- |DiagnosticServices_EXTENSION_VERSION | ~3 |
+You can enable Profiler on a maximum of four Web Apps that are running in the same service plan. If you have more than four, Profiler might throw the following error:
-### Too many active profiling sessions
+*Microsoft.ServiceProfiler.Exceptions.TooManyETWSessionException*.
-You can enable Profiler on a maximum of four Web Apps that are running in the same service plan. If you've more than four, Profiler might throw a *Microsoft.ServiceProfiler.Exceptions.TooManyETWSessionException*. To solve it, move some web apps to a different service plan.
+To solve it, move some web apps to a different service plan.
-### Deployment error: Directory Not Empty 'D:\\home\\site\\wwwroot\\App_Data\\jobs'
+#### Deployment error: Directory Not Empty 'D:\\home\\site\\wwwroot\\App_Data\\jobs'
If you're redeploying your web app to a Web Apps resource with Profiler enabled, you might see the following message: *Directory Not Empty 'D:\\home\\site\\wwwroot\\App_Data\\jobs'*
-This error occurs if you run Web Deploy from scripts or from the Azure Pipelines. The solution is to add the following deployment parameters to the Web Deploy task:
+This error occurs if you run Web Deploy from scripts or from the Azure Pipelines. Resolve by adding the following deployment parameters to the Web Deploy task:
```
-skip:Directory='.*\\App_Data\\jobs\\continuous\\ApplicationInsightsProfiler.*'
-skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs\\continuous$'
-skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data\\jobs$'
-skip:skipAction=Delete,objectname='dirPath',absolutepath='.*\\App_Data$'
```
-These parameters delete the folder that's used by Application Insights Profiler and unblock the redeploy process. They don't affect the Profiler instance that's currently running.
+These parameters delete the folder used by Application Insights Profiler and unblock the redeploy process. They don't affect the Profiler instance that's currently running.
-### How do I determine whether Application Insights Profiler is running?
+#### Is the Profiler running?
Profiler runs as a continuous webjob in the web app. You can open the web app resource in the [Azure portal](https://portal.azure.com). In the **WebJobs** pane, check the status of **ApplicationInsightsProfiler**. If it isn't running, open **Logs** to get more information.
-## Troubleshoot VMs and Cloud Services
-
->**The bug in the profiler that ships in the WAD for Cloud Services has been fixed.** The latest version of WAD (1.12.2.0) for Cloud Services works with all recent versions of the App Insights SDK. Cloud Service hosts will upgrade WAD automatically, but it isn't immediate. To force an upgrade, you can redeploy your service or reboot the node.
+### VMs and Cloud Services
-To see whether Profiler is configured correctly by Azure Diagnostics, follow the below steps:
+To see whether Profiler is configured correctly by Azure Diagnostics:
+
1. Verify that the content of the Azure Diagnostics configuration deployed is what you expect.
-1. Second, make sure that Azure Diagnostics passes the proper iKey on the Profiler command line.
+1. Make sure that Azure Diagnostics passes the proper iKey on the Profiler command line.
-1. Third, check the Profiler log file to see whether Profiler ran but returned an error.
+1. Check the Profiler log file to see whether Profiler ran but returned an error.
To check the settings that were used to configure Azure Diagnostics:
-1. Sign in to the virtual machine (VM), and then open the log file at this location. The plugin version may be newer on your machine.
+1. Sign in to the virtual machine (VM).
+
+1. Open the log file at this location. The plugin version may be newer on your machine.
For VMs: ```
To check the settings that were used to configure Azure Diagnostics:
c:\logs\Plugins\Microsoft.Azure.Diagnostics.PaaSDiagnostics\1.11.3.12\DiagnosticsPlugin.log ```
-1. In the file, you can search for the string **WadCfg** to find the settings that were passed to the VM to configure Azure Diagnostics. You can check to see whether the iKey used by the Profiler sink is correct.
+1. In the file, search for the string `WadCfg` to find the settings that were passed to the VM to configure Azure Diagnostics.
+
+1. Check to see whether the iKey used by the Profiler sink is correct.
-1. Check the command line that's used to start Profiler. The arguments that are used to launch Profiler are in the following file. (The drive could be c: or d: and the directory may be hidden.)
+1. Check the command line that's used to start Profiler. The arguments that are used to launch Profiler are in the following file (the drive could be `c:` or `d:` and the directory may be hidden):
For VMs: ```
To check the settings that were used to configure Azure Diagnostics:
1. Make sure that the iKey on the Profiler command line is correct.
-1. Using the path found in the preceding *config.json* file, check the Profiler log file, called **BootstrapN.log**. It displays the debug information that indicates the settings that Profiler is using. It also displays status and error messages from Profiler.
+1. Using the path found in the preceding *config.json* file, check the Profiler log file, called `BootstrapN.log`. It displays:
+ - The debug information that indicates the settings that Profiler is using.
+ - Status and error messages from Profiler.
- For VMs, the file is here:
+ You can find the file in the following locations:
+
+ For VMs:
``` C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.17.0.6\ApplicationInsightsProfiler ```
To check the settings that were used to configure Azure Diagnostics:
C:\Logs\Plugins\Microsoft.Azure.Diagnostics.IaaSDiagnostics\1.17.0.6\ApplicationInsightsProfiler ```
- If Profiler is running while your application is receiving requests, the following message is displayed: *Activity detected from iKey*.
+1. If Profiler is running while your application is receiving requests, the following message is displayed: *Activity detected from iKey*.
- When the trace is being uploaded, the following message is displayed: *Start to upload trace*.
+1. When the trace is being uploaded, the following message is displayed: *Start to upload trace*.
-
-## Edit network proxy or firewall rules
+### Edit network proxy or firewall rules
If your application connects to the Internet via a proxy or a firewall, you may need to update the rules to communicate with the Profiler service. The IPs used by Application Insights Profiler are included in the Azure Monitor service tag. For more information, see [Service Tags documentation](../../virtual-network/service-tags-overview.md).
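As a rough sketch of what an outbound rule based on that service tag can look like (assuming a network security group controls the app's outbound traffic, and hypothetical `myResourceGroup` and `myNsg` names):

```azurecli
# Allow outbound HTTPS to the AzureMonitor service tag instead of
# maintaining individual Profiler IP ranges.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowAzureMonitorOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureMonitor \
  --destination-port-ranges 443
```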
+## If all else fails...
+
+Submit a support ticket in the Azure portal. Include the correlation ID from the error message.
+
-[profiler-search-telemetry]:./media/profiler-troubleshooting/Profiler-Search-Telemetry.png
-[profiler-webjob]:./media/profiler-troubleshooting/profiler-web-job.png
-[profiler-webjob-log]:./media/profiler-troubleshooting/profiler-web-job-log.png
azure-relay Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/authenticate-application.md
Title: Authenticate from an application - Azure Relay (Preview)
+ Title: Authenticate from an application - Azure Relay
description: This article provides information about authenticating an application with Azure Active Directory to access Azure Relay resources. Previously updated : 06/21/2022 Last updated : 07/22/2022
-# Authenticate and authorize an application with Azure Active Directory to access Azure Relay entities (Preview)
+# Authenticate and authorize an application with Azure Active Directory to access Azure Relay entities
Azure Relay supports using Azure Active Directory (Azure AD) to authorize requests to Azure Relay entities (Hybrid Connections, WCF Relays). With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant permissions to a security principal, which may be a user, group, or application service principal. To learn more about roles and role assignments, see [Understanding the different roles](../role-based-access-control/overview.md).
+> [!NOTE]
+> This feature is generally available in all regions except Microsoft Azure operated by 21Vianet (Azure China).
+ [!INCLUDE [relay-roles](./includes/relay-roles.md)]
Here's the code from the sample that shows how to use Azure AD authentication to
var sender = new HybridConnectionClient(hybridConnectionUri, tokenProvider); ```
+## Samples
+
+- Hybrid Connections: [.NET](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/dotnet/rolebasedaccesscontrol), [Java](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/java/role-based-access-control), [JavaScript](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/node/rolebasedaccesscontrol)
+- WCF Relay: [.NET](https://github.com/Azure/azure-relay/tree/master/samples/wcf-relay/RoleBasedAccessControl)
+
+
## Next steps - To learn more about Azure RBAC, see [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)? - To learn how to assign and manage Azure role assignments with Azure PowerShell, Azure CLI, or the REST API, see these articles:
azure-relay Authenticate Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/authenticate-managed-identity.md
Title: Authenticate with managed identities for Azure Relay resources (preview)
+ Title: Authenticate with managed identities for Azure Relay resources
description: This article describes how to use managed identities to access with Azure Relay resources. Previously updated : 06/21/2022 Last updated : 07/22/2022
-# Authenticate a managed identity with Azure Active Directory to access Azure Relay resources (preview)
+# Authenticate a managed identity with Azure Active Directory to access Azure Relay resources
[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) is a cross-Azure feature that enables you to create a secure identity associated with the deployment under which your application code runs. You can then associate that identity with access-control roles that grant custom permissions for accessing specific Azure resources that your application needs. With managed identities, the Azure platform manages this runtime identity. You don't need to store and protect access keys in your application code or configuration, either for the identity itself or for the resources you need to access. A Relay client app running inside an Azure App Service application, or in a virtual machine with support for managed identities for Azure resources enabled, doesn't need to handle SAS rules and keys, or any other access tokens. The client app only needs the endpoint address of the Relay namespace. When the app connects, Relay binds the managed identity's context to the client in an operation that is shown in an example later in this article. Once it's associated with a managed identity, your Relay client can do all authorized operations. Authorization is granted by associating a managed identity with Relay roles.
+> [!NOTE]
+> This feature is generally available in all regions except Microsoft Azure operated by 21Vianet (Azure China).
+ [!INCLUDE [relay-roles](./includes/relay-roles.md)] ## Enable managed identity
Here's the code from the sample that shows how to use Azure AD authentication to
var sender = new HybridConnectionClient(hybridConnectionUri, tokenProvider); ```
+## Samples
+
+- Hybrid Connections: [.NET](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/dotnet/rolebasedaccesscontrol), [Java](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/java/role-based-access-control), [JavaScript](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/node/rolebasedaccesscontrol)
+- WCF Relay: [.NET](https://github.com/Azure/azure-relay/tree/master/samples/wcf-relay/RoleBasedAccessControl)
+ ## Next steps To learn more about Azure Relay, see the following topics. - [What is Relay?](relay-what-is-it.md)
azure-relay Relay Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-authentication-and-authorization.md
Title: Azure Relay authentication and authorization | Microsoft Docs description: This article provides an overview of Shared Access Signature (SAS) authentication with the Azure Relay service. Previously updated : 06/21/2022 Last updated : 07/22/2022 # Azure Relay authentication and authorization There are two ways to authenticate and authorize access to Azure Relay resources: Azure Active Directory (Azure AD) and Shared Access Signatures (SAS). This article gives you details on using these two types of security mechanisms.
-## Azure Active Directory (Preview)
+## Azure Active Directory
Azure AD integration for Azure Relay resources provides Azure role-based access control (Azure RBAC) for fine-grained control over a clientΓÇÖs access to resources. You can use Azure RBAC to grant permissions to a security principal, which may be a user, a group, or an application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can be used to authorize a request to access an Azure Relay resource. For more information about authenticating with Azure AD, see the following articles:
To access an entity, the client requires a SAS token generated using a specific
SAS authentication support for Azure Relay is included in the Azure .NET SDK versions 2.0 and later. SAS includes support for a [SharedAccessAuthorizationRule](/dotnet/api/microsoft.servicebus.messaging.sharedaccessauthorizationrule). All APIs that accept a connection string as a parameter include support for SAS connection strings.
+## Samples
+
+- Hybrid Connections: [.NET](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/dotnet/rolebasedaccesscontrol), [Java](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/java/role-based-access-control), [JavaScript](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections/node/rolebasedaccesscontrol)
+- WCF Relay: [.NET](https://github.com/Azure/azure-relay/tree/master/samples/wcf-relay/RoleBasedAccessControl)
+ ## Next steps - Continue reading [Service Bus authentication with Shared Access Signatures](../service-bus-messaging/service-bus-sas.md) for more details about SAS.
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
"use-protectedsettings-for-commandtoexecute-secrets": { "level": "warning" },
+ "use-stable-resource-identifier": {
+ "level": "warning"
+ },
"use-stable-vm-image": { "level": "warning" }
azure-resource-manager Linter Rule Outputs Should Not Contain Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-outputs-should-not-contain-secrets.md
output notAPassword string = '...'
``` It is good practice to add a comment explaining why the rule does not apply to this line.+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter Rule Use Protectedsettings For Commandtoexecute Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md
resource customScriptExtension 'Microsoft.HybridCompute/machines/extensions@2019
} } }
-```
+```
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter Rule Use Stable Resource Identifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-stable-resource-identifier.md
+
+ Title: Linter rule - use stable resource identifier
+description: Linter rule - use stable resource identifier
+ Last updated : 07/21/2022++
+# Linter rule - use stable resource identifier
+
+A resource name shouldn't use a non-deterministic value. For example, [`newGuid()`](./bicep-functions-string.md#newguid) or [`utcNow()`](./bicep-functions-date.md#utcnow) can't be used in a resource name, and a resource name can't contain a parameter or variable whose default value uses [`newGuid()`](./bicep-functions-string.md#newguid) or [`utcNow()`](./bicep-functions-date.md#utcnow).
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`use-stable-resource-identifiers`
+
+## Solution
+
+The following example fails this test because `utcNow()` is used in the resource name.
+
+```bicep
+param location string = resourceGroup().location
+
+resource sa 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+ name: 'store${toLower(utcNow())}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ accessTier: 'Hot'
+ }
+}
+```
+
+You can fix it by replacing `utcNow()` with a deterministic value, such as `uniqueString(resourceGroup().id)` in the following example.
+
+```bicep
+param location string = resourceGroup().location
+
+resource sa 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+ name: 'store${uniqueString(resourceGroup().id)}'
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ accessTier: 'Hot'
+ }
+}
+```
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter Rule Use Stable Vm Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-use-stable-vm-image.md
resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = {
} } ```+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter description: Learn how to use Bicep linter. Previously updated : 07/21/2022 Last updated : 07/22/2022 # Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [secure-parameter-default](./linter-rule-secure-parameter-default.md) - [simplify-interpolation](./linter-rule-simplify-interpolation.md) - [use-protectedsettings-for-commandtoexecute-secrets](./linter-rule-use-protectedsettings-for-commandtoexecute-secrets.md)
+- [use-stable-resource-identifier](./linter-rule-use-stable-resource-identifier.md)
- [use-stable-vm-image](./linter-rule-use-stable-vm-image.md) You can customize how the linter rules are applied. To overwrite the default settings, add a **bicepconfig.json** file and apply custom settings. For more information about applying those settings, see [Add custom settings in the Bicep config file](bicep-config-linter.md).
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
Last updated 09/17/2021
# Run command in Azure VMware Solution
-In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#view-the-vcenter-privileges) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
+In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
Azure VMware Solution supports the following operations:
backup Sap Hana Db Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-db-manage.md
Title: Manage backed up SAP HANA databases on Azure VMs description: In this article, learn common tasks for managing and monitoring SAP HANA databases that are running on Azure virtual machines. Previously updated : 02/09/2022 Last updated : 07/22/2022
Backups run in accordance with the policy schedule. You can run a backup on-dema
1. In **Backup center**, select **Backup Instances** from the menu. 2. Select **SAP HANA in Azure VM** as the datasource type, select the VM running the SAP HANA database, and then select **Backup now**.
-3. In **Backup Now**, choose the type of backup you want to perform. Then select **OK**. This backup will be retained according to the policy associated with this backup item.
-4. Monitor the portal notifications. To monitor the job progress, go to **Backup center** -> **Backup Jobs** and filter for jobs with status **In Progress**. Depending on the size of your database, creating the initial backup may take a while.
+3. In **Backup Now**, choose the type of backup you want to perform. Then select **OK**. This backup will be retained for 45 days.
-By default, the retention of on-demand backups is 45 days.
+ By default, the retention of on-demand backups is set to 45 days.
+
+1. Monitor the portal notifications. To monitor the job progress, go to **Backup center** -> **Backup Jobs** and filter for jobs with status **In Progress**. Depending on the size of your database, creating the initial backup may take a while.
### HANA native client integration
backup Tutorial Sap Hana Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-backup-cli.md
Title: Tutorial - SAP HANA DB backup on Azure using Azure CLI description: In this tutorial, learn how to back up SAP HANA databases running on an Azure VM to an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 07/05/2022 Last updated : 07/22/2022
To get container name, run the following command. [Learn about this CLI command]
## Trigger an on-demand backup
-While the section above details how to configure a scheduled backup, this section talks about triggering an on-demand backup. To do this, we use the [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now) cmdlet.
+While the section above details how to configure a scheduled backup, this section talks about triggering an on-demand backup. To do this, we use the [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now) command.
>[!NOTE]
-> The retention policy of an on-demand backup is determined by the underlying retention policy for the database.
+> By default, the retention of on-demand backups is set to 45 days.
```azurecli-interactive az backup protection backup-now --resource-group saphanaResourceGroup \
cognitive-services Batch Anomaly Detection Powerbi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/batch-anomaly-detection-powerbi.md
Invoke the query on your data sheet by selecting `Sheet1` below **Enter Paramete
![An image of the invoke function](../media/tutorials/invoke-function-screenshot.png)
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
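As a minimal sketch of that approach (assuming a hypothetical vault named `my-keyvault` and secret named `anomaly-detector-key`), you can keep the key in Key Vault and read it at run time instead of embedding it in your query or code:

```azurecli
# Store the Anomaly Detector key as a Key Vault secret (hypothetical names).
az keyvault secret set \
  --vault-name my-keyvault \
  --name anomaly-detector-key \
  --value "<your-anomaly-detector-key>"

# Retrieve the key later, for example from a deployment script.
az keyvault secret show \
  --vault-name my-keyvault \
  --name anomaly-detector-key \
  --query value \
  --output tsv
```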
+ ## Data source privacy and authentication > [!NOTE]
cognitive-services Learn Multivariate Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/tutorials/learn-multivariate-anomaly-detection.md
except Exception as e:
Response code `201` indicates a successful request.
+> [!IMPORTANT]
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../key-vault/general/overview.md). See the Cognitive Services [security](../../cognitive-services-security.md) article for more information.
+ [!INCLUDE [mvad-input-params](../includes/mvad-input-params.md)] ## 4. Get model status
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
To improve accuracy, customization is available for some languages and base mode
| Bulgarian (Bulgaria) | `bg-BG` | | Burmese (Myanmar) | `my-MM` | | Catalan (Spain) | `ca-ES` |
+| Chinese (Cantonese, Simplified) | `yue-CN` |
| Chinese (Cantonese, Traditional) | `zh-HK` | | Chinese (Mandarin, Simplified) | `zh-CN` |
-| Chinese (Taiwanese Mandarin) | `zh-TW` |
+| Chinese (Southwestern Mandarin, Simplified) | `zh-CN-sichuan` |
+| Chinese (Taiwanese Mandarin, Traditional) | `zh-TW` |
+| Chinese (Wu, Simplified) | `wuu-CN` |
| Croatian (Croatia) | `hr-HR` | | Czech (Czech) | `cs-CZ` | | Danish (Denmark) | `da-DK` |
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Sometimes text-to-speech can't accurately pronounce a word. Examples might be th
The custom lexicon currently supports UTF-8 encoding. > [!NOTE]
-> At this time, the custom lexicon isn't supported for five voices: et-EE-AnuNeural, ga-IE-OrlaNeural, lt-LT-OnaNeural, lv-LV-EveritaNeural, and mt-MT-GarceNeural.
-
+> The custom lexicon feature may not work for some new locales.
**Syntax**
cognitive-services Cognitive Services Support Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-support-options.md
If you can't find an answer to your problem using search, submit a new question
* [Anomaly Detector](/answers/topics/azure-anomaly-detector.html) * [Content Moderator](/answers/topics/azure-content-moderator.html)
-* [Metrics Advisor (preview)]()
+* [Metrics Advisor](/answers/topics/148981/azure-metrics-advisor.html)
* [Personalizer](/answers/topics/azure-personalizer.html) **Azure OpenAI**
If you do submit a new question to Stack Overflow, please use one or more of the
* [Anomaly Detector](https://stackoverflow.com/search?q=azure+anomaly+detector) * [Content Moderator](https://stackoverflow.com/search?q=azure+content+moderator)
-* [Metrics Advisor (preview)](https://stackoverflow.com/search?q=azure+metrics+advisor)
+* [Metrics Advisor](https://stackoverflow.com/search?q=azure+metrics+advisor)
* [Personalizer](https://stackoverflow.com/search?q=azure+personalizer) **Azure OpenAI**
To request new features, post them on https://feedback.azure.com. Share your ide
* [Anomaly Detector](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858) * [Content Moderator](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858)
-* [Metrics Advisor (preview)](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858)
+* [Metrics Advisor](https://feedback.azure.com/d365community/search/?q=%22Metrics+Advisor%22)
* [Personalizer](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858?c=6c8853b4-0b25-ec11-b6e6-000d3a4f0858) ## Stay informed
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-endpoint.md
You can use Azure Communication Services and Graph API to integrate communication as Teams user into your products to communicate with other people in and outside your organization. With Azure Communication Services supporting Teams identities and Graph API, you can customize a voice, video, chat, and screen-sharing experience for Teams users.
-You can use the Azure Communication Services Identity SDK to exchange Azure Active Directory (Azure AD) access tokens of Teams users for Communication Identity access tokens. The diagrams in the next sections demonstrate multitenant use cases, where fictional company Fabrikam is the customer of fictional company Contoso.
+You can use the Azure Communication Services Identity SDK to exchange Azure Active Directory (Azure AD) access tokens of Teams users for Communication Identity access tokens. The diagrams in the next sections demonstrate multitenant use cases, where fictional company Fabrikam is the customer of fictional company Contoso. Contoso builds a multitenant SaaS product that Fabrikam's administrator purchases for its employees.
## Calling
-Voice, video, and screen-sharing capabilities are provided via Azure Communication Services Calling SDKs. The following diagram shows an overview of the process you'll follow as you integrate your calling experiences with Azure Communication Services support Teams identities.
+Voice, video, and screen-sharing capabilities are provided via [Azure Communication Services Calling SDKs](./interop/teams-user-calling.md). The following diagram shows an overview of the process you'll follow as you integrate your calling experiences with Azure Communication Services support for Teams identities. You can learn more about [authentication](./interop/custom-teams-endpoint-authentication-overview.md) and the [packages used](../quickstarts/manage-teams-identity.md).
![Diagram of the process to integrate the calling capabilities into your product with Azure Communication Services.](./media/teams-identities/teams-identity-calling-overview.svg)
Find more details in [Azure Active Directory documentation](../../active-directo
## Next steps > [!div class="nextstepaction"]
+> [Check use cases for communication as a Teams user](./interop/custom-teams-endpoint-use-cases.md)
> [Issue a Teams access token](../quickstarts/manage-teams-identity.md)
+> [Start a call with Teams user as a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
-Learn about [Teams interoperability](./teams-interop.md).
+Learn about [Teams interoperability](./teams-interop.md).
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md
Azure Communication Services can be used to build custom applications and experi
Azure Communication Services supports two types of Teams interoperability depending on the identity of the user: -- **[Bring your own identity (BYOI)](#bring-your-own-identity).** You control user authentication and users of your custom applications don't need to have Azure Active Directory identities or Teams licenses. This model allows you to build custom applications for non-Teams users to connect and communicate with Teams users.
+- **[Guest/Bring your own identity (BYOI)](#guestbring-your-own-identity).** You control user authentication and users of your custom applications don't need to have Azure Active Directory identities or Teams licenses. This model allows you to build custom applications for non-Teams users to connect and communicate with Teams users.
- **[Teams identity](#teams-identity).** User authentication is controlled by Azure Active Directory and users of your custom application must have Teams licenses. This model allows you to build custom applications for Teams users to enable specialized workflows or experiences that are not possible with the existing Teams clients. Applications can implement both authentication models and leave the choice of authentication up to the user. The following table compares two models:
Applications can implement both authentication models and leave the choice of au
\* Server logic issuing access tokens can perform any custom authentication and authorization of the request.
-## Bring your own identity
+## Guest/Bring your own identity
The bring your own identity (BYOI) authentication model allows you to build custom applications for non-Teams users to connect and communicate with Teams users. You control user authentication and users of your custom applications don't need to have Azure Active Directory identities or Teams licenses. The first scenario that has been enabled allows users of your application to join Microsoft Teams meetings as external accounts, similar to [anonymous users that join meetings](/microsoftteams/meeting-settings-in-teams#allow-anonymous-users-to-join-meetings) using the Teams web application. This is ideal for business-to-consumer applications that bring together employees (familiar with Teams) and external users (using a custom application) into a meeting experience. In the future, we will be enabling additional scenarios including direct calling and chat which will allow your application to initiate calls and chats with Teams users outside the context of a Teams meeting.
It is currently not possible for a Teams user to join a call that was initiated
## Teams identity
-Developers can use Communication Services Calling SDK with Teams identity to build custom applications for Teams users. Custom applications can enable specialized workflows for Teams users such as management of incoming and outgoing PSTN calls or bring Teams calling experience into devices that are not supported with the standard Teams client. Teams identities are authenticated by Azure Active Directory, and all attributes and details about the user are bound to their Azure Active Directory account.
+Developers can use [Communication Services Calling SDK with Teams identity](./interop/teams-user-calling.md) to build custom applications for Teams users. Custom applications can enable specialized workflows for Teams users, such as managing incoming and outgoing PSTN calls or bringing the Teams calling experience to devices that aren't supported by the standard Teams client. Teams identities are authenticated by Azure Active Directory, and all attributes and details about the user are bound to their Azure Active Directory account.
When a Communication Services endpoint connects to a Teams meeting or Teams call using a Teams identity, the endpoint is treated like a Teams user with a Teams client and the experience is driven by policies assigned to users within and outside of the organization. Teams users can join Teams meetings, place calls to other Teams users, receive calls from phone numbers, transfer an ongoing call to the Teams call queue or share screen.
There are several ways that users can join a Teams meeting:
- Via Teams clients as authenticated **Teams users**. This includes the desktop, mobile, and web Teams clients. - Via Teams clients as unauthenticated **Anonymous users**. -- Via custom Communication Services applications as **BYOI users** using the bring your own identity authentication model.
+- Via custom Communication Services applications as **Guest/BYOI users** using the bring your own identity authentication model.
- Via custom Communication Services applications as **Teams users** using the Teams identity authentication model. ![Overview of multiple interoperability scenarios within Azure Communication Services](./media/teams-identities/teams-interop-overview-v2.png)
Interoperability between Azure Communication Services and Microsoft Teams enable
Microsoft will indicate to you via the Azure Communication Services API that recording or transcription has commenced and you must communicate this fact, in real-time, to your users within your application's user interface. You agree to indemnify Microsoft for all costs and damages incurred as a result of your failure to comply with this obligation. ## Pricing
-All usage of Azure Communication Service APIs and SDKs increments [Azure Communication Service billing meters](https://azure.microsoft.com/pricing/details/communication-services/). Interactions with Microsoft Teams, such as joining a meeting or initiating a phone call using a Teams allocated number, will increment these meters but there is no additional fee for the Teams interoperability capability itself, and there is no pricing distinction between the BYOI and Microsoft 365 authentication options.
+All usage of Azure Communication Service APIs and SDKs increments [Azure Communication Service billing meters](https://azure.microsoft.com/pricing/details/communication-services/). Interactions with Microsoft Teams, such as joining a meeting or initiating a phone call using a Teams allocated number, will increment these meters but there is no additional fee for the Teams interoperability capability itself, and there is no pricing distinction between the Guest/BYOI and Microsoft 365 authentication options.
If your Azure application has a user spend 10 minutes in a meeting with a user of Microsoft Teams, those two users combined consumed 20 calling minutes. The 10 minutes exercised through the custom application and using Azure APIs and SDKs will be billed to your resource. However, the 10 minutes consumed by the user in the native Teams application is covered by the applicable Teams license and is not metered by Azure.
Azure Communication Services interoperability isn't compatible with Teams deploy
## Next steps > [!div class="nextstepaction"]
-> [Enable Teams access tokens](../quickstarts/manage-teams-identity.md)
+> [Get access tokens for Guest/BYOI](../quickstarts/access-tokens.md)
+> [Join Teams meeting call as a Guest/BYOI](../quickstarts/voice-video-calling/get-started-teams-interop.md)
+> [Join Teams meeting chat as a Guest/BYOI](../quickstarts/chat/meeting-interop.md)
+> [Get access tokens for Teams users](../quickstarts/manage-teams-identity.md)
+> [Make a call as a Teams user to a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
In this quickstart, you learned how to:
> * Use the Microsoft Authentication Library (MSAL) to issue an Azure AD user token. > * Use the Communication Services Identity SDK to exchange the Azure AD user token for an access token of Teams user. +
+> [!div class="nextstepaction"]
+> [Make a call as a Teams user to a Teams user](../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
+> [Check use cases for communication as a Teams user](../concepts/interop/custom-teams-endpoint-use-cases.md)
+ Learn about the following concepts: - [Azure Communication Services support Teams identities](../concepts/teams-endpoint.md) - [Teams interoperability](../concepts/teams-interop.md)++
communication-services Define Media Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/media-composition/define-media-composition.md
+
+ Title: Quickstart - Introducing the Media Composition inputs, layouts, and outputs
+
+description: In this quickstart, you'll learn about the different Media Composition inputs, layouts, and outputs.
+++++ Last updated : 7/15/2022++++
+# Quickstart: Introducing the Media Composition inputs, layouts, and outputs
+
+Azure Communication Services Media Composition is made up of three parts: inputs, layouts, and outputs. Follow this document to learn more about the options available and how to define each of the parts.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).
+
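If you'd rather script the resource prerequisite than use the portal, here's a minimal sketch. It assumes the Azure CLI `communication` extension is installed and uses hypothetical names; the `--data-location` value shown is only an example of the expected format, so check the allowed values for your region.

```azurecli
# Assumption: the "communication" extension is installed
# (az extension add --name communication).
az communication create \
  --name my-acs-resource \
  --resource-group myResourceGroup \
  --location Global \
  --data-location "United States"
```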
+## Inputs
+To retrieve the media sources that will be used in the layout composition, you'll need to define inputs. Inputs can be either multi-source or single source.
+
+### Multi-source inputs
+Teams meetings, ACS calls and ACS Rooms are usually made up of multiple individuals. We define these as multi-source inputs. They can be used in layouts as a single input or destructured to reference a single participant.
+
+ACS Group Call json:
+```json
+{
+ "inputs": {
+ "groupCallInput": {
+ "kind": "groupCall",
+ "id": "5a22165a-f952-4a56-8009-6d39b8868971"
+ }
+ }
+}
+```
+
+Teams Meeting Input json:
+```json
+{
+ "inputs": {
+ "teamsMeetingInput": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ }
+ }
+}
+```
+
+ACS Rooms Input json:
+```json
+{
+ "inputs": {
+ "roomCallInput": {
+ "kind": "room",
+ "id": "050298"
+ }
+ }
+}
+```
+
+### Single source inputs
+Unlike multi-source inputs, single source inputs reference a single media source. If the single source input is from a multi-source input such as an ACS group call or Teams meeting, it will reference the multi-source input's ID in the `call` property. The following are examples of single source inputs:
+
+Participant json:
+```json
+{
+ "inputs": {
+ "teamsMeetingInput": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ },
+ "participantInput": {
+ "kind": "participant",
+ "call": "teamsMeetingInput",
+ "id": {
+ "microsoftTeamsUser": {
+ "id": "f3ba9014-6dca-4456-8ec0-fa03cfa2b7b7"
+ }
+ }
+ }
+ }
+}
+```
+
+Active Presenter json:
+```json
+{
+ "inputs": {
+ "teamsMeetingInput": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ },
+ "activePresenterInput": {
+ "kind": "activePresenter",
+ "call": "teamsMeetingInput"
+ }
+ }
+}
+```
+
+Dominant Speaker json:
+```json
+{
+ "inputs": {
+ "teamsMeetingInput": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ },
+ "dominantSpeakerInput": {
+ "kind": "dominantSpeaker",
+ "call": "teamsMeetingInput"
+ }
+ }
+}
+```
+
+## Layouts
+Media Composition supports several layouts. These include grid, auto grid, presentation, presenter, and custom.
+
+### Grid
+The grid layout will compose the specified media sources into a grid with a constant number of cells. You can customize the number of rows and columns in the grid as well as specify the media source that should be placed in each cell of the grid.
+
+Sample grid layout json:
+```json
+{
+ "layout": {
+ "kind": "grid",
+ "rows": 2,
+ "columns": 2,
+ "inputIds": [
+ ["active", "jill"],
+ ["jon", "janet"]
+ ]
+ },
+ "inputs": {
+ "meeting": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ },
+ "active": {
+ "kind": "dominantSpeaker",
+ "call": "meeting"
+ },
+ "jill": {
+ "kind": "participant",
+ "call": "meeting",
+ "placeholderImageUri": "https://imageendpoint",
+ "id": {
+ "microsoftTeamsUser": {
+ "id": "f3ba9014-6dca-4456-8ec0-fa03cfa2b7f4"
+ }
+ }
+ },
+ "jon": {
+ "kind": "participant",
+ "call": "meeting",
+ "placeholderImageUri": "https://imageendpoint",
+ "id": {
+ "microsoftTeamsUser": {
+ "id": "36f49257-c7de-4d64-97f5-e507bdb3323e"
+ }
+ }
+ },
+ "janet": {
+ "kind": "participant",
+ "call": "meeting",
+ "placeholderImageUri": "https://imageendpoint",
+ "id": {
+ "microsoftTeamsUser": {
+ "id": "e94d0030-ac38-4111-a87f-07884b565b14"
+ }
+ }
+ }
+ },
+ "outputs": {
+ "meeting": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ }
+ }
+}
+```
+The sample grid layout json will take the dominant speaker and put it in the first cell. Then `jill`, `jon`, and `janet` will fill the next three cells:
+
+If only three participants are defined in the inputs, then the fourth cell will be left blank.
+
+### Auto grid
+The auto grid layout is ideal for a multi-source scenario where you want to display all sources in the scene. This layout should be the default multi-source scene and would adjust based on the number of sources.
+
+Sample auto grid layout json:
+```json
+{
+ "layout": {
+ "kind": "autoGrid",
+ "inputIds": ["meeting"],
+ "highlightDominantSpeaker": true
+ },
+ "inputs": {
+ "meeting": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ }
+ },
+ "outputs": {
+ "meeting": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ }
+ }
+}
+```
+The sample auto grid layout will take all the media sources in the `meeting` input and compose them into an optimized grid:
+
+### Presentation
+The presentation layout features the presenter that covers the majority of the scene. The other sources are the audience members and are arranged in either a row or column in the remaining space. The position of the audience can be one of: `top`, `bottom`, `left`, or `right`.
+
+Sample presentation layout json:
+```json
+{
+ "layout": {
+ "kind": "presentation",
+ "presenterId": "presenter",
+ "audienceIds": ["meeting:not('presenter')"],
+ "audiencePosition": "top"
+ },
+ "inputs": {
+ "meeting": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ },
+ "presenter": {
+ "kind": "participant",
+ "call": "meeting",
+ "placeholderImageUri": "https://imageendpoint",
+ "id": {
+ "microsoftTeamsUser": {
+ "id": "f3ba9014-6dca-4456-8ec0-fa03cfa2b7f4"
+ }
+ }
+ }
+ },
+ "outputs": {
+ "meeting": {
+ "teamsMeeting": {
+ "teamsJoinUrl": "https://teamsjoinurl"
+ }
+ }
+ }
+}
+```
+
+The sample presentation layout will feature the `presenter` and place the rest of the audience members at the top of the scene:
+
+### Presenter
+The presenter layout is a picture-in-picture layout composed of two inputs. One source is the background of the scene. This represents the content being presented or the main presenter. The secondary source is the support and is cropped and positioned at a corner of the scene. The support position can be one of: `bottomLeft`, `bottomRight`, `topLeft`, or `topRight`.
+
+Sample presenter layout json:
+```json
+{
+ "layout": {
+ "kind": "presenter",
+ "presenterId": "presenter",
+ "supportId": "support",
+ "supportPosition": "topLeft",
+ "supportAspectRatio": 3/2
+ },
+ "inputs": {
+ "meeting": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ },
+ "presenter": {
+ "kind": "participant",
+ "call": "meeting",
+ "placeholderImageUri": "https://imageendpoint",
+ "id": {
+ "microsoftTeamsUser": {
+ "id": "f3ba9014-6dca-4456-8ec0-fa03cfa2b7f4"
+ }
+ }
+ },
+ "support": {
+ "kind": "participant",
+ "call": "meeting",
+ "placeholderImageUri": "https://imageendpoint",
+ "id": {
+ "microsoftTeamsUser": {
+ "id": "36f49257-c7de-4d64-97f5-e507bdb3323e"
+ }
+ }
+ }
+ },
+ "outputs": {
+ "meeting": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl"
+ }
+ }
+}
+```
+
+The sample presenter layout will feature the `presenter` media source, which takes most of the scene. The support media source will be cropped according to the `supportAspectRatio` and placed at the position specified, which is `topLeft`.
+
+### Custom
+If none of the pre-defined layouts fit your needs, then you can use custom layouts to fit your exact scenario. With custom layouts, you can define sources with different sizes and place them at any position on the scene.
+
+```json
+{
+ "layout": {
+ "kind": "custom",
+ "inputGroups": {
+ "main": {
+ "position": {
+ "x": 0,
+ "y": 0
+ },
+ "width": "100%",
+ "height": "100%",
+ "rows": 2,
+ "columns": 2,
+ "inputIds": [ [ "meeting:not('active')" ] ]
+ },
+ "overlay": {
+ "position": {
+ "x": 480,
+ "y": 270
+ },
+ "width": "50%",
+ "height": "50%",
+ "layer": "overlay",
+ "inputIds": [[ "active" ]]
+ }
+ },
+ "layers": {
+ "overlay": {
+ "zIndex": 2,
+ "visibility": "visible"
+ }
+ }
+ },
+ "inputs": {
+ "active": {
+ "kind": "dominantSpeaker",
+ "call": "meeting"
+ },
+ "meeting": {
+ "teamsMeeting": {
+ "teamsJoinUrl": "https://teamsjoinurl"
+ }
+ }
+ },
+ "outputs": {
+ "meeting": {
+ "teamsMeeting": {
+ "teamsJoinUrl": "https://teamsjoinurl"
+ }
+ }
+ }
+}
+```
+
+The custom layout example above will result in the following composition:
+
+## Outputs
+After media has been composed according to a layout, it can be output to your audience in various ways. Currently, you can send the composed stream either to a call or to an RTMP server.
+
+ACS Group Call json:
+```json
+{
+ "outputs": {
+ "groupCallOutput": {
+ "kind": "groupCall",
+ "id": "CALL_ID"
+ }
+ }
+}
+```
+
+Teams Meeting Output json:
+```json
+{
+ "outputs": {
+ "teamsMeetingOutput": {
+ "kind": "teamsMeeting",
+ "teamsJoinUrl": "https://teamsjoinurl",
+ }
+ }
+}
+```
+
+ACS Rooms Output json:
+```json
+{
+ "outputs": {
+ "roomCallOutput": {
+ "kind": "room",
+ "id": "ROOM_ID"
+ }
+ }
+}
+```
+
+RTMP Output json
+```json
+{
+ "outputs": {
+ "rtmpOutput": {
+ "kind": "rtmp",
+ "streamUrl": "rtmp://rtmpendpoint",
+ "streamKey": "STREAM_KEY",
+ "resolution": {
+ "width": 1920,
+ "height": 1080
+ },
+ "mode": "push"
+ }
+ }
+}
+```
+
+## Next steps
+
+In this section you learned how to:
+> [!div class="checklist"]
+> - Create a multi-source or single source input
+> - Create various predefined and custom layouts
+> - Create an output
+
+You may also want to:
+ - Learn about [media composition concept](../../concepts/voice-video-calling/media-comp.md)
+<!-- -->
communication-services Get Started With Voice Video Calling Custom Teams Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md
Title: Quickstart - Add voice & video calling to custom Teams client
+ Title: Quickstart - Add voice & video calling as a Teams user to an app
-description: In this quickstart, you'll learn how to add voice & video-calling capabilities to your custom Teams app using Azure Communication Services.
+description: In this quickstart, you'll learn how to add voice & video-calling capabilities as a Teams user to your app using Azure Communication Services.
Last updated 12/1/2021
-# QuickStart: Add 1:1 video calling to your customized Teams application
+# QuickStart: Add 1:1 video calling as a Teams user to your application
[!INCLUDE [Public Preview](../../../communication-services/includes/public-preview-include-document.md)]
container-registry Container Registry Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-concepts.md
A basic manifest for a Linux `hello-world` image looks similar to the following:
} ```
-You can list the manifests for a repository with the Azure CLI command [az acr repository show-manifests][az-acr-repository-show-manifests]:
+You can list the manifests for a repository with the Azure CLI command [az acr manifest list-metadata][az-acr-manifest-list-metadata]:
```azurecli
-az acr repository show-manifests --name <acrName> --repository <repositoryName>
+az acr manifest list-metadata --name <repositoryName> --registry <acrName>
``` For example, list the manifests for the "acr-helloworld" repository: ```azurecli
-az acr repository show-manifests --name myregistry --repository acr-helloworld
+az acr manifest list-metadata --name acr-helloworld --registry myregistry
``` ```output
Learn more about [registry storage](container-registry-storage.md) and [supporte
Learn how to [push and pull images](container-registry-get-started-docker-cli.md) from Azure Container Registry. <!-- LINKS - Internal -->
-[az-acr-repository-show-manifests]: /cli/azure/acr/repository#az_acr_repository_show_manifests
+[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata
container-registry Container Registry Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-delete.md
A [manifest digest](container-registry-concepts.md#manifest-digest) can be assoc
To delete by digest, first list the manifest digests for the repository containing the images you wish to delete. For example: ```azurecli
-az acr repository show-manifests --name myregistry --repository acr-helloworld
+az acr manifest list-metadata --name acr-helloworld --registry myregistry
``` ```output
To maintain the size of a repository or registry, you might need to periodically
The following Azure CLI command lists all manifest digests in a repository older than a specified timestamp, in ascending order. Replace `<acrName>` and `<repositoryName>` with values appropriate for your environment. The timestamp could be a full date-time expression or a date, as in this example. ```azurecli
-az acr repository show-manifests --name <acrName> --repository <repositoryName> \
orderby time_asc -o tsv --query "[?timestamp < '2019-04-05'].[digest, timestamp]"
+az acr manifest list-metadata --name <repositoryName> --registry <acrName> \
+ --orderby time_asc -o tsv --query "[?timestamp < '2019-04-05'].[digest, timestamp]"
``` After identifying stale manifest digests, you can run the following Bash script to delete manifest digests older than a specified timestamp. It requires the Azure CLI and **xargs**. By default, the script performs no deletion. Change the `ENABLE_DELETE` value to `true` to enable image deletion.
TIMESTAMP=2019-04-05
if [ "$ENABLE_DELETE" = true ] then
- az acr repository show-manifests --name $REGISTRY --repository $REPOSITORY \
+ az acr manifest list-metadata --name $REPOSITORY --registry $REGISTRY \
--orderby time_asc --query "[?timestamp < '$TIMESTAMP'].digest" -o tsv \ | xargs -I% az acr repository delete --name $REGISTRY --image $REPOSITORY@% --yes else echo "No data deleted." echo "Set ENABLE_DELETE=true to enable deletion of these images in $REPOSITORY:"
- az acr repository show-manifests --name $REGISTRY --repository $REPOSITORY \
- az acr manifest list-metadata --name $REPOSITORY --registry $REGISTRY \
--orderby time_asc --query "[?timestamp < '$TIMESTAMP'].[digest, timestamp]" -o tsv fi ```
As mentioned in the [Manifest digest](container-registry-concepts.md#manifest-di
1. Check manifests for repository *acr-helloworld*: ```azurecli
- az acr repository show-manifests --name myregistry --repository acr-helloworld
+ az acr manifest list-metadata --name acr-helloworld --registry myregistry
```
As mentioned in the [Manifest digest](container-registry-concepts.md#manifest-di
1. Check manifests for repository *acr-helloworld*: ```azurecli
- az acr repository show-manifests --name myregistry --repository acr-helloworld
- az acr manifest list-metadata --name acr-helloworld --registry myregistry
``` ```output
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
Output is similar to:
} ```
-Run the [az acr repository show-manifests][az-acr-repository-show-manifests] command to see details of the chart stored in the repository. For example:
+Run the [az acr manifest list-metadata][az-acr-manifest-list-metadata] command to see details of the chart stored in the repository. For example:
```azurecli az acr manifest list-metadata \
helm repo remove $ACR_NAME
[az-acr-repository]: /cli/azure/acr/repository [az-acr-repository-show]: /cli/azure/acr/repository#az_acr_repository_show [az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete
-[az-acr-repository-show-manifests]: /cli/azure/acr/repository#az_acr_repository_show_manifests
+[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata
[acr-tasks]: container-registry-tasks-overview.md
container-registry Container Registry Image Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-lock.md
az acr repository update \
### Lock an image by manifest digest
-To lock a *myimage* image identified by manifest digest (SHA-256 hash, represented as `sha256:...`), run the following command. (To find the manifest digest associated with one or more image tags, run the [az acr repository show-manifests][az-acr-repository-show-manifests] command.)
+To lock a *myimage* image identified by manifest digest (SHA-256 hash, represented as `sha256:...`), run the following command. (To find the manifest digest associated with one or more image tags, run the [az acr manifest list-metadata][az-acr-manifest-list-metadata] command.)
```azurecli az acr repository update \
To see the attributes set for an image version or repository, use the [az acr re
For details about delete operations, see [Delete container images in Azure Container Registry][container-registry-delete]. <!-- LINKS - Internal -->
+[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata
[az-acr-repository-update]: /cli/azure/acr/repository#az_acr_repository_update [az-acr-repository-show]: /cli/azure/acr/repository#az_acr_repository_show
-[az-acr-repository-show-manifests]: /cli/azure/acr/repository#az_acr_repository_show_manifests
[azure-cli]: /cli/azure/install-azure-cli [container-registry-delete]: container-registry-delete.md
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md
az acr import \
--image hello-world:latest ```
-You can verify that multiple manifests are associated with this image by running the `az acr repository show-manifests` command:
+You can verify that multiple manifests are associated with this image by running the [az acr manifest list-metadata](/cli/azure/acr/manifest#az-acr-manifest-list-metadata) command:
```azurecli
-az acr repository show-manifests \
- --name myregistry \
- --repository hello-world
+az acr manifest list-metadata \
+ --name hello-world \
+ --registry myregistry
``` To import an artifact by digest without adding a tag:
container-registry Container Registry Oras Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md
az acr repository show-tags \
A repository can have a list of manifests that are both tagged and untagged ```azurecli
-az acr repository show-manifests \
- -n $ACR_NAME \
- --repository $REPO \
- --detail -o jsonc
+az acr manifest list-metadata \
+ --name $REPO \
+ --registry $ACR_NAME \
+ --output jsonc
``` Note the container image manifests have `"tags":`
az acr repository delete \
### View the remaining manifests ```azurecli
-az acr repository show-manifests \
- -n $ACR_NAME \
- --repository $REPO \
+az acr manifest list-metadata \
+ --name $REPO \
+ --registry $ACR_NAME \
--detail -o jsonc ```
container-registry Container Registry Repository Scoped Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repository-scoped-permissions.md
The authentication method depends on the configured action or actions associated
|`content/delete` | `az acr repository delete` in Azure CLI<br/><br/>Example: `az acr repository delete --name myregistry --repository myrepo --username MyToken --password xxxxxxxxxx`| |`content/read` | `docker login`<br/><br/>`az acr login` in Azure CLI<br/><br/>Example: `az acr login --name myregistry --username MyToken --password xxxxxxxxxx` | |`content/write` | `docker login`<br/><br/>`az acr login` in Azure CLI |
- |`metadata/read` | `az acr repository show`<br/><br/>`az acr repository show-tags`<br/><br/>`az acr repository show-manifests` in Azure CLI |
+ |`metadata/read` | `az acr repository show`<br/><br/>`az acr repository show-tags`<br/><br/>`az acr manifest list-metadata` in Azure CLI |
|`metadata/write` | `az acr repository untag`<br/><br/>`az acr repository update` in Azure CLI | ## Examples: Use token
az acr scope-map update \
To update the scope map using the portal, see the [previous section](#update-token-permissions).
-To read metadata in the `samples/hello-world` repository, run the [az acr repository show-manifests][az-acr-repository-show-manifests] or [az acr repository show-tags][az-acr-repository-show-tags] command.
+To read metadata in the `samples/hello-world` repository, run the [az acr manifest list-metadata][az-acr-manifest-list-metadata] or [az acr repository show-tags][az-acr-repository-show-tags] command.
To read metadata, pass the token's name and password to either command. The following example uses the environment variables created earlier in the article:
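For instance, a minimal sketch of reading metadata with token credentials might look like the following. The `TOKEN_NAME` and `TOKEN_PWD` variables here are placeholders standing in for the environment variables that hold the token name and password:
```azurecli
# List manifest metadata in the repository by authenticating with the token credentials
az acr manifest list-metadata \
  --registry myregistry \
  --name samples/hello-world \
  --username $TOKEN_NAME \
  --password $TOKEN_PWD
```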
In the portal, select the token in the **Tokens (Preview)** screen, and select *
<!-- LINKS - Internal --> [az-acr-login]: /cli/azure/acr#az_acr_login
+[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata
[az-acr-repository]: /cli/azure/acr/repository/ [az-acr-repository-show-tags]: /cli/azure/acr/repository/#az_acr_repository_show_tags
-[az-acr-repository-show-manifests]: /cli/azure/acr/repository/#az_acr_repository_show_manifests
[az-acr-repository-delete]: /cli/azure/acr/repository/#az_acr_repository_delete [az-acr-scope-map]: /cli/azure/acr/scope-map/ [az-acr-scope-map-create]: /cli/azure/acr/scope-map/#az_acr_scope_map_create
container-registry Container Registry Retention Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-retention-policy.md
If you enable the preceding policy with a retention period of 0 days, you can qu
az acr repository untag \ --name myregistry --image hello-world:latest ```
-1. Within a few seconds, the untagged manifest is deleted. You can verify the deletion by listing manifests in the repository, for example, using the [az acr repository show-manifests][az-acr-repository-show-manifests] command. If the test image was the only one in the repository, the repository itself is deleted.
+1. Within a few seconds, the untagged manifest is deleted. You can verify the deletion by listing manifests in the repository, for example, using the [az acr manifest list-metadata][az-acr-manifest-list-metadata] command. If the test image was the only one in the repository, the repository itself is deleted.
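For example, a quick check along these lines (with placeholder registry and repository names) should return an empty result once the untagged manifest has been purged:
```azurecli
# List remaining manifests in the repository; an empty result confirms the deletion
az acr manifest list-metadata \
  --registry myregistry \
  --name hello-world \
  --output table
```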
### Manage a retention policy
You can also set a registry's retention policy in the [Azure portal](https://por
[azure-cli]: /cli/azure/install-azure-cli [az-acr-config-retention-update]: /cli/azure/acr/config/retention#az_acr_config_retention_update [az-acr-config-retention-show]: /cli/azure/acr/config/retention#az_acr_config_retention_show
+[az-acr-manifest-list-metadata]: /cli/azure/acr/manifest#az-acr-manifest-list-metadata
[az-acr-repository-untag]: /cli/azure/acr/repository#az_acr_repository_untag
-[az-acr-repository-show-manifests]: /cli/azure/acr/repository#az_acr_repository_show_manifests
container-registry Push Multi Architecture Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/push-multi-architecture-images.md
A basic manifest for a Linux `hello-world` image looks similar to the following:
} ```
-You can view a manifest in Azure Container Registry using the Azure portal or tools such as the [az acr repository show-manifests](/cli/azure/acr/repository#az-acr-repository-show-manifests) command in the Azure CLI.
+You can view a manifest in Azure Container Registry using the Azure portal or tools such as the [az acr manifest list-metadata](/cli/azure/acr/manifest#az-acr-manifest-list-metadata) command in the Azure CLI.
### Manifest list
You can view a manifest list using the `docker manifest inspect` command. The fo
} ```
-When a multi-arch manifest list is stored in Azure Container Registry, you can also view the manifest list using the Azure portal or with tools such as the [az acr repository show-manifests](/cli/azure/acr/repository#az-acr-repository-how-manifests) command.
+When a multi-arch manifest list is stored in Azure Container Registry, you can also view the manifest list using the Azure portal or with tools such as the [az acr manifest list-metadata](/cli/azure/acr/manifest#az-acr-manifest-list-metadata) command.
## Import a multi-arch image
cosmos-db Bulk Executor Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/bulk-executor-graph-dotnet.md
Title: Bulk ingestion in Azure Cosmos DB Gremlin API using BulkExecutor
-description: Learn how to use the bulk executor library to massively import graph data into an Azure Cosmos DB Gremlin API container.
+ Title: Ingest data in bulk in the Azure Cosmos DB Gremlin API by using a bulk executor library
+description: Learn how to use a bulk executor library to massively import graph data into an Azure Cosmos DB Gremlin API container.
ms.devlang: csharp, java
-# Bulk ingestion in Azure Cosmos DB Gremlin API using BulkExecutor
+# Ingest data in bulk in the Azure Cosmos DB Gremlin API by using a bulk executor library
+ [!INCLUDE[appliesto-gremlin-api](../includes/appliesto-gremlin-api.md)]
-Graph database often has a use case to perform bulk ingestion to refresh the entire graph or update a portion of it. Cosmos DB, which is a distributed database and backbone of Azure Cosmos DB - Gremlin API, is meant to perform if the load is well distributed. BulkExecutor libraries in Cosmos DB designed to exploit this unique capability of Cosmos DB and provide the best performance, refer [here](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk).
+Graph databases often need to ingest data in bulk to refresh an entire graph or update a portion of it. Azure Cosmos DB, a distributed database and the backbone of the Azure Cosmos DB Gremlin API, is meant to perform best when the loads are well distributed. Bulk executor libraries in Azure Cosmos DB are designed to exploit this unique capability of Azure Cosmos DB and provide optimal performance. For more information, see [Introducing bulk support in the .NET SDK](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk).
+
+In this tutorial, you learn how to use the Azure Cosmos DB bulk executor library to import and update *graph* objects into an Azure Cosmos DB Gremlin API container. During this process, you use the library to create *vertex* and *edge* objects programmatically and then insert multiple objects per network request.
-This tutorial provides instructions about using Azure Cosmos DB's bulk executor library to import and update graph objects into an Azure Cosmos DB Gremlin API container. This process makes use to create Vertex and Edge objects programmatically to then insert multiple of them per network request.
+Instead of sending Gremlin queries to a database, where the commands are evaluated and then executed one at a time, you use the bulk executor library to create and validate the objects locally. After the library initializes the graph objects, it allows you to send them to the database service sequentially.
-Instead of sending Gremlin queries to a database, where the command is evaluated and then executed one at a time, using the BulkExecutor library will require to create and validate the objects locally. After initializing, the graph objects, the library allows you to send graph objects to the database service sequentially. Using this method, data ingestion speeds can be increased up to 100x, which makes it an ideal method for initial data migrations or periodical data movement operations.
+By using this method, you can increase data ingestion speeds as much as a hundredfold, which makes it an ideal way to perform initial data migrations or periodic data movement operations.
-It's now available in following flavors:
+The bulk executor library now comes in the following varieties.
## .NET ### Prerequisites+
+Before you begin, make sure that you have the following:
+ * Visual Studio 2019 with the Azure development workload. You can get started with the [Visual Studio 2019 Community Edition](https://visualstudio.microsoft.com/downloads/) for free.
-* An Azure subscription. You can create [a free Azure account here](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cosmos-db). Alternatively, you can create a Cosmos database account with [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
-* An Azure Cosmos DB Gremlin API database with an **unlimited collection**. The guide shows how to get started with [Azure Cosmos DB Gremlin API in .NET](./create-graph-dotnet.md).
-* Git. For more information check out the [Git Downloads page](https://git-scm.com/downloads).
+
+* An Azure subscription. If you don't already have a subscription, you can [create a free Azure account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cosmos-db).
+
+ Alternatively, you can [create a free Azure Cosmos DB account](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+
+* An Azure Cosmos DB Gremlin API database with an *unlimited collection*. To get started, go to [Azure Cosmos DB Gremlin API in .NET](./create-graph-dotnet.md).
+
+* Git. To begin, go to the [git downloads](https://git-scm.com/downloads) page.
+ #### Clone
-To run this sample, run the `git clone` command below:
+
+To use this sample, run the following command:
+ ```bash git clone https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor.git ```
-The sample is available at path .\azure-cosmos-graph-bulk-executor\dotnet\src\
+
+To get the sample, go to `.\azure-cosmos-graph-bulk-executor\dotnet\src\`.
#### Sample+ ```csharp IGraphBulkExecutor graphBulkExecutor = new GraphBulkExecutor("MyConnectionString", "myDatabase", "myContainer");
BulkOperationResponse bulkOperationResponse = await graphBulkExecutor.BulkImport
``` ### Execute
-Modify the following parameters as:
-Parameter|Description
-|
-`ConnectionString`|It is **your .NET SDK endpoint** found in the Overview section of your Azure Cosmos DB Gremlin API database account. It has the format of `https://your-graph-database-account.documents.azure.com:443/`
-`DatabaseName`, `ContainerName`|These parameters are the **target database and container names**.
-`DocumentsToInsert`| Number of documents to be generated (only relevant to generate synthetic data)
-`PartitionKey` | To ensure partition key is specified along with each document while ingestion.
-`NumberOfRUs` | Only relevant if container doesn't exists and needs to be created as part of execution
+Modify the parameters, as described in the following table:
+
+| Parameter|Description |
+|||
+|`ConnectionString`| Your .NET SDK endpoint, which you'll find in the **Overview** section of your Azure Cosmos DB Gremlin API database account. It's formatted as `https://your-graph-database-account.documents.azure.com:443/`.|
+|`DatabaseName`, `ContainerName`|The names of the target database and container.|
+|`DocumentsToInsert`| The number of documents to be generated (relevant only to synthetic data).|
+|`PartitionKey` | Ensures that a partition key is specified with each document during data ingestion.|
+|`NumberOfRUs` | Relevant only if a container doesn't already exist and needs to be created during execution.|
-Download the full sample application in .NET from [here](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/dotnet).
+[Download the full sample application in .NET](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/dotnet).
-## JAVA
+## Java
### Sample usage
-The sample application is provided to illustrate how to use the GraphBulkExecutor package. Samples are available for using either the Domain object annotations or using the POJO objects directly. It's recommended, to try both approaches, to determine which better meets your implementation and performance demands.
+The following sample application illustrates how to use the GraphBulkExecutor package. The samples use either the *domain* object annotations or the *POJO* (plain old Java object) objects directly. We recommend that you try both approaches to determine which one better meets your implementation and performance demands.
### Clone
-To run the sample, run the `git clone` command below:
+
+To use the sample, run the following command:
+ ```bash git clone https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor.git ```
-The sample is available at .\azure-cosmos-graph-bulk-executor\java\
+To get the sample, go to `.\azure-cosmos-graph-bulk-executor\java\`.
### Prerequisites
-To run this sample, you'll need to have the following software:
+To run this sample, you need to have the following software:
* OpenJDK 11 * Maven
-* An Azure Cosmos DB Account configured to use the Gremlin API
+* An Azure Cosmos DB account that's configured to use the Gremlin API
### Sample+ ```java private static void executeWithPOJO(Stream<GremlinVertex> vertices, Stream<GremlinEdge> edges,
private static void executeWithPOJO(Stream<GremlinVertex> vertices,
} ```
-To run the sample, refer the configuration as follows and modify as needed:
### Configuration
-The /resources/application.properties file defines the data required to configure the Cosmos DB the required values are:
+To run the sample, refer to the following configuration and modify it as needed.
+
+The */resources/application.properties* file defines the data that's required to configure Azure Cosmos DB. The required values are described in the following table:
-* **sample.sql.host**: It's the value provided by the Azure Cosmos DB. Ensure you use the ".NET SDK URI", which can be located on the Overview section of the Cosmos DB Account.
-* **sample.sql.key**: You can get the primary or secondary key from the Keys section of the Cosmos DB Account.
-* **sample.sql.database.name**: The name of the database within the Cosmos DB account to run the sample against. If the database isn't found, the sample code will create it.
-* **sample.sql.container.name**: The name of the container within the database to run the sample against. If the container isn't found, the sample code will create it.
-* **sample.sql.partition.path**: If the container needs to be created, this value will be used to define the partitionKey path.
-* **sample.sql.allow.throughput**: The container will be updated to use the throughput value defined here. If you're exploring different throughput options to meet your performance demands, make sure to reset the throughput on the container when done with your exploration. There are costs associated with leaving the container provisioned with a higher throughput.
+| Property | Description |
+| | |
+| `sample.sql.host` | The value that's provided by Azure Cosmos DB. Ensure that you're using the .NET SDK URI, which you'll find in the **Overview** section of the Azure Cosmos DB account.|
+| `sample.sql.key` | You can get the primary or secondary key from the **Keys** section of the Azure Cosmos DB account. |
+| `sample.sql.database.name` | The name of the database within the Azure Cosmos DB account to run the sample against. If the database isn't found, the sample code creates it. |
+| `sample.sql.container.name` | The name of the container within the database to run the sample against. If the container isn't found, the sample code creates it. |
+| `sample.sql.partition.path` | If you need to create the container, use this value to define the `partitionKey` path. |
+| `sample.sql.allow.throughput` | The container will be updated to use the throughput value that's defined here. If you're exploring various throughput options to meet your performance demands, be sure to reset the throughput on the container when you're done with your exploration. There are costs associated with leaving the container provisioned with a higher throughput. |
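Putting those properties together, a minimal *application.properties* sketch might look like the following. All of the values are placeholders; substitute the details of your own Azure Cosmos DB account:
```properties
# Placeholder values - replace with your own Azure Cosmos DB account details
sample.sql.host=https://your-cosmos-account.documents.azure.com:443/
sample.sql.key=<your-primary-or-secondary-key>
sample.sql.database.name=SampleDatabase
sample.sql.container.name=GraphContainer
sample.sql.partition.path=/partitionKey
sample.sql.allow.throughput=1000
```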
### Execute
-Once the configuration is modified as per your environment, then run the command:
+After you've modified the configuration according to your environment, run the following command:
```bash mvn clean package ```
-For added safety, you can also run the integration tests by changing the "skipIntegrationTests" value in the pom.xml to
-false.
+For added safety, you can also run the integration tests by changing the `skipIntegrationTests` value in the *pom.xml* file to `false`.
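For example, the property might appear in *pom.xml* roughly as follows; the exact location within the file can vary, so treat this as an illustrative snippet only:
```xml
<properties>
    <!-- Set to false to run the integration tests during the build -->
    <skipIntegrationTests>false</skipIntegrationTests>
</properties>
```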
-Assuming the Unit tests were run successfully. You can run the command line to execute the sample code:
+After you've run the unit tests successfully, you can run the sample code:
```bash java -jar target/azure-cosmos-graph-bulk-executor-1.0-jar-with-dependencies.jar -v 1000 -e 10 -d ```
-Running the above commands will execute the sample with a small batch (1k Vertices and roughly 5k Edges). Use the following command lines arguments to tweak the volumes run and which sample version to run.
+Running the preceding command executes the sample with a small batch (1,000 vertices and roughly 5,000 edges). Use the command-line arguments in the following sections to tweak the volumes that are run and which sample version to run.
-### Command line Arguments
+### Command-line arguments
-There are several command line arguments are available while running this sample, which is detailed as:
+Several command-line arguments are available while you're running this sample, as described in the following table:
-* **--vertexCount** (-v): Tells the application how many person vertices to generate.
-* **--edgeMax** (-e): Tells the application what the maximum number of edges to generate for each Vertex. The generator will randomly select a number between 1 and the value provided here.
-* **--domainSample** (-d): Tells the application to run the sample using the Person and Relationship domain structures instead of the GraphBulkExecutors GremlinVertex and GremlinEdge POJOs.
-* **--createDocuments** (-c): Tells the application to use create operations. If not present, the application will default to using upsert operations.
+| Argument&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description |
+| | |
+| `--vertexCount` (`-v`) | Tells the application how many person vertices to generate. |
+| `--edgeMax` (`-e`) | Tells the application the maximum number of edges to generate for each vertex. The generator randomly selects a number from 1 to the value you provide. |
+| `--domainSample` (`-d`) | Tells the application to run the sample by using the person and relationship domain structures instead of the `GraphBulkExecutors`, `GremlinVertex`, and `GremlinEdge` POJOs. |
+| `--createDocuments` (`-c`) | Tells the application to use `create` operations. If the argument isn't present, the application defaults to using `upsert` operations. |
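For example, the following invocation builds on the earlier command to generate 5,000 person vertices with up to 20 edges each, using create operations and the POJO-based sample (because `-d` is omitted):
```bash
java -jar target/azure-cosmos-graph-bulk-executor-1.0-jar-with-dependencies.jar -v 5000 -e 20 -c
```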
-### Details about the sample
+### Detailed sample information
-#### Person Vertex
+#### Person vertex
-The Person class is a fairly simple domain object that has been decorated with several annotations to help the
-transformation into the GremlinVertex class. They are as follows:
+The person class is a simple domain object that's been decorated with several annotations to help transform it into the `GremlinVertex` class, as described in the following table:
-* **GremlinVertex**: Notice how we're using the optional "label" parameter to define all Vertices created using this class.
-* **GremlinId**: Being used to define which field will be used as the ID value. While the field name on the Person class is ID, it isn't required.
-* **GremlinProperty**: Is being used on the email field to change the name of the property when stored in the database.
-* **GremlinPartitionKey**: Is being used to define which field on the class contains the partition key. The field name provided here should match the value defined by the partition path on the container.
-* **GremlinIgnore**: Is being used to exclude the isSpecial field from the property being written to the database.
+| Class annotation | Description |
+| | |
+| `GremlinVertex` | Uses the optional `label` parameter to define all vertices that you create by using this class. |
+| `GremlinId` | Used to define which field is used as the `ID` value. The field on the person class happens to be named ID, but that name isn't required. |
+| `GremlinProperty` | Used on the `email` field to change the name of the property when it's stored in the database. |
+| `GremlinPartitionKey` | Used to define which field on the class contains the partition key. The field name you provide should match the value that's defined by the partition path on the container. |
+| `GremlinIgnore` | Used to exclude the `isSpecial` field from the property that's being written to the database. |
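To make the annotations concrete, here's a rough sketch of what such a decorated class could look like. The annotation parameter names and field names below are assumptions for illustration only; refer to the sample source for the exact definitions:
```java
// Illustrative sketch only - annotation parameters and field names are assumptions,
// not copied from the sample source.
@GremlinVertex(label = "PERSON")
public class Person {
    @GremlinId
    private String id;                 // field used as the vertex ID

    @GremlinProperty(name = "contactEmail")
    private String email;              // stored under a different property name in the database

    @GremlinPartitionKey
    private String partitionKey;       // must match the container's partition path

    @GremlinIgnore
    private boolean isSpecial;         // excluded from the document written to the database
}
```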
-#### Relationship Edge
+#### The RelationshipEdge class
-The RelationshipEdge is a fairly versatile domain object. Using the field level label annotation allows for a dynamic
-collection of edge types to be created. The following annotations are represented in this sample domain edge:
+The `RelationshipEdge` class is a versatile domain object. By using the field-level label annotation, you can create a dynamic collection of edge types, as shown in the following table:
-* **GremlinEdge**: The GremlinEdge decoration on the class, defines the name of the field for the specified partition key. The value assigned, when the edge document is created, will come from the source vertex information.
-* **GremlinEdgeVertex**: Notice that there are two instances of GremlinEdgeVertex defined. One for each side of the edge (Source and Destination). Our sample has the field's data type as GremlinEdgeVertexInfo. The information provided by GremlinEdgeVertex class is required for the edge to be created correctly in the database. Another option would be to have the data type of the vertices be a class that has been decorated with the GremlinVertex annotations.
-* **GremlinLabel**: The sample edge is using a field to define what the label value is. It allows different labels to be defined while still using the same base domain class.
+| Class annotation | Description |
+| | |
+| `GremlinEdge` | The `GremlinEdge` decoration on the class defines the name of the field for the specified partition key. When you create an edge document, the assigned value comes from the source vertex information. |
+| `GremlinEdgeVertex` | Two instances of `GremlinEdgeVertex` are defined, one for each side of the edge (source and destination). Our sample has the field's data type as `GremlinEdgeVertexInfo`. The information provided by the `GremlinEdgeVertex` class is required for the edge to be created correctly in the database. Another option would be to have the data type of the vertices be a class that has been decorated with the `GremlinVertex` annotations. |
+| `GremlinLabel` | The sample edge uses a field to define the `label` value, which allows various labels to be defined while still using the same base domain class. |
-### Output Explained
+### Output explained
-The console will finish its run with a json string describing the run times of the sample. The json string contains the
-following information.
+The console finishes its run with a JSON string that describes the run times of the sample. The JSON string contains the following information:
-* **startTime**: The System.nanoTime() when the process started.
-* **endtime**: The System.nanoTime() when the process completed.
-* **durationInNanoSeconds**: The difference between the endTime and the startTime.
-* **durationInMinutes**: The durationInNanoSeconds converted into minutes. Important to note that durationInMinutes is represented as a float number, not a time value. For example, a value 2.5 would be 2 minutes and 30 seconds.
-* **vertexCount**: The volume of vertices generated which should match the value passed into the command line execution.
-* **edgeCount**: The volume of edges generated which isn't static and it's built with an element of randomness.
-* **exception**: Only populated when there was an exception thrown when attempting to make the run.
+| Property | Description |
+| | |
+| `startTime` | The `System.nanoTime()` value when the process started. |
+| `endTime` | The `System.nanoTime()` value when the process finished. |
+| `durationInNanoSeconds` | The difference between the `endTime` and `startTime` values. |
+| `durationInMinutes` | The `durationInNanoSeconds` value, converted into minutes. The `durationInMinutes` value is represented as a float number, not a time value. For example, a value of 2.5 represents 2 minutes and 30 seconds. |
+| `vertexCount` | The volume of generated vertices, which should match the value that's passed into the command-line execution. |
+| `edgeCount` | The volume of generated edges, which isn't static and is built with an element of randomness. |
+| `exception` | Populated only if an exception is thrown during the run. |
-#### States Array
+#### States array
-The states array gives insight into how long each step within the execution takes. The steps that occur are:
+The states array gives insight into how long each step within the execution takes. The steps are described in the following table:
-* **Build sample vertices**: The time it takes to fabricate the requested volume of Person objects.
-* **Build sample edges**: The time it takes to fabricate the Relationship objects.
-* **Configure Database**: The amount of time it took to get the database configured based on the values provided in the
- application.properties.
-* **Write Documents**: The total time it took to write the documents to the database.
+| Execution&nbsp;step | Description |
+| | |
+| Build&nbsp;sample&nbsp;vertices | The amount of time it takes to fabricate the requested volume of person objects. |
+| Build sample edges | The amount of time it takes to fabricate the relationship objects. |
+| Configure database | The amount of time it takes to configure the database, based on the values that are provided in `application.properties`. |
+| Write documents | The amount of time it takes to write the documents to the database. |
-Each state will contain the following values:
+Each state contains the following values:
-* **stateName**: The name of the state being reported.
-* **startTime**: The System.nanoTime() when the state started.
-* **endtime**: The System.nanoTime() when the state completed.
-* **durationInNanoSeconds**: The difference between the endTime and the startTime.
-* **durationInMinutes**: The durationInNanoSeconds converted into minutes. Important to note that durationInMinutes is represented as a float number, not a time value. for example, a value 2.5 would be 2 minutes and 30 seconds.
+| State value | Description |
+| | |
+| `stateName` | The name of the state that's being reported. |
+| `startTime` | The `System.nanoTime()` value when the state started. |
+| `endTime` | The `System.nanoTime()` value when the state finished. |
+| `durationInNanoSeconds` | The difference between the `endTime` and `startTime` values. |
+| `durationInMinutes` | The `durationInNanoSeconds` value, converted into minutes. The `durationInMinutes` value is represented as a float number, not a time value. For example, a value of 2.5 represents 2 minutes and 30 seconds. |
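Taken together, the run summary is a JSON document shaped roughly like the following. The values shown are illustrative placeholders, not actual measurements:
```json
{
  "startTime": 354208943520000,
  "endTime": 354208982520000,
  "durationInNanoSeconds": 39000000000,
  "durationInMinutes": 0.65,
  "vertexCount": 1000,
  "edgeCount": 5021,
  "exception": null,
  "states": [
    {
      "stateName": "Build sample vertices",
      "startTime": 354208943520000,
      "endTime": 354208945020000,
      "durationInNanoSeconds": 1500000000,
      "durationInMinutes": 0.025
    }
  ]
}
```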
## Next steps
-* Review the [BulkExecutor Java, which is Open Source](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/java/src/main/java/com/azure/graph/bulk/impl) for more details about the classes and methods defined in this namespace.
-* Review the [BulkMode, which is part of .NET V3 SDK](../sql/tutorial-sql-api-dotnet-bulk-import.md)
+* For more information about the classes and methods that are defined in this namespace, review the [BulkExecutor Java open-source code](https://github.com/Azure-Samples/azure-cosmos-graph-bulk-executor/tree/main/java/src/main/java/com/azure/graph/bulk/impl).
+* See the [Bulk import data to the Azure Cosmos DB SQL API account by using the .NET SDK](../sql/tutorial-sql-api-dotnet-bulk-import.md) article, which covers the bulk mode that's part of the .NET V3 SDK.
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
tags: billing
Previously updated : 07/20/2022 Last updated : 07/22/2022
Later in this article, you'll give permission to the Azure AD app to act by usin
| Role | Actions allowed | Role definition ID | | | | | | EnrollmentReader | Can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | 24f8edb6-1668-4659-b5e2-40bb5f3a7d7e |
-| EA purchaser | Purchase reservation orders and view reservation transactions. Can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | da6647fb-7651-49ee-be91-c43c4877f0c4 |
+| EA purchaser | Purchase reservation orders and view reservation transactions. Has all the permissions of EnrollmentReader, which in turn has all the permissions of DepartmentReader. Can view usage and charges across all accounts and subscriptions. Can view the Azure Prepayment (previously called monetary commitment) balance associated with the enrollment. | da6647fb-7651-49ee-be91-c43c4877f0c4 |
| DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a | | SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 |
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for direct EA
description: This article explains how enterprise administrators of direct Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 04/28/2022 Last updated : 07/22/2022
You receive an Azure invoice when any of the following events occur during your
- Visual Studio Professional (Annual) - **Marketplace charges** - Azure Marketplace purchases and usage aren't covered by your organization's credit. So, you're invoiced for Marketplace charges despite your credit balance. In the Azure portal, an Enterprise Administrator can enable and disable Marketplace purchases.
-Your invoice displays Azure usage charges with costs associated to them first, followed by any Marketplace charges. If you have a credit balance, it's applied to Azure usage and your invoice will display Azure usage and Marketplace usage without any cost last.
+Your invoice displays Azure usage charges with costs associated to them first, followed by any Marketplace charges. If you have a credit balance, it's applied to Azure usage. Your invoice will display Azure usage and Marketplace usage without any cost last.
### Download your Azure invoices (.pdf)
The following table lists the terms and descriptions shown on the Invoices page:
| PO number | PO number for the invoice or credit memo. | | Total Amount | Total amount of the invoice or credit. |
+## Update a PO number for an upcoming overage invoice
+
+In the Azure portal, a direct enterprise administrator can update the purchase order (PO) number for upcoming invoices. You can update the PO number anytime before the invoice is created during the current billing period.
+
+For a new enrollment, the default PO number is the enrollment number.
+
+If you don't change the PO number, then the same PO number is used for all upcoming invoices.
+
+The EA admin receives an invoice notification email after the end of the billing period as a reminder to update the PO number. You can update the PO number up to seven days after you receive the email notification.
+
+If you want to update the PO number after your invoice is generated, then contact Azure support in the Azure portal.
+
+To update the PO number for a billing account:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for **Cost Management + Billing** and then select **Billing scopes**.
+1. Select your billing scope, and then in the left menu under **Settings**, select **Properties**.
+1. Select **Update PO number**.
+1. Enter a PO number and then select **Update**.
+
+Or you can update the PO number for the upcoming invoice from the Invoices list:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for **Cost Management + Billing** and then select **Billing scopes**.
+1. Select your billing scope, then in the left menu under **Billing**, select **Invoices**.
+1. Select **Update PO number**.
+1. Enter a PO number and then select **Update**.
+ ## Review credit charges The information in this section describes how you can view the starting balance, ending balance, and credit adjustments for your Azure Prepayment (previously called monetary commitment).
cost-management-billing Understand Mca Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-mca-roles.md
Previously updated : 10/28/2021 Last updated : 07/22/2022
The following table describes the billing roles you use to manage your billing a
|Role|Description| |||
-|Billing account owner|Manage everything for billing account|
+|Billing account owner (called Organization owner for Marketplace purchases) |Manage everything for billing account|
|Billing account contributor|Manage everything except permissions on the billing account| |Billing account reader|Read-only view of everything on billing account| |Billing profile owner|Manage everything for billing profile|
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
Previously updated : 02/09/2022 Last updated : 07/20/2022 # Optimizing performance of the Azure Integration Runtime
Data flows run on Spark clusters that are spun up at run-time. The configuration
For more information how to create an Integration Runtime, see [Integration Runtime](concepts-integration-runtime.md).
+The easiest way to get started with data flow integration runtimes is to choose small, medium, or large from the compute size picker. See the mappings to cluster configurations for those sizes below.
+ ## Cluster type There are two available options for the type of Spark cluster to utilize: general purpose & memory optimized.
If your data flow has many joins and lookups, you may want to use a **memory opt
Data flows distribute the data processing over different nodes in a Spark cluster to perform operations in parallel. A Spark cluster with more cores increases the number of nodes in the compute environment. More nodes increase the processing power of the data flow. Increasing the size of the cluster is often an easy way to reduce the processing time.
-The default cluster size is four driver nodes and four worker nodes. As you process more data, larger clusters are recommended. Below are the possible sizing options:
+The default cluster size is four driver nodes and four worker nodes (small). As you process more data, larger clusters are recommended. Below are the possible sizing options:
| Worker cores | Driver cores | Total cores | Notes | | | | -- | -- |
-| 4 | 4 | 8 | |
-| 8 | 8 | 16 | |
+| 4 | 4 | 8 | Small |
+| 8 | 8 | 16 | Medium |
| 16 | 16 | 32 | | | 32 | 16 | 48 | |
-| 64 | 16 | 80 | |
+| 64 | 16 | 80 | Large |
| 128 | 16 | 144 | | | 256 | 16 | 272 | |
data-factory Connector Amazon S3 Compatible Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-s3-compatible-storage.md
This article outlines how to copy data from Amazon Simple Storage Service (Amazo
## Supported capabilities
-This Amazon S3 Compatible Storage connector is supported for the following activities:
+This Amazon S3 Compatible Storage connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)-- [Delete activity](delete-activity.md)-
-Specifically, this Amazon S3 Compatible Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). The connector uses [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to authenticate requests to S3. You can use this Amazon S3 Compatible Storage connector to copy data from any S3-compatible storage provider. Specify the corresponding service URL in the linked service configuration.
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+Specifically, this Amazon S3 Compatible Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). The connector uses [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to authenticate requests to S3. You can use this Amazon S3 Compatible Storage connector to copy data from any S3-compatible storage provider. Specify the corresponding service URL in the linked service configuration.
## Required permissions
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-simple-storage-service.md
This article outlines how to use Copy Activity to copy data from Amazon Simple S
## Supported capabilities
-This Amazon S3 connector is supported for the following activities:
+This Amazon S3 connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Mapping data flow](concepts-data-flow-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)-- [Delete activity](delete-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312; |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
Specifically, this Amazon S3 connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). You can also choose to [preserve file metadata during copy](#preserve-metadata-during-copy). The connector uses [AWS Signature Version 4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) to authenticate requests to S3.
data-factory Connector Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-cassandra.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Cassandra connector is supported for the following activities:
+This Cassandra connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Cassandra database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
Specifically, this Cassandra connector supports:
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-couchbase.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-This Couchbase connector is supported for the following activities:
+This Couchbase connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from Couchbase to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-file-system.md
This article outlines how to copy data to and from file system. To learn more re
## Supported capabilities
-This file system connector is supported for the following activities:
+This file system connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)-- [Delete activity](delete-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
Specifically, this file system connector supports:
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-ftp.md
This article outlines how to copy data from FTP server. To learn about more, rea
## Supported capabilities
-This FTP connector is supported for the following activities:
+This FTP connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)-- [Delete activity](delete-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
Specifically, this FTP connector supports:
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-cloud-storage.md
This article outlines how to copy data from Google Cloud Storage (GCS). To learn
## Supported capabilities
-This Google Cloud Storage connector is supported for the following activities:
+This Google Cloud Storage connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)-- [Delete activity](delete-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
Specifically, this Google Cloud Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). It takes advantage of GCS's S3-compatible interoperability.
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hdfs.md
This article outlines how to copy data from the Hadoop Distributed File System (
## Supported capabilities
-The HDFS connector is supported for the following activities:
+This HDFS connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source and sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [Delete activity](delete-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
Specifically, the HDFS connector supports:
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-http.md
The difference among this HTTP connector, the [REST connector](connector-rest.md
## Supported capabilities
-This HTTP connector is supported for the following activities:
+This HTTP connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from an HTTP source to any supported sink data store. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
You can use this HTTP connector to:
The following properties are supported for the HTTP linked service:
| type | The **type** property must be set to **HttpServer**. | Yes | | url | The base URL to the web server. | Yes | | enableServerCertificateValidation | Specify whether to enable server TLS/SSL certificate validation when you connect to an HTTP endpoint. If your HTTPS server uses a self-signed certificate, set this property to **false**. | No<br /> (the default is **true**) |
-| authenticationType | Specifies the authentication type. Allowed values are **Anonymous**, **Basic**, **Digest**, **Windows**, and **ClientCertificate**. User-based OAuth isn't supported. You can additionally configure authentication headers in `authHeader` property. See the sections that follow this table for more properties and JSON samples for these authentication types. | Yes |
+| authenticationType | Specifies the authentication type. Allowed values are **Anonymous**, **Basic**, **Digest**, **Windows**, and **ClientCertificate**. You can additionally configure authentication headers in `authHeader` property. See the sections that follow this table for more properties and JSON samples for these authentication types. | Yes |
| authHeaders | Additional HTTP request headers for authentication.<br/> For example, to use API key authentication, you can select authentication type as "Anonymous" and specify API key in the header. | No | | connectVia | The [Integration Runtime](concepts-integration-runtime.md) to use to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, the default Azure Integration Runtime is used. |No |
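As a rough illustration of how these properties fit together, a minimal linked service definition using anonymous authentication could look like the following sketch. The names and URL are placeholders, and the article's later JSON samples show the authoritative shapes for each authentication type:
```json
{
    "name": "HttpLinkedService",
    "properties": {
        "type": "HttpServer",
        "typeProperties": {
            "url": "https://<your-web-server>/",
            "enableServerCertificateValidation": true,
            "authenticationType": "Anonymous"
        },
        "connectVia": {
            "referenceName": "<name of integration runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```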
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-atlas.md
This article outlines how to use the Copy Activity in an Azure Data Factory or S
## Supported capabilities
-You can copy data from MongoDB Atlas database to any supported sink data store, or copy data from any supported source data store to MongoDB Atlas database. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+This MongoDB Atlas connector is supported for the following capabilities:
+
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
Specifically, this MongoDB Atlas connector supports **versions up to 4.2**.
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb.md
This article outlines how to use the Copy Activity in Azure Data Factory Synapse
## Supported capabilities
-You can copy data from MongoDB database to any supported sink data store, or copy data from any supported source data store to MongoDB database. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+This MongoDB connector is supported for the following capabilities:
+
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
Specifically, this MongoDB connector supports **versions up to 4.2**.
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odata.md
This article outlines how to use Copy Activity in an Azure Data Factory or Synap
## Supported capabilities
-This OData connector is supported for the following activities:
+This OData connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
-You can copy data from an OData source to any supported sink data store. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
Specifically, this OData connector supports:
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odbc.md
This article outlines how to use the Copy Activity in Azure Data Factory to copy
## Supported capabilities
-This ODBC connector is supported for the following activities:
+This ODBC connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9313;|
-You can copy data from ODBC source to any supported sink data store, or copy from any supported source data store to ODBC sink. For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table.
Specifically, this ODBC connector supports copying data from/to **any ODBC-compatible data stores** using **Basic** or **Anonymous** authentication. A **64-bit ODBC driver** is required. For ODBC sink, the service support ODBC version 2.0 standard.
data-factory Connector Oracle Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-cloud-storage.md
This article outlines how to copy data from Oracle Cloud Storage. To learn more,
## Supported capabilities
-This Oracle Cloud Storage connector is supported for the following activities:
+This Oracle Cloud Storage connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)-- [Delete activity](delete-activity.md)
+| Supported capabilities|IR |
+|| --|
+|[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;|
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
Specifically, this Oracle Cloud Storage connector supports copying files as is or parsing files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md). It takes advantage of Oracle Cloud Storage's S3-compatible interoperability.
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
The difference among this REST connector, [HTTP connector](connector-http.md), a
## Supported capabilities
-You can copy data from a REST source to any supported sink data store. You also can copy data from any supported source data store to a REST sink. For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
+This REST connector is supported for the following capabilities:
+
+| Supported capabilities|IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+For a list of data stores that are supported as sources/sinks, see [Supported data stores](connector-overview.md#supported-data-stores).
Specifically, this generic REST connector supports:
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sftp.md
This article outlines how to use Copy Activity to copy data from and to the secu
## Supported capabilities
-The SFTP connector is supported for the following activities:
+This SFTP connector is supported for the following capabilities:
-- [Copy activity](copy-activity-overview.md) with [supported source/sink matrix](copy-activity-overview.md)-- [Mapping data flow](concepts-data-flow-overview.md)-- [Lookup activity](control-flow-lookup-activity.md)-- [GetMetadata activity](control-flow-get-metadata-activity.md)-- [Delete activity](delete-activity.md)
+| Supported capabilities|IR |
+| --- | --- |
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
Specifically, the SFTP connector supports:
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
Previously updated : 03/22/2022 Last updated : 07/20/2022 # Data Flow activity in Azure Data Factory and Azure Synapse Analytics
traceLevel | Set logging level of your data flow activity execution | Fine, Coar
### Dynamically size data flow compute at runtime
-The Core Count and Compute Type properties can be set dynamically to adjust to the size of your incoming source data at runtime. Use pipeline activities like Lookup or Get Metadata in order to find the size of the source dataset data. Then, use Add Dynamic Content in the Data Flow activity properties.
-
-> [!NOTE]
-> When choosing driver and worker node cores in Azure Synapse Data Flows, a minimum of 3 nodes will always be utilized.
+The Core Count and Compute Type properties can be set dynamically to adjust to the size of your incoming source data at runtime. Use pipeline activities like Lookup or Get Metadata to find the size of the source dataset. Then, use Add Dynamic Content in the Data Flow activity properties. You can choose small, medium, or large compute sizes. Optionally, pick "Custom" and configure the compute types and number of cores manually.
:::image type="content" source="media/data-flow/dyna1.png" alt-text="Dynamic Data Flow":::
data-factory Data Flow Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-aggregate-functions.md
The following functions are only available in aggregate, pivot, unpivot, and win
| [avg](data-flow-expressions-usage.md#avg) | Gets the average of values of a column. | | [avgIf](data-flow-expressions-usage.md#avgIf) | Based on a criteria gets the average of values of a column. | | [collect](data-flow-expressions-usage.md#collect) | Collects all values of the expression in the aggregated group into an array. Structures can be collected and transformed to alternate structures during this process. The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small. |
+| [collectUnique](data-flow-expressions-usage.md#collectUnique) | Collects all values of the expression in the aggregated group into a unique array. Structures can be collected and transformed to alternate structures during this process. The number of items will be at most equal to the number of rows in that group and can contain null values. The number of collected items should be small. |
| [count](data-flow-expressions-usage.md#count) | Gets the aggregate count of values. If the optional column(s) is specified, it ignores NULL values in the count. | | [countAll](data-flow-expressions-usage.md#countAll) | Gets the aggregate count of values including NULLs. | | [countDistinct](data-flow-expressions-usage.md#countDistinct) | Gets the aggregate count of distinct values of a set of columns. |
The following functions are only available in aggregate, pivot, unpivot, and win
| [sumDistinct](data-flow-expressions-usage.md#sumDistinct) | Gets the aggregate sum of distinct values of a numeric column. | | [sumDistinctIf](data-flow-expressions-usage.md#sumDistinctIf) | Based on criteria gets the aggregate sum of a numeric column. The condition can be based on any column. | | [sumIf](data-flow-expressions-usage.md#sumIf) | Based on criteria gets the aggregate sum of a numeric column. The condition can be based on any column. |
+| [topN](data-flow-expressions-usage.md#topN) | Gets the top N values for this column. |
| [variance](data-flow-expressions-usage.md#variance) | Gets the variance of a column. | | [varianceIf](data-flow-expressions-usage.md#varianceIf) | Based on a criteria, gets the variance of a column. | | [variancePopulation](data-flow-expressions-usage.md#variancePopulation) | Gets the population variance of a column. |
The following functions are only available in aggregate, pivot, unpivot, and win
- List of all [metafunctions](data-flow-metafunctions.md). - List of all [window functions](data-flow-window-functions.md). - [Usage details of all data transformation expressions](data-flow-expressions-usage.md).-- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expression-functions.md
Previously updated : 03/05/2022 Last updated : 07/19/2022 # Expression functions in mapping data flow
In Data Factory and Synapse pipelines, use the expression language of the mappin
| [sqrt](data-flow-expressions-usage.md#sqrt) | Calculates the square root of a number. | | [startsWith](data-flow-expressions-usage.md#startsWith) | Checks if the string starts with the supplied string. | | [substring](data-flow-expressions-usage.md#substring) | Extracts a substring of a certain length from a position. Position is 1 based. If the length is omitted, it's defaulted to end of the string. |
+| [substringIndex](data-flow-expressions-usage.md#substringIndex) | Extracts the substring before `count` occurrences of the delimiter. If `count` is positive, everything to the left of the final delimiter (counting from the left) is returned. If `count` is negative, everything to the right of the final delimiter (counting from the right) is returned. |
| [tan](data-flow-expressions-usage.md#tan) | Calculates a tangent value. | | [tanh](data-flow-expressions-usage.md#tanh) | Calculates a hyperbolic tangent value. | | [translate](data-flow-expressions-usage.md#translate) | Replace one set of characters by another set of characters in the string. Characters have 1 to 1 replacement. |
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
Previously updated : 03/05/2022 Last updated : 07/19/2022 # Data transformation expression usage in mapping data flow
Collects all values of the expression in the aggregated group into an array. Str
___
+<a name="collectUnique" ></a>
+
+### <code>collectUnique</code>
+<code><b>collectUnique(<i>&lt;value1&gt;</i> : any) => array</b></code><br/><br/>
+Collects all values of the expression in the aggregated group into a unique array. Structures can be collected and transformed to alternate structures during this process. The number of items will be at most equal to the number of rows in that group and can contain null values. The number of collected items should be small.
+* ``collectUnique(salesPerson)``
+* ``collectUnique(firstName + lastName)``
+* ``collectUnique(@(name = salesPerson, sales = salesAmount))``
+___
++ <a name="columnNames" ></a> ### <code>columnNames</code>
Extracts a substring of a certain length from a position. Position is 1 based. I
* ``substring('Cat in the hat', 100, 100) -> ''`` ___
+<a name="substringIndex" ></a>
+### <code>substringIndex</code>
+<code><b>substringIndex(<i>&lt;string to subset&gt;</i> : string, <i>&lt;delimiter&gt;</i> : string, <i>&lt;count of delimiter occurrences&gt;</i> : integral) => string</b></code><br/><br/>
+Extracts the substring before `count` occurrences of the delimiter. If `count` is positive, everything to the left of the final delimiter (counting from the left) is returned. If `count` is negative, everything to the right of the final delimiter (counting from the right) is returned.
+* ``substringIndex('111-222-333', '-', 1) -> '111'``
+* ``substringIndex('111-222-333', '-', 2) -> '111-222'``
+* ``substringIndex('111-222-333', '-', -1) -> '333'``
+* ``substringIndex('111-222-333', '-', -2) -> '222-333'``
+___
+
+
<a name="sum" ></a> ### <code>sum</code>
Converts any numeric or string to a long value. An optional Java decimal format
* ``toLong('$123', '$###') -> 123`` ___
+
+<a name="topN" ></a>
+
+### <code>topN</code>
+<code><b>topN(<i>&lt;column/expression&gt;</i> : any, <i>&lt;count&gt;</i> : long, <i>&lt;n&gt;</i> : integer) => array</b></code><br/><br/>
+Gets the top N values for this column based on the count argument.
+* ``topN(custId, count, 5)``
+* ``topN(productId, num_sales, 10)``
+___
<a name="toShort" ></a>
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
description: Learn how to use Azure Digital Twins Explorer by following this demo, where you'll use models to instantiate twins and interact with the twin graph. Previously updated : 02/25/2022 Last updated : 07/21/2022
# Quickstart - Get started with a sample scenario in Azure Digital Twins Explorer
-In this quickstart, you'll explore a prebuilt Azure Digital Twins graph using the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md). This tool allows you to visualize and interact with your Azure Digital Twins data within the Azure portal.
+This quickstart is an introduction to Azure Digital Twins, showing how Azure Digital Twins represents data and demonstrating what it's like to interact with a digital twin graph of a physical building. You'll use the [Azure portal site](https://portal.azure.com) and the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md), which is a tool for visualizing and interacting with Azure Digital Twins data in a web browser.
-With Azure Digital Twins, you can create and interact with live models of your real-world environments, which can be part of wider IoT solutions. First, you model individual elements as digital twins. Then you connect them into a knowledge graph that can respond to live events and be queried for information.
+In this quickstart, you'll look at pre-built sample **models** that digitally define the concepts of a *Building*, a *Floor*, and a *Room*, and use these model definitions to create **digital twins** that represent specific floors and rooms from a physical building. These individual twins will be connected into a virtual **twin graph** that reflects their relationships to each other, forming a complete digital representation of the sample building. The graph you'll be working with represents a building that contains two floors, and each floor contains rooms. The graph will look like this image:
-You'll complete the following steps:
-1. Create an Azure Digital Twins instance, and connect to it in Azure Digital Twins Explorer.
-1. Upload prebuilt models and graph data to construct the sample scenario.
-1. Explore the scenario graph that's created.
-1. Make changes to the graph.
-1. Review your learnings from the experience.
-
-The Azure Digital Twins example graph you'll be working with represents a building with two floors and two rooms. Floor0 contains Room0, and Floor1 contains Room1. The graph will look like this image:
+Here are the steps you'll use to explore the graph in this article:
+1. Create an Azure Digital Twins instance, and open it in Azure Digital Twins Explorer.
+1. Upload pre-built models and graph data to construct the sample scenario. Add one more twin manually.
+1. Simulate changing IoT data, and query the graph to see results.
+1. Review your learnings from the experience.
>[!NOTE]
->This quickstart is for exploring a prebuilt graph to understand how Azure Digital Twins represents data. For simplicity, the quickstart does not cover setting up connections between IoT Hub devices and their graph representations. To set up a connected end-to-end flow for your graph, move ahead to the tutorials: [Connect an end-to-end solution](tutorial-end-to-end.md).
+>For simplicity, this quickstart does not cover setting up a live data flow from IoT devices inside the modeled environment, or from other data sources. To set up a simulated end-to-end data flow that drives your twin graph, move ahead to the tutorials: [Connect an end-to-end solution](tutorial-end-to-end.md). For more information on data flow between services and integrating Azure Digital Twins into a wider IoT solution, see [Data ingress and egress](concepts-data-ingress-egress.md).
## Prerequisites You'll need an Azure subscription to complete this quickstart. If you don't have one already, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
-You'll also need to download the materials for the sample graph used in the quickstart. Use the instructions below to download the three required files. Later, you'll follow more instructions to upload them to Azure Digital Twins.
-* [Room.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Room.json): This is a model file representing a room in a building. Navigate to the link, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the following Save As window to save the file somewhere on your machine with the name *Room.json*.
-* [Floor.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Floor.json): This is a model file representing a floor in a building. Navigate to the link, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the following Save As window to save the file to the same location as *Room.json*, under the name *Floor.json*.
-* [buildingScenario.xlsx](https://github.com/Azure-Samples/digital-twins-explorer/raw/main/client/examples/buildingScenario.xlsx): This file contains a graph of room and floor twins, and relationships between them. Depending on your browser settings, selecting this link may download the *buildingScenario.xlsx* file automatically to your default download location, or it may open the file in your browser with an option to download. Here is what that download option looks like in Microsoft Edge:
+You'll also need to download the materials for the sample graph used in the quickstart. Use the instructions below to download the required files. Later, you'll follow more instructions to upload them to Azure Digital Twins.
+* Model files. Navigate to each link below, right-click anywhere on the screen, and select **Save as** in your browser's right-click menu. Use the Save As window to save the file somewhere on your machine.
+ - [Building.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Building.json): This is a model file that digitally defines a building. It specifies that buildings can contain floors.
+ - [Floor.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Floor.json): This is a model file that digitally defines a floor. It specifies that floors can contain rooms.
+ - [Room.json](https://raw.githubusercontent.com/Azure-Samples/digital-twins-explorer/main/client/examples/Room.json): This is a model file that digitally defines a room. It has a temperature property.
+* [buildingScenario.xlsx](https://github.com/Azure-Samples/digital-twins-explorer/raw/main/client/examples/buildingScenario.xlsx): This spreadsheet contains the data for a sample twin graph, including five digital twins representing a specific building with floors and rooms. The twins are based on the generic models, and connected with relationships indicating which elements contain each other. Depending on your browser settings, selecting this link may download the *buildingScenario.xlsx* file automatically to your default download location, or it may open the file in your browser with an option to download. Here is what that download option looks like in Microsoft Edge:
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/download-building-scenario.png" alt-text="Screenshot of the buildingScenario.xlsx file viewed in a Microsoft Edge browser. A button saying Download is highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/download-building-scenario.png":::
You'll also need to download the materials for the sample graph used in the quic
## Set up Azure Digital Twins
-The first step in working with Azure Digital Twins is to create an Azure Digital Twins instance. After you create an instance of the service, you can connect to the instance in Azure Digital Twins Explorer, which you'll use to work with the instance throughout the quickstart.
-
-The rest of this section walks you through the instance creation.
+The first step in working with Azure Digital Twins is to create an Azure Digital Twins instance that will hold all your graph data. In this section, you'll create an instance of the service, and open it in Azure Digital Twins Explorer.
[!INCLUDE [digital-twins-quickstart-setup.md](../../includes/digital-twins-quickstart-setup.md)]
After deployment completes, use the **Go to resource** button to navigate to the
[!INCLUDE [digital-twins-access-explorer.md](../../includes/digital-twins-access-explorer.md)]
-## Upload the sample materials
+## Build out the sample scenario
-Next, you'll import the sample models and graph into Azure Digital Twins Explorer. You'll use the model files and the graph file that you downloaded to your machine in the [Prerequisites](#prerequisites) section.
+Next, you'll use Azure Digital Twins Explorer to set up the sample models and twin graph. You'll start by importing the model files and the twin graph file that you downloaded to your machine in the [Prerequisites](#prerequisites) section. Then, you'll finish the scenario by creating one more twin manually.
### Models
-The first step in an Azure Digital Twins solution is to define the vocabulary for your environment. You'll create custom *models* that describe the types of entity that exist in your environment.
-
-Each model is written in a language like [JSON-LD](https://json-ld.org/) called *Digital Twin Definition Language (DTDL)*. Each model describes a single type of entity in terms of its properties, telemetry, relationships, and components. Later, you'll use these models as the basis for digital twins that represent specific instances of these types.
+The first step in creating an Azure Digital Twins graph is to define the vocabulary for your environment. *Models* are generic definitions for each type of entity that exists in your environment. This sample building scenario contains a building, floors, and rooms, so you'll need one model definition describing what a *Building* is, one model definition describing what a *Floor* is, and one model definition describing what a *Room* is. Later, you can create *digital twins* that are instances of these models, representing specific buildings, floors, and rooms.
-Typically, when you create a model, you'll complete three steps:
-
-1. Write the model definition. In the quickstart, this step is already done as part of the sample solution.
-1. Validate it to make sure the syntax is accurate. In the quickstart, this step is already done as part of the sample solution.
-1. Upload it to your Azure Digital Twins instance.
+Models for Azure Digital Twins are written in *Digital Twin Definition Language (DTDL)*, a data object language similar to [JSON-LD](https://json-ld.org/). Each model describes a single type of entity in terms of its properties, telemetry, relationships, and components.
-For this quickstart, the model files are already written and validated for you. They're included with the solution you downloaded. In this section, you'll upload two prewritten models to your instance to define these components of a building environment:
-
-* Floor
-* Room
+For this quickstart, the model files have already been written for you. You downloaded *Building.json*, *Floor.json*, and *Room.json* in the [Prerequisites](#prerequisites) section, and now you'll upload them to your Azure Digital Twins instance using Azure Digital Twins Explorer.
#### Upload the models (.json files)
-Follow these steps to upload models (the *.json* files you downloaded earlier).
+In Azure Digital Twins Explorer, follow these steps to upload the *Building*, *Floor*, and *Room* models (the *.json* files you downloaded earlier).
1. In the **Models** panel, select the **Upload a Model** icon that shows an arrow pointing upwards. :::image type="content" source="media/quickstart-azure-digital-twins-explorer/upload-model.png" alt-text="Screenshot of the Azure Digital Twins Explorer, highlighting the Models panel and the 'Upload a Model' icon in it." lightbox="media/quickstart-azure-digital-twins-explorer/upload-model.png":::
-1. In the Open window that appears, navigate to the folder containing the *Room.json* and *Floor.json* files that you downloaded earlier.
-1. Select *Room.json* and *Floor.json*, and select **Open** to upload them both.
+1. In the Open window that appears, navigate to the folder containing the downloaded *.json* files on your machine.
+1. Select *Building.json*, *Floor.json*, and *Room.json*, and select **Open** to upload them all at once.
Azure Digital Twins Explorer will upload these model files to your Azure Digital Twins instance. They should show up in the **Models** panel and display their friendly names and full model IDs.
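The quickstart uses Azure Digital Twins Explorer for this upload, but the same step can be scripted. Here's a minimal sketch with the azure-digitaltwins-core Python SDK; the endpoint URL and file paths are placeholders for your own instance and download folder.

```python
# Minimal sketch (not one of the quickstart steps): upload the same DTDL models
# with the azure-digitaltwins-core SDK. The endpoint URL and file paths are
# placeholders for your own instance and download folder.
import json

from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance-host-name>", DefaultAzureCredential()
)

models = []
for path in ["Building.json", "Floor.json", "Room.json"]:
    with open(path) as model_file:
        models.append(json.load(model_file))

client.create_models(models)  # registers the Building, Floor, and Room definitions
```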
-You can select **View Model** for either model to see the DTDL code behind it.
+You can select **View Model** from any of the models' options to see the DTDL code that defines each model type.
- :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/model-info.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing the Models panel with two model definitions listed inside, Floor and Room." lightbox="media/quickstart-azure-digital-twins-explorer/model-info.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::column:::
- :::column-end:::
### Twins and the twin graph
-Now that some models have been uploaded to your Azure Digital Twins instance, you can add *digital twins* based on the model definitions.
-
-*Digital twins* represent the actual entities within your business environment. They can be things like sensors on a farm, lights in a car, orΓÇöin this quickstartΓÇörooms on a building floor. You can create many twins of any given model type, such as multiple rooms that all use the Room model. You connect them with relationships into a *twin graph* that represents the full environment.
+Now that some model definitions have been uploaded to your Azure Digital Twins instance, you can use these definitions to create *digital twins* for the elements in your environment.
-In this section, you'll upload pre-created twins that are connected into a pre-created graph. The graph contains two floors and two rooms, connected in the following layout:
+Every digital twin in your solution represents an entity from the physical environment. You can create many twins based on the same model type, like multiple room twins that all use the *Room* model. In this quickstart, you'll need a digital twin for the building, and a digital twin for each floor and room in the building. The twins will be connected with relationships into a *twin graph* that represents the full building environment.
-* Floor0
- - Contains Room0
-* Floor1
- - Contains Room1
+In this section, you'll upload a pre-created graph containing a building twin, two floor twins, and two room twins.
#### Import the graph (.xlsx file)
-Follow these steps to import the graph (the *.xlsx* file you downloaded earlier).
+In Azure Digital Twins Explorer, follow these steps to import the sample graph (the *.xlsx* file you downloaded earlier).
1. In the **Twin Graph** panel, select the **Import Graph** icon that shows an arrow pointing into a cloud. :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-import.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The Import Graph button is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-import.png":::
-2. In the Open window, navigate to the *buildingScenario.xlsx* file you downloaded earlier. This file contains a description of the sample graph. Select **Open**.
+2. In the Open window, navigate to the *buildingScenario.xlsx* file you downloaded earlier. This file contains twin and relationship data for the sample graph. Select **Open**.
After a few seconds, Azure Digital Twins Explorer opens an **Import** view that shows a preview of the graph to be loaded.
Follow these steps to import the graph (the *.xlsx* file you downloaded earlier)
4. Azure Digital Twins Explorer will use the uploaded file to create the requested twins and relationships between them. Make sure you see the following dialog box indicating that the import was successful before moving on.
- :::row:::
- :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/import-success.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing a dialog box indicating graph import success." lightbox="media/quickstart-azure-digital-twins-explorer/import-success.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
- :::row-end:::
+ :::image type="content" source="media/quickstart-azure-digital-twins-explorer/import-success.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing a dialog box indicating graph import success." lightbox="media/quickstart-azure-digital-twins-explorer/import-success.png":::
Select **Close**.
Follow these steps to import the graph (the *.xlsx* file you downloaded earlier)
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/run-query.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting the 'Run Query' button in the upper-right corner of the window." lightbox="media/quickstart-azure-digital-twins-explorer/run-query.png":::
-This action runs the default query to select and display all digital twins. Azure Digital Twins Explorer retrieves all twins and relationships from the service. It draws the graph defined by them in the **Twin Graph** panel.
+This action runs the default query to select and display all digital twins. Azure Digital Twins Explorer retrieves all twins and relationships from the service. It draws the graph defined by them in the **Twin Graph** panel. Now you can see the uploaded graph of the sample scenario.
-## Explore the graph
-Now you can see the uploaded graph of the sample scenario.
+The circles (graph "nodes") represent digital twins. The lines represent relationships. The BuildingA twin "contains" the Floor0 and Floor1 twins, the Floor0 twin "contains" Room0, and the Floor1 twin "contains" Room1. If you're using a mouse, you can click and drag in the graph to move elements around.
+#### Add another twin
-The circles (graph "nodes") represent digital twins. The lines represent relationships. The Floor0 twin contains Room0, and the Floor1 twin contains Room1.
+You can continue to edit the structure of a digital twin graph after it's been created. Imagine that another room has recently been constructed on Floor1 of this example building. In this section, you'll add a new twin to the graph, to represent the new room.
-If you're using a mouse, you can click and drag in the graph to move elements around.
+Start by selecting the model that defines the type of twin you want to create. In the **Models** panel on the left, open the options menu for the **Room** model. Select **Create a Twin** to create a new instance of this model type.
-### View twin properties
-You can select a twin to see a list of its properties and their values in the **Twin Properties** panel.
+Enter *Room2* for the **New Twin name** and select **Save**. This will create a new digital twin, which is not yet connected by relationships to the rest of the graph.
-Here are the properties of Room0:
+Next, you'll add a relationship to show that Floor1 contains Room2. Use the CTRL/CMD or SHIFT keys to simultaneously select Floor1 and Room2 in the graph. When both twins are selected, right-click Room2 and choose **Add relationships**.
- :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room0.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting the Twin Properties panel, which shows $dtId, Temperature, and Humidity properties for Room0." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room0.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
-Room0 has a temperature of 70.
+This will open a **Create Relationship** dialog that's pre-filled with the details of a "contains" relationship from Floor1 to Room2. Select **Save**.
-Here are the properties of Room1:
- :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/properties-room1.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting the Twin Properties panel, which shows $dtId, Temperature, and Humidity properties for Room1." lightbox="media/quickstart-azure-digital-twins-explorer/properties-room1.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
+Now Room2 is connected in the graph. If you're using a mouse, you can click and drag twins in the graph to arrange them into a configuration that you like.
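For reference, here's a minimal sketch of the same manual step done programmatically with the azure-digitaltwins-core Python SDK; the endpoint and the Room model ID are placeholders that depend on your instance and the models you uploaded.

```python
# Minimal sketch (not one of the quickstart steps): create the Room2 twin and the
# "contains" relationship with the azure-digitaltwins-core SDK. The endpoint and
# the Room model ID are placeholders; use the model ID shown in the Models panel.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance-host-name>", DefaultAzureCredential()
)

# Create Room2 as an instance of the Room model
client.upsert_digital_twin(
    "Room2", {"$metadata": {"$model": "<Room-model-ID>"}}
)

# Connect Floor1 to Room2 with a "contains" relationship
client.upsert_relationship(
    "Floor1",
    "Floor1-contains-Room2",
    {"$relationshipName": "contains", "$targetId": "Room2"},
)
```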
-Room1 has a temperature of 80.
-### Query the graph
+### View twin properties
+
+You can select a twin to see a list of its properties and their values in the **Twin Properties** panel.
+
+Here are the properties of Room0. Notice that Room0 has a temperature of 70.
+
-In Azure Digital Twins, you can query your twin graph to answer questions about your environment, using the SQL-style *Azure Digital Twins query language*.
+Here are the properties of Room1. Notice that Room1 has a temperature of 80.
-One way to query the twins in your graph is by their properties. Querying based on properties can help answer questions about your environment. For example, you can find outliers in your environment that might need attention.
-In this section, you'll run a query to answer the question of how many twins in your environment have a temperature above 75.
+Room2 doesn't have values set for its properties yet, since this twin was created manually. To set its property values, edit the fields so that humidity is 50 and temperature is 72. Select the **Save** icon.
-To see the answer, run the following query in the **Query Explorer** panel.
+
+## Query changing IoT data
+
+In Azure Digital Twins, you can query your twin graph to answer questions about your environment, using the SQL-style *Azure Digital Twins query language*. One way to query the twins in your graph is by their properties. Querying based on properties can help answer questions about, or identify outliers in, your environment. In a fully connected, data-driven scenario, the properties of your twins will change frequently in response to IoT data from the sensors in your environment, or other connected data sources. In this quickstart, you'll change the values manually to simulate a changing sensor reading.
+
+Start by running a query to see how many twins in your environment have a temperature above 75. Run the following query in the **Query Explorer** panel.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="TemperatureQuery":::
-Recall from viewing the twin properties earlier that Room0 has a temperature of 70, and Room1 has a temperature of 80. The Floor twins don't have a Temperature property at all. For these reasons, only Room1 shows up in the results here.
+Recall from viewing the twin properties earlier that Room0 has a temperature reading of 70, Room1 has a temperature reading of 80, and Room2 has a temperature reading of 72. The building and floor twins don't have a temperature property at all. For these reasons, only Room1 shows up in the results here.
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/result-query-property-before.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing the results of property query, which shows only Room1." lightbox="media/quickstart-azure-digital-twins-explorer/result-query-property-before.png"::: >[!TIP]
-> Other comparison operators (<,>, =, or !=) are also supported within the preceding query. You can try plugging these operators, different values, or different twin properties into the query to try out answering your own questions.
+> Other comparison operators (<, >, =, or !=) are also supported in queries. You can try plugging these operators, different values, or different twin properties into the query to try out answering your own questions.
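If you want to run the same kind of property query outside of Azure Digital Twins Explorer, here's a minimal sketch with the azure-digitaltwins-core Python SDK; the endpoint is a placeholder, and the query text is an equivalent of the temperature query used above.

```python
# Minimal sketch: run an equivalent property query with the azure-digitaltwins-core
# SDK. The endpoint is a placeholder; the query text mirrors the temperature query
# used in this quickstart.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance-host-name>", DefaultAzureCredential()
)

query = "SELECT * FROM DIGITALTWINS T WHERE T.Temperature > 75"
for twin in client.query_twins(query):
    print(twin["$dtId"], twin.get("Temperature"))
```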
-## Edit data in the graph
+### Edit temperature data
-In a fully connected Azure Digital Twins solution, the twins in your graph can receive live updates from real IoT devices and update their properties to stay synchronized with your real-world environment. You can also manually set the properties of the twins in your graph, using Azure Digital Twins Explorer or another development interface (like the APIs or Azure CLI).
+In a fully connected Azure Digital Twins solution, the twins in your graph receive live updates from real IoT devices and other data sources, and update their properties automatically to stay synchronized with your real-world environment. For simplicity in this quickstart, you'll use Azure Digital Twins Explorer here to manually set the temperature reading of Room0 to 76.
-For simplicity, you'll use Azure Digital Twins Explorer here to manually set the temperature of Room0 to 76.
-
-First, rerun the following query to select all digital twins. This will display the full graph once more in the **Twin Graph** panel.
+First, rerun the following query to select all digital twins. This will display the full graph again in the **Twin Graph** panel.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="GetAllTwins"::: Select **Room0** to bring up its property list in the **Twin Properties** panel.
-The properties in this list are editable. Select the temperature value of **70** to enable entering a new value. Enter *76* and select the **Save** icon to update the temperature.
+Change the temperature value from **70** to *76*, and select the **Save** icon to update the temperature.
- :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png" alt-text="Screenshot of the Azure Digital Twins Explorer highlighting that the Twin Properties panel is showing properties that can be edited for Room0." lightbox="media/quickstart-azure-digital-twins-explorer/new-properties-room0.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
After a successful property update, you'll see a **Patch Information** box showing the patch code that was used behind the scenes with the [Azure Digital Twins APIs](concepts-apis-sdks.md) to make the update.
- :::column:::
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/patch-information.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing Patch Information for the temperature update." lightbox="media/quickstart-azure-digital-twins-explorer/patch-information.png":::
- :::column-end:::
- :::column:::
- :::column-end:::
**Close** the patch information.
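For reference, the same update can be made programmatically by sending a JSON Patch document, similar to the patch code shown in the Patch Information box. Here's a minimal sketch with the azure-digitaltwins-core Python SDK; the endpoint is a placeholder for your own instance.

```python
# Minimal sketch: apply the same temperature update programmatically as a JSON
# Patch document, similar to the patch shown in the Patch Information box.
# The endpoint is a placeholder for your own instance.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://<your-instance-host-name>", DefaultAzureCredential()
)

patch = [{"op": "replace", "path": "/Temperature", "value": 76}]
client.update_digital_twin("Room0", patch)
```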
-### Query to see the result
+### Query to see the new result
-To verify that the graph successfully registered your update to the temperature for Room0, rerun the query from earlier to get all the twins in the environment with a temperature above 75.
+To see the new temperature for Room0 reflected in the graph, rerun the query from earlier to get all the twins in the environment with a temperature above 75.
:::code language="sql" source="~/digital-twins-docs-samples/queries/examples.sql" id="TemperatureQuery":::
-Now that the temperature of Room0 has been changed from 70 to 76, both twins should show up in the result.
+Now that the temperature of Room0 has been changed from 70 to 76, both Room0 and Room1 should show up in the result.
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/result-query-property-after.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing the results of property query, which shows both Room0 and Room1." lightbox="media/quickstart-azure-digital-twins-explorer/result-query-property-after.png"::: ## Review and contextualize learnings
-In this quickstart, you created an Azure Digital Twins instance and used Azure Digital Twins Explorer to populate it with a sample scenario.
+In this quickstart, you created an Azure Digital Twins instance and used Azure Digital Twins Explorer to populate it with a sample scenario. You also added a digital twin manually.
-You then explored the graph, by:
+Then, you explored the graph, including...
* Using a query to answer a question about the scenario. * Editing a property on a digital twin. * Running the query again to see how the answer changed as a result of your update.
-The intent of this exercise is to demonstrate how you can use the Azure Digital Twins graph to answer questions about your environment, even as the environment continues to change.
+The intent of this exercise is to demonstrate how you can use the Azure Digital Twins graph to answer questions about your environment, especially as IoT environments continue to change.
+
+In this quickstart, you made the temperature update manually. It's common in Azure Digital Twins to connect digital twins to real IoT devices so that they receive updates automatically, based on telemetry data. You can also [connect other data sources](concepts-data-ingress-egress.md#data-ingress), integrating data from different systems and defining your own logic for how twins are updated. In this way, you can build a live graph that always reflects the real state of your environment. You can use queries to get information about what's happening in your environment in real time.
-In this quickstart, you made the temperature update manually. It's common in Azure Digital Twins to connect digital twins to real IoT devices so that they receive updates automatically, based on telemetry data. In this way, you can build a live graph that always reflects the real state of your environment. You can use queries to get information about what's happening in your environment in real time.
+You can also export Azure Digital Twins data to historical tracking, data analytics, and AI services to enable greater insights and perform environment simulations. Integrating Azure Digital Twins into your IoT solutions can help you more effectively track the past, control the present, and predict the future.
## Clean up resources
To clean up after this quickstart, choose which Azure Digital Twins resources yo
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/delete-instance.png" alt-text="Screenshot of the Overview page for an Azure Digital Twins instance in the Azure portal. The Delete button is highlighted.":::
-You may also want to delete the sample project folder from your local machine.
+You may also want to delete the sample project files from your local machine.
## Next steps
dns Dns Getstarted Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-powershell.md
description: Learn how to create a DNS zone and record in Azure DNS. This is a s
Previously updated : 04/23/2021 Last updated : 07/21/2022
frontdoor Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/endpoint.md
The endpoint domain is accessible when you associate it with a route.
When you delete and redeploy an endpoint, you might expect to get the same pseudorandom hash value, and therefore the same endpoint domain name. Front Door enables you to control how the pseudorandom hash values are reused on an endpoint-by-endpoint basis.
-An endpoint's domain can be reused within the same tenant, subscription, or resource group scope level. You can also choose to not allow the reuse of an endpoint domain. By default, your allow reuse of the endpoint domain within the same Azure Active Directory tenant.
+An endpoint's domain can be reused within the same tenant, subscription, or resource group scope level. You can also choose to not allow the reuse of an endpoint domain. By default, Front Door allows reuse of the endpoint domain within the same Azure Active Directory tenant.
You can use Bicep, an Azure Resource Manager template (ARM template), the Azure CLI, or Azure PowerShell to configure the scope level of the endpoint's domain reuse behavior. You can also configure it for all Front Door endpoints in your whole organization by using Azure Policy. The Azure portal uses the scope level you define through the command line once it has been changed.
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
Title: Azure HDInsight architecture with Enterprise Security Package description: Learn how to plan Azure HDInsight security with Enterprise Security Package.-
hdinsight Apache Hadoop On Premises Migration Best Practices Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-architecture.md
Title: 'Architecture: On-premises Apache Hadoop to Azure HDInsight' description: Learn architecture best practices for migrating on-premises Hadoop clusters to Azure HDInsight.-
hdinsight Apache Hadoop On Premises Migration Best Practices Data Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-data-migration.md
Title: 'Data migration: On-premises Apache Hadoop to Azure HDInsight' description: Learn data migration best practices for migrating on-premises Hadoop clusters to Azure HDInsight.-
There are two main options to migrate data from on-premises to Azure environment
* Transfer data over network with TLS * Over internet - You can transfer data to Azure storage over a regular internet connection using any one of several tools, such as Azure Storage Explorer, AzCopy, Azure PowerShell, and Azure CLI (see the upload sketch after this list). For more information, see [Moving data to and from Azure Storage](../../storage/common/storage-choose-data-transfer-solution.md).
- * Express Route - ExpressRoute is an Azure service that lets you create private connections between Microsoft datacenters and infrastructure thatΓÇÖs on your premises or in a colocation facility. ExpressRoute connections don't go over the public Internet, and offer higher security, reliability, and speeds with lower latencies than typical connections over the Internet. For more information, see [Create and modify an ExpressRoute circuit](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md).
+ * Express Route - ExpressRoute is an Azure service that lets you create private connections between Microsoft datacenters and infrastructure that's on your premises or in a colocation facility. ExpressRoute connections don't go over the public Internet, and offer higher security, reliability, and speeds with lower latencies than typical connections over the Internet. For more information, see [Create and modify an ExpressRoute circuit](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md).
* Data Box online data transfer - Data Box Edge and Data Box Gateway are online data transfer products that act as network storage gateways to manage data between your site and Azure. Data Box Edge, an on-premises network device, transfers data to and from Azure and uses artificial intelligence (AI)-enabled edge compute to process data. Data Box Gateway is a virtual appliance with storage gateway capabilities. For more information, see [Azure Data Box Documentation - Online Transfer](../../databox-online/index.yml). * Shipping data Offline
- Data Box offline data transfer - Data Box, Data Box Disk, and Data Box Heavy devices help you transfer large amounts of data to Azure when the network isnΓÇÖt an option. These offline data transfer devices are shipped between your organization and the Azure datacenter. They use AES encryption to help protect your data in transit, and they undergo a thorough post-upload sanitization process to delete your data from the device. For more information on the Data Box offline transfer devices, see [Azure Data Box Documentation - Offline Transfer](../../databox/index.yml). For more information on migration of Hadoop clusters, see [Use Azure Data Box to migrate from an on-premises HDFS store to Azure Storage](../../storage/blobs/data-lake-storage-migrate-on-premises-hdfs-cluster.md).
+ Data Box offline data transfer - Data Box, Data Box Disk, and Data Box Heavy devices help you transfer large amounts of data to Azure when the network isn't an option. These offline data transfer devices are shipped between your organization and the Azure datacenter. They use AES encryption to help protect your data in transit, and they undergo a thorough post-upload sanitization process to delete your data from the device. For more information on the Data Box offline transfer devices, see [Azure Data Box Documentation - Offline Transfer](../../databox/index.yml). For more information on migration of Hadoop clusters, see [Use Azure Data Box to migrate from an on-premises HDFS store to Azure Storage](../../storage/blobs/data-lake-storage-migrate-on-premises-hdfs-cluster.md).
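As an illustration of the over-the-internet option above, here's a minimal sketch that uploads a single file to Azure Blob Storage with the azure-storage-blob Python SDK; the account URL, container, and file names are placeholders, and tools such as AzCopy or Azure Storage Explorer accomplish the same transfer.

```python
# Minimal sketch of the over-the-internet option: upload one file to Azure Blob
# Storage with the azure-storage-blob SDK. Account URL, container, and file names
# are placeholders; AzCopy or Azure Storage Explorer achieve the same result.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    "https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
container = service.get_container_client("<container>")

with open("/data/part-00000.parquet", "rb") as data:
    container.upload_blob(name="migrated/part-00000.parquet", data=data, overwrite=True)
```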
The following table has approximate data transfer duration based on the data volume and network bandwidth. Use a Data Box if the data migration is expected to take more than three weeks.
hdinsight Apache Hadoop On Premises Migration Best Practices Security Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-security-devops.md
Title: 'Security: Migrate on-premises Apache Hadoop to Azure HDInsight' description: Learn security and DevOps best practices for migrating on-premises Hadoop clusters to Azure HDInsight.-
hdinsight Apache Hadoop On Premises Migration Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-storage.md
Title: 'Storage: Migrate on-premises Apache Hadoop to Azure HDInsight' description: Learn storage best practices for migrating on-premises Hadoop clusters to Azure HDInsight.-
hdinsight Apache Hadoop On Premises Migration Motivation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-motivation.md
Title: 'Benefits: Migrate on-premises Apache Hadoop to Azure HDInsight' description: Learn the motivation and benefits for migrating on-premises Hadoop clusters to Azure HDInsight.- - Last updated 04/28/2022
hdinsight Hdinsight Sync Aad Users To Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sync-aad-users-to-cluster.md
Title: Synchronize Azure Active Directory users to HDInsight cluster description: Synchronize authenticated users from Azure Active Directory to an HDInsight cluster.-
hdinsight Hdinsight Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upgrade-cluster.md
Title: Migrate cluster to a newer version description: Learn guidelines to migrate your Azure HDInsight cluster to a newer version.-
As mentioned above, Microsoft recommends that HDInsight clusters be regularly mi
* The cluster version is [Retired](hdinsight-retired-versions.md) or in [Basic support](hdinsight-36-component-versioning.md) and you are having a cluster issue that would be resolved with a newer version. * The root cause of a cluster issue is determined to be related to an undersized VM. [View Microsoft's recommended node configuration](hdinsight-supported-node-configuration.md). * A customer opens a support case and the Microsoft engineering team determines the issue has already been fixed in a newer cluster version.
-* A default metastore database (Ambari, Hive, Oozie, Ranger) has reached itΓÇÖs utilization limit. Microsoft will ask you to recreate the cluster using a [custom metastore](hdinsight-use-external-metadata-stores.md#custom-metastore) database.
+* A default metastore database (Ambari, Hive, Oozie, Ranger) has reached its utilization limit. Microsoft will ask you to recreate the cluster using a [custom metastore](hdinsight-use-external-metadata-stores.md#custom-metastore) database.
* The root cause of a cluster issue is due to an **Unsupported Operation**. Here are some of the common unsupported operations: * **Moving or Adding a service in Ambari**. When viewing information on the cluster services in Ambari, one of the actions available from the Service Actions menu is **Move [Service Name]**. Another action is **Add [Service Name]**. Both of these options are unsupported. * **Python package corruption**. HDInsight clusters depend on the built-in Python environments, Python 2.7 and Python 3.5. Directly installing custom packages in those default built-in environments may cause unexpected library version changes and break the cluster. Learn how to [safely install custom external Python packages](./spark/apache-spark-python-package-installation.md#safely-install-external-python-packages) for your Spark applications.
hdinsight Apache Hive Warehouse Connector Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-operations.md
Title: Apache Spark operations supported by Hive Warehouse Connector in Azure HDInsight description: Learn about the different capabilities of Hive Warehouse Connector on Azure HDInsight.--++ Previously updated : 05/22/2020 Last updated : 07/22/2022 # Apache Spark operations supported by Hive Warehouse Connector in Azure HDInsight
hdinsight Hive Llap Sizing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-llap-sizing-guide.md
- Last updated 07/19/2022
hdinsight Llap Schedule Based Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/llap-schedule-based-autoscale-best-practices.md
- Last updated 05/25/2022
Feature Supportability with HDInsight 4.0 Interactive Query(LLAP) Autoscale
> [!NOTE]
-> It's recommended to have sufficient gap between two schedules so that data cache is efficiently utilized i.e schedule scale upΓÇÖs when there is peak usage and scale downΓÇÖs when there is no usage.
+> It's recommended to have a sufficient gap between two schedules so that the data cache is efficiently utilized, that is, schedule scale-ups when there is peak usage and scale-downs when there is no usage.
### **Interactive Query Autoscale FAQs**
hdinsight Troubleshoot Workload Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/troubleshoot-workload-management-issues.md
- Last updated 07/19/2022
The above is a design limitation of WLM feature. You can work around this featur
3. When a WLM Tez AM is manually killed, some of the queries may fail with the following pattern. <br/>These queries should run without any issues on resubmission. ``` java.util.concurrent.CancellationException: Task was cancelled.
- at com.google.common.util.concurrent.AbstractFuture.cancellationExceptionWithCause(AbstractFuture.java:1349) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:550) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:513) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:90) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:237) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.Futures.getDone(Futures.java:1064) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1013) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1137) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:957) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.AbstractFuture.cancel(AbstractFuture.java:611) ~[guava-28.0-jre.jar:?]
- at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.cancel(AbstractFuture.java:118) ~[guava-28.0-jre.jar:?]
- at org.apache.hadoop.hive.ql.exec.tez.WmTezSession$TimeoutRunnable.run(WmTezSession.java:264) ~[hive-exec-3.1.3.4.1.3.6.jar:3.1.3.4.1.3.6]
- at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_275]
- at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_275]
- at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_275]
- at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_275]
- at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_275]
- at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_275]
- at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275]
+ at com.google.common.util.concurrent.AbstractFuture.cancellationExceptionWithCause(AbstractFuture.java:1349) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:550) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:513) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:90) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:237) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.Futures.getDone(Futures.java:1064) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1013) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1137) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:957) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.AbstractFuture.cancel(AbstractFuture.java:611) ~[guava-28.0-jre.jar:?]
+ at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.cancel(AbstractFuture.java:118) ~[guava-28.0-jre.jar:?]
+ at org.apache.hadoop.hive.ql.exec.tez.WmTezSession$TimeoutRunnable.run(WmTezSession.java:264) ~[hive-exec-3.1.3.4.1.3.6.jar:3.1.3.4.1.3.6]
+ at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_275]
+ at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_275]
+ at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_275]
+ at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_275]
+ at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_275]
+ at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_275]
+ at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275]
``` ## Known issues
hdinsight Apache Kafka Auto Create Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-auto-create-topics.md
Title: Enable automatic topic creation in Apache Kafka - Azure HDInsight description: Learn how to configure Apache Kafka on HDInsight to automatically create topics. You can configure Kafka by setting `auto.create.topics.enable` to true through Ambari. Or during cluster creation through PowerShell or Resource Manager templates.-
hdinsight Apache Spark Intellij Tool Failure Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-failure-debug.md
Title: 'Debug Spark job with IntelliJ Azure Toolkit (preview) - HDInsight' description: Guidance using HDInsight Tools in Azure Toolkit for IntelliJ to debug applications keywords: debug remotely intellij, remote debugging intellij, ssh, intellij, hdinsight, debug intellij, debugging--+ Last updated 06/23/2022
hdinsight Apache Spark Jupyter Spark Use Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-spark-use-bicep.md
Title: 'Quickstart: Create Apache Spark cluster using Bicep - Azure HDInsight' description: This quickstart shows how to use Bicep to create an Apache Spark cluster in Azure HDInsight, and run a Spark SQL query.-- Previously updated : 05/02/2022++ Last updated : 07/22/2022
hdinsight Apache Spark Manage Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-manage-dependencies.md
Title: Manage Spark application dependencies on Azure HDInsight description: This article provides an introduction of how to manage Spark dependencies in HDInsight Spark cluster for PySpark and Scala applications.--++ Previously updated : 09/09/2020 Last updated : 07/22/2022 # Customer intent: As a developer for Apache Spark and Apache Spark in Azure HDInsight, I want to learn how to manage my Spark application dependencies and install packages on my HDInsight cluster.
hdinsight Apache Troubleshoot Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-troubleshoot-spark.md
Title: Troubleshoot Apache Spark in Azure HDInsight description: Get answers to common questions about working with Apache Spark and Azure HDInsight. - Last updated 08/22/2019
hdinsight Spark Dotnet Version Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-dotnet-version-update.md
Title: Updating .NET for Apache Spark to version v1.0 in HDI description: Learn about updating .NET for Apache Spark version to 1.0 in HDI and how that affects your existing code and clusters.--++ Previously updated : 01/05/2021 Last updated : 07/22/2022 # Updating .NET for Apache Spark to version v1.0 in HDInsight
This document talks about the first major version of [.NET for Apache Spark](htt
## About .NET for Apache Spark version 1.0.0
-This is the first [major official release](https://github.com/dotnet/spark/releases/tag/v1.0.0) of .NET for Apache Spark and provides DataFrame API completeness for Spark 2.4.x as well as Spark 3.0.x along with other features. For a complete list of all features, improvements and bug fixes, see the official [v1.0.0 release notes](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md).
-Another important thing to note is that this version is **not** compatible with prior versions of `Microsoft.Spark` and `Microsoft.Spark.Worker`. Check out the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) if you are planning to upgrade your .NET for Apache Spark application to be compatible with v1.0.0.
+This is the first [major official release](https://github.com/dotnet/spark/releases/tag/v1.0.0) of .NET for Apache Spark and provides DataFrame API completeness for Spark 2.4.x and Spark 3.0.x along with other features. For a complete list of all features, improvements and bug fixes, see the official [v1.0.0 release notes](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md).
+Another important thing to note is that this version is **not** compatible with prior versions of `Microsoft.Spark` and `Microsoft.Spark.Worker`. Check out the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) if you're planning to upgrade your .NET for Apache Spark application to be compatible with v1.0.0.
## Using .NET for Apache Spark v1.0 in HDInsight
-While current HDI clusters will not be affected (i.e. they will still have the same version as before), newly created HDI clusters will carry this latest v1.0.0 version of .NET for Apache Spark. What this means if:
+While current HDI clusters won't be affected (that is, they'll still have the same version as before), newly created HDI clusters will carry this latest v1.0.0 version of .NET for Apache Spark. Here's what this means:
-- **You have an older HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), you will have to update the `Microsoft.Spark.Worker` version on your HDI cluster. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](#changing-net-for-apache-spark-version-on-hdinsight).
+- **You have an older HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), you'll have to update the `Microsoft.Spark.Worker` version on your HDI cluster. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](#changing-net-for-apache-spark-version-on-hdinsight).
If you don't want to update the current version of .NET for Apache Spark in your application, no further steps are necessary. -- **You have a new HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), no steps are needed to change the worker on HDI, however you will have to refer to the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand the steps needed to update your code and pipelines.
-If you don't want to change the current version of .NET for Apache Spark in your application, you would have to change the version on your HDI cluster from v1.0 (default on new clusters) to whichever version you are using. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](spark-dotnet-version-update.md#changing-net-for-apache-spark-version-on-hdinsight).
+- **You have a new HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), no steps are needed to change the worker on HDI, however you'll have to refer to the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand the steps needed to update your code and pipelines.
+If you don't want to change the current version of .NET for Apache Spark in your application, you need to change the version on your HDI cluster from v1.0 (default on new clusters) to whichever version you're using. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](spark-dotnet-version-update.md#changing-net-for-apache-spark-version-on-hdinsight).
## Changing .NET for Apache Spark version on HDInsight
If you don't want to change the current version of .NET for Apache Spark in your
2. Download [install-worker.sh](https://github.com/dotnet/spark/blob/master/deployment/install-worker.sh) script to install the worker binaries downloaded in Step 1 to all the worker nodes of your HDI cluster.
-3. Upload the above mentioned files to the Azure Storage account your cluster has access to. You can refer to [the .NET for Apache Spark HDI deployment article](/dotnet/spark/tutorials/hdinsight-deployment#upload-files-to-azure) for more details.
+3. Upload the previously mentioned files to the Azure Storage account your cluster has access to. For more information, see the [.NET for Apache Spark HDI deployment article](/dotnet/spark/tutorials/hdinsight-deployment#upload-files-to-azure).
-4. Run the `install-worker.sh` script on all worker nodes of your cluster, using Script actions. Refer to [the .NET for Apache Spark HDI deployment article](/dotnet/spark/tutorials/hdinsight-deployment#run-the-hdinsight-script-action) for more information.
+4. Run the `install-worker.sh` script on all worker nodes of your cluster, using Script actions. For more information, see the [.NET for Apache Spark HDI deployment article](/dotnet/spark/tutorials/hdinsight-deployment#run-the-hdinsight-script-action).
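If you prefer to script step 4, the Azure CLI can submit the script action for you. The sketch below uses placeholder names for the cluster, resource group, and script URI, and the `--script-parameters` value is illustrative only; check the deployment article above for the exact arguments that `install-worker.sh` expects.

```azurecli-interactive
# Hypothetical example: run install-worker.sh on all worker nodes as an HDInsight script action.
# Replace the cluster name, resource group, script URI, and parameters with your own values.
az hdinsight script-action execute \
    --cluster-name mysparkcluster \
    --resource-group myresourcegroup \
    --name "InstallSparkDotnetWorker" \
    --script-uri "https://mystorageaccount.blob.core.windows.net/scripts/install-worker.sh" \
    --script-parameters "<arguments expected by install-worker.sh>" \
    --roles workernode
```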
### Update your application to use specific version
You can update your .NET for Apache Spark application to use a specific version
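For example, pinning the `Microsoft.Spark` NuGet package to an explicit version in your project is one way to control which version your application uses. The project file name and version number below are placeholders; use the version that matches the `Microsoft.Spark.Worker` installed on your cluster.

```bash
# Illustrative only: pin the Microsoft.Spark package to a specific version in your project.
dotnet add MySparkApp.csproj package Microsoft.Spark --version 1.0.0
```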
### Will my existing HDI cluster with version < 1.0.0 start failing with the new release?
-Existing HDI clusters will continue to have the same previous version for .NET for Apache Spark and your existing application (having previous version of Spark .NET) will not be affected.
+Existing HDI clusters will continue to have the same previous version for .NET for Apache Spark and your existing application (having previous version of Spark .NET) won't be affected.
## Next steps
-[Deploy your .NET for Apache Spark application on HDInsight](/dotnet/spark/tutorials/hdinsight-deployment)
+[Deploy your .NET for Apache Spark application on HDInsight](/dotnet/spark/tutorials/hdinsight-deployment)
healthcare-apis Iot Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-data-flow.md
Previously updated : 07/19/2022 Last updated : 07/22/2022
-# The MedTech service data flows
+# MedTech service data flow
-This article provides an overview of the MedTech service data flows. You'll learn about the different data processing stages within the MedTech service that transforms device data into Fast Healthcare Interoperability Resources (FHIR&#174;)-based [Observation](https://www.hl7.org/fhir/observation.html) resources.
+This article provides an overview of the MedTech service data flow. You'll learn about the different data processing stages within the MedTech service that transforms device data into Fast Healthcare Interoperability Resources (FHIR&#174;)-based [Observation](https://www.hl7.org/fhir/observation.html) resources.
-Data from health-related devices or medical devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The health data path follows these steps in this order: ingest, normalize, group, transform, and persist. In this data flow, health data is retrieved from the device in the first step of ingestion. After the data is received, it's processed or normalized per user-selected or user-created schema templates, so that the health data is simpler to process and can be grouped. Health data is grouped into three Operate parameters. After the health data is normalized and grouped, it can be processed or transformed through FHIR destination mappings, and then saved or persisted on the FHIR service.
+Data from health-related devices or medical devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The health data path follows these steps in this order: ingest, normalize, group, transform, and persist. Health data is retrieved from the device in the first step of ingestion. After the data is received, it's processed, or normalized, per a user-selected or user-created schema template called the device mapping. Normalized health data is simpler to process and can be grouped. In the next step, health data is grouped into three Operate parameters. After the health data is normalized and grouped, it can be processed or transformed through FHIR destination mappings, and then saved or persisted on the FHIR service.
-This article goes into more depth about each step in the data flow. The next steps are [Deploy the MedTech service using the Azure portal](deploy-iot-connector-in-azure.md) by using Device mappings (the normalization step) and FHIR destination mappings (the transformation step).
+This article goes into more depth about each step in the data flow. The next steps are [Deploy the MedTech service using the Azure portal](deploy-iot-connector-in-azure.md) by using a device mapping (the normalization step) and a FHIR destination mapping (the transformation step).
-This next section of the article describes the stages that IoMT (Internet of Medical Things) data goes through once received from an event hub and into the MedTech service.
+This next section of the article describes the stages that IoMT (Internet of Medical Things) device data goes through as it's processed by the MedTech service.
## Ingest
-Ingest is the first stage where device data is received into the MedTech service. The ingestion endpoint for device data is hosted on an [Azure Event Hubs](../../event-hubs/index.yml). Azure Event Hubs platform supports high scale and throughput with ability to receive and process millions of messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device data gets processed.
+Ingest is the first stage where device data is received into the MedTech service. The ingestion endpoint for device data is hosted on an [Azure event hub](../../event-hubs/index.yml). The Azure Event Hubs platform supports high scale and throughput with the ability to receive and process millions of messages per second. It also enables the MedTech service to consume messages asynchronously, removing the need for devices to wait while device data gets processed.
> [!NOTE] > JSON is the only supported format at this time for device data. ## Normalize
-Normalize is the next stage where device data is retrieved from the above event hub and processed using the Device mappings. This mapping process results in transforming device data into a normalized schema.
+Normalize is the next stage where device data is retrieved from the above event hub and processed using the device mapping. This mapping process results in transforming device data into a normalized schema.
The normalization process not only simplifies data processing at later stages but also provides the capability to project one input message into multiple normalized messages. For instance, a device could send multiple vital signs for body temperature, pulse rate, blood pressure, and respiration rate in a single message. This input message would create four separate FHIR resources. Each resource would represent a different vital sign, with the input message projected into four different normalized messages.
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-metrics-diagnostics-export.md
Title: Configure the MedTech service Diagnostic settings for metrics export - Azure Health Data Services
-description: This article explains how to configure the MedTech service Diagnostic settings for metrics exporting.
+ Title: How to configure the MedTech service diagnostic settings for metrics export - Azure Health Data Services
+description: This article explains how to configure the MedTech service diagnostic settings for metrics exporting.
Previously updated : 02/16/2022 Last updated : 07/22/2022
-# Configure diagnostic setting for the MedTech service metrics exporting
+# How to configure diagnostic settings for exporting the MedTech service metrics
-In this article, you'll learn how to configure the diagnostic setting for MedTech service to export metrics to different destinations for audit, analysis, or backup.
+In this article, you'll learn how to configure the diagnostic setting for the MedTech service to export metrics to different destinations (for example: to Azure storage or an event hub) for audit, analysis, or backup.
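If you'd rather script this instead of following the portal steps below, an Azure CLI sketch like the following should produce an equivalent diagnostic setting. The MedTech service resource ID below is a placeholder, and the `AllMetrics` category is an assumption; verify both against your own deployment before relying on this.

```azurecli-interactive
# Hypothetical sketch: route MedTech service metrics to a Log Analytics workspace.
# The resource ID is a placeholder and the "AllMetrics" category is an assumption.
az monitor diagnostic-settings create \
    --name "ExportMedTechMetrics" \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HealthcareApis/workspaces/<workspace-name>/iotconnectors/<medtech-service-name>" \
    --metrics '[{"category": "AllMetrics", "enabled": true}]' \
    --workspace "<log-analytics-workspace-resource-id>"
```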
## Create diagnostic setting for the MedTech service 1. To enable metrics export for the MedTech service, select **MedTech service** in your workspace.
In this article, you'll learn how to configure the diagnostic setting for MedTec
For more information about how to work with diagnostics logs, see the [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md). ## Conclusion
-Having access to metrics is essential for monitoring and troubleshooting. MedTech service allows you to do these actions through the export of metrics.
+Having access to the MedTech service metrics is essential for monitoring and troubleshooting. The MedTech service allows you to do these actions through the export of metrics.
## Next steps
To view the frequently asked questions (FAQs) about the MedTech service, see
>[!div class="nextstepaction"] >[MedTech service FAQs](iot-connector-faqs.md)
-(FHIR&#174;) is a registered trademark of HL7 and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-transparent-gateway.md
For a device to act as a gateway, it needs to securely connect to its downstream
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
-A downstream device can be any application or platform that has an identity created with the [Azure IoT Hub](../iot-hub/index.yml) cloud service. These applications often use the [Azure IoT device SDK](../iot-hub/iot-hub-devguide-sdks.md). A downstream device could even be an application running on the IoT Edge gateway device itself. However, an IoT Edge device cannot be downstream of an IoT Edge gateway.
+A downstream device can be any application or platform that has an identity created with the [Azure IoT Hub](../iot-hub/index.yml) cloud service. These applications often use the [Azure IoT device SDK](../iot-hub/iot-hub-devguide-sdks.md). A downstream device could even be an application running on the IoT Edge gateway device itself. However, an IoT Edge device can't be downstream of an IoT Edge gateway.
:::moniker-end <!-- end 1.1 -->
The following steps walk you through the process of creating the certificates an
## Prerequisites
+# [IoT Edge](#tab/iotedge)
+ A Linux or Windows device with IoT Edge installed.
-If you do not have a device ready, you can create one in an Azure virtual machine. Follow the steps in [Deploy your first IoT Edge module to a virtual Linux device](quickstart-linux.md) to create an IoT Hub, create a virtual machine, and configure the IoT Edge runtime.
+If you don't have a device ready, you can create one in an Azure virtual machine. Follow the steps in [Deploy your first IoT Edge module to a virtual Linux device](quickstart-linux.md) to create an IoT Hub, create a virtual machine, and configure the IoT Edge runtime.
+
+# [IoT Edge for Linux on Windows](#tab/eflow)
+
+>[!WARNING]
+> Because the IoT Edge for Linux on Windows (EFLOW) virtual machine needs to be accessible from external devices, ensure to deploy EFLOW with an _external_ virtual switch. For more information about EFLOW networking configurations, see [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md).
+
+A Windows device with IoT Edge for Linux on Windows installed.
+
+If you don't have a device ready, you should create one before continuing with this guide. Follow the steps in [Create and provision an IoT Edge for Linux on Windows device using symmetric keys](./how-to-provision-single-device-linux-on-windows-symmetric.md) to create an IoT Hub, create an EFLOW virtual machine, and configure the IoT Edge runtime.
++ ## Set up the device CA certificate
Have the following files ready:
For production scenarios, you should generate these files with your own certificate authority. For development and test scenarios, you can use demo certificates.
+### Create demo certificates
+ If you don't have your own certificate authority and want to use demo certificates, follow the instructions in [Create demo certificates to test IoT Edge device features](how-to-create-test-certificates.md) to create your files. On that page, you need to take the following steps:
- 1. To start, set up the scripts for generating certificates on your device.
- 2. Create a root CA certificate. At the end of those instructions, you'll have a root CA certificate file:
- * `<path>/certs/azure-iot-test-only.root.ca.cert.pem`.
- 3. Create IoT Edge device CA certificates. At the end of those instructions, you'll have a device CA certificate and its private key:
- * `<path>/certs/iot-edge-device-ca-<cert name>-full-chain.cert.pem` and
- * `<path>/private/iot-edge-device-ca-<cert name>.key.pem`
+1. To start, set up the scripts for generating certificates on your device.
+1. Create a root CA certificate. At the end of those instructions, you'll have a root CA certificate file `<path>/certs/azure-iot-test-only.root.ca.cert.pem`.
+1. Create IoT Edge device CA certificates. At the end of those instructions, you'll have a device CA certificate `<path>/certs/iot-edge-device-ca-<cert name>-full-chain.cert.pem` and its private key `<path>/private/iot-edge-device-ca-<cert name>.key.pem`.
+
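As a rough illustration of those instructions, the demo-certificate helper script is typically invoked as shown below. The subcommand names are assumptions recalled from the demo scripts and may differ between script versions, so follow the linked article for the authoritative steps.

```bash
# Assumed invocation of the IoT Edge demo certificate scripts (verify against the linked article).
# Run from the directory where you set up the certificate generation scripts.
./certGen.sh create_root_and_intermediate
./certGen.sh create_edge_device_ca_certificate "MyEdgeDeviceCA"

# Expected outputs, based on the file names listed above:
#   certs/azure-iot-test-only.root.ca.cert.pem
#   certs/iot-edge-device-ca-MyEdgeDeviceCA-full-chain.cert.pem
#   private/iot-edge-device-ca-MyEdgeDeviceCA.key.pem
```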
+### Copy certificates to device
+
+# [IoT Edge](#tab/iotedge)
+
+If you created the certificates on a different machine, copy them over to your IoT Edge device and then proceed with the next steps. You can use a USB drive, a service like [Azure Key Vault](../key-vault/general/overview.md), or a function like [Secure file copy](https://www.ssh.com/ssh/scp/). Choose the method that best matches your scenario.
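For example, if you choose secure file copy, a transfer from your workstation to the device might look like the following sketch; the device hostname, user name, and destination folder are placeholders.

```bash
# Placeholder hostname, user, and paths: copy the device CA certificate, its private key,
# and the root CA certificate to the IoT Edge device over SSH.
scp <path>/certs/iot-edge-device-ca-<cert name>-full-chain.cert.pem iotedge-user@my-edge-device:~/certs/
scp <path>/private/iot-edge-device-ca-<cert name>.key.pem iotedge-user@my-edge-device:~/certs/
scp <path>/certs/azure-iot-test-only.root.ca.cert.pem iotedge-user@my-edge-device:~/certs/
```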
+
+# [IoT Edge for Linux on Windows](#tab/eflow)
+
+Now, you need to copy the certificates to the Azure IoT Edge for Linux on Windows virtual machine.
+
+1. Open an elevated _PowerShell_ session by starting with **Run as Administrator**.
+
+ Connect to the EFLOW virtual machine.
+
+ ```powershell
+ Connect-EflowVm
+ ```
+
+1. Create the certificates directory. You can select any writeable directory. For this tutorial, we'll use the _iotedge-user_ home folder.
+
+ ```bash
+ cd ~
+ mkdir certs
+ cd certs
+ mkdir certs
+ mkdir private
+ ```
+
+1. Exit the EFLOW VM connection.
+
+ ```bash
+ exit
+ ```
+
+1. Copy the certificates to the EFLOW virtual machine.
-If you created the certificates on a different machine, copy them over to your IoT Edge device then proceed with the next steps.
+ ```powershell
+ # Copy the IoT Edge device CA certificates
+ Copy-EflowVMFile -fromFile <path>\certs\iot-edge-device-ca-<cert name>-full-chain.cert.pem -toFile /home/iotedge-user/certs/certs/iot-edge-device-ca-<cert name>-full-chain.cert.pem -pushFile
+ Copy-EflowVMFile -fromFile <path>\private\iot-edge-device-ca-<cert name>.key.pem -toFile /home/iotedge-user/certs/private/iot-edge-device-ca-<cert name>.key.pem -pushFile
+
+ # Copy the root CA certificate
+ Copy-EflowVMFile -fromFile <path>\certs\azure-iot-test-only.root.ca.cert.pem -toFile /home/iotedge-user/certs/certs/azure-iot-test-only.root.ca.cert.pem -pushFile
+ ```
+
+1. Invoke the following commands on the EFLOW VM to grant the iotedge user permissions to the certificate files, since `Copy-EflowVMFile` copies files with root-only access permissions.
+
+ ```powershell
+ Invoke-EflowVmCommand "sudo chown -R iotedge /home/iotedge-user/certs/"
+ Invoke-EflowVmCommand "sudo chmod 0644 /home/iotedge-user/certs/"
+ ```
+
+-
+
+### Configure certificates on device
<!-- 1.1 --> :::moniker range="iotedge-2018-06"
If you created the certificates on a different machine, copy them over to your I
* Windows: `C:\ProgramData\iotedge\config.yaml` * Linux: `/etc/iotedge/config.yaml`
+ * IoT Edge for Linux on Windows: `/etc/iotedge/config.yaml`
+
+ >[!TIP]
+ > If you are using IoT Edge for Linux on Windows (EFLOW), you'll have to connect to the EFLOW virtual machine and change the file inside the VM. You can connect to the EFLOW VM using the PowerShell cmdlet `Connect-EflowVm` and then use your preferred editor.
1. Find the **Certificate settings** section of the file. Uncomment the four lines starting with **certificates:** and provide the file URIs to your three files as values for the following properties: * **device_ca_cert**: device CA certificate * **device_ca_pk**: device CA private key * **trusted_ca_certs**: root CA certificate
- Make sure there is no preceding whitespace on the **certificates:** line, and that the other lines are indented by two spaces.
+ Make sure there's no preceding whitespace on the **certificates:** line, and that the other lines are indented by two spaces.
1. Save and close the file. 1. Restart IoT Edge. * Windows: `Restart-Service iotedge` * Linux: `sudo systemctl restart iotedge`
+ * IoT Edge for Linux on Windows: `sudo systemctl restart iotedge`
+ :::moniker-end <!-- end 1.1 --> <!-- iotedge-2020-11 --> :::moniker range=">=iotedge-2020-11"
-1. On your IoT Edge device, open the config file: `/etc/aziot/config.toml`
+1. On your IoT Edge device, open the config file: `/etc/aziot/config.toml`. If you're using IoT Edge for Linux on Windows, you'll have to connect to the EFLOW virtual machine using the `Connect-EflowVm` PowerShell cmdlet.
>[!TIP] >If the config file doesn't exist on your device yet, then use `/etc/aziot/config.toml.edge.template` as a template to create one.
To deploy the IoT Edge hub module and configure it with routes to handle incomin
Standard IoT Edge devices don't need any inbound connectivity to function, because all communication with IoT Hub is done through outbound connections. Gateway devices are different because they need to receive messages from their downstream devices. If a firewall is between the downstream devices and the gateway device, then communication needs to be possible through the firewall as well.
-For a gateway scenario to work, at least one of the IoT Edge hub's supported protocols must be open for inbound traffic from downstream devices. The supported protocols are MQTT, AMQP, HTTPS, MQTT over WebSockets, and AMQP over WebSockets.
+# [IoT Edge](#tab/iotedge)
+
+For a gateway scenario to work, at least one of the IoT Edge Hub's supported protocols must be open for inbound traffic from downstream devices. The supported protocols are MQTT, AMQP, HTTPS, MQTT over WebSockets, and AMQP over WebSockets.
| Port | Protocol | | - | -- |
For a gateway scenario to work, at least one of the IoT Edge hub's supported pro
| 5671 | AMQP | | 443 | HTTPS <br> MQTT+WS <br> AMQP+WS |
+# [IoT Edge for Linux on Windows](#tab/eflow)
+
+For a gateway scenario to work, at least one of the IoT Edge Hub's supported protocols must be open for inbound traffic from downstream devices. The supported protocols are MQTT, AMQP, HTTPS, MQTT over WebSockets, and AMQP over WebSockets.
+
+| Port | Protocol |
+| - | -- |
+| 8883 | MQTT |
+| 5671 | AMQP |
+| 443 | HTTPS <br> MQTT+WS <br> AMQP+WS |
+
+Finally, you must open the EFLOW virtual machine ports. You can open the three ports mentioned above using the following PowerShell cmdlets.
+
+ ```powershell
+ # Open MQTT port
+ Invoke-EflowVmCommand "sudo iptables -A INPUT -p tcp --dport 8883 -j ACCEPT"
+
+ # Open AMQP port
+ Invoke-EflowVmCommand "sudo iptables -A INPUT -p tcp --dport 5671 -j ACCEPT"
+
+ # Open HTTPS/MQTT+WS/AMQP+WS port
+ Invoke-EflowVmCommand "sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT"
+
+ # Save the iptables rules
+ Invoke-EflowVmCommand "sudo iptables-save | sudo tee /etc/systemd/scripts/ip4save"
+ ```
++ ## Next steps Now that you have an IoT Edge device set up as a transparent gateway, you need to configure your downstream devices to trust the gateway and send messages to it. Continue on to [Authenticate a downstream device to Azure IoT Hub](how-to-authenticate-downstream-device.md) for the next steps in setting up your transparent gateway scenario.
iot-edge Production Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/production-checklist.md
description: Learn how to take your Azure IoT Edge solution from development to
Previously updated : 03/01/2021 Last updated : 07/22/2022
If your networking setup requires that you explicitly permit connections made fr
* **IoT Edge hub** opens a single persistent AMQP connection or multiple MQTT connections to IoT Hub, possibly over WebSockets. * **IoT Edge service** makes intermittent HTTPS calls to IoT Hub.
-In all three cases, the fully-qualified domain name (FQDN) would match the pattern `\*.azure-devices.net`.
+In all three cases, the fully qualified domain name (FQDN) would match the pattern `\*.azure-devices.net`.
Additionally, the **Container engine** makes calls to container registries over HTTPS. To retrieve the IoT Edge runtime container images, the FQDN is `mcr.microsoft.com`. The container engine connects to other registries as configured in the deployment.
This checklist is a starting point for firewall rules:
<sup>1</sup>Open port 8883 for secure MQTT or port 5671 for secure AMQP. If you can only make connections via port 443 then either of these protocols can be run through a WebSocket tunnel.
-Since the IP address of an IoT hub can change without notice, always use the FQDN to allow-list configuration. To learn more, see [Understanding the IP address of your IoT hub](../iot-hub/iot-hub-understand-ip-address.md).
+Since the IP address of an IoT hub can change without notice, always use the FQDN in your allowlist configuration. To learn more, see [Understanding the IP address of your IoT Hub](../iot-hub/iot-hub-understand-ip-address.md).
Some of these firewall rules are inherited from Azure Container Registry. For more information, see [Configure rules to access an Azure container registry behind a firewall](../container-registry/container-registry-firewall-access-rules.md).
+You can enable dedicated data endpoints in your Azure Container registry to avoid wildcard allowlisting of the *\*.blob.core.windows.net* FQDN. For more information, see [Enable dedicated data endpoints](/azure/container-registry/container-registry-firewall-access-rules#enable-dedicated-data-endpoints).
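If you go that route, enabling dedicated data endpoints is a one-line operation in the Azure CLI. The registry name below is a placeholder, and `az acr show-endpoints` (if available in your CLI version) lists the resulting endpoints you can then allowlist individually.

```azurecli-interactive
# Placeholder registry name: enable dedicated data endpoints so the registry's data
# endpoints get registry-specific FQDNs instead of *.blob.core.windows.net.
az acr update --name myregistry --data-endpoint-enabled true

# List the registry's endpoints so you can add them to your firewall allowlist.
az acr show-endpoints --name myregistry
```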
+ > [!NOTE] > To provide a consistent FQDN between the REST and data endpoints, beginning **June 15, 2020** the Microsoft Container Registry data endpoint will change from `*.cdn.mscr.io` to `*.data.mcr.microsoft.com` > For more information, see [Microsoft Container Registry client firewall rules configuration](https://github.com/microsoft/containerregistry/blob/master/client-firewall-rules.md)
iot-hub Tutorial Use Metrics And Diags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-use-metrics-and-diags.md
Title: Tutorial - Set up and use metrics and logs with an Azure IoT hub
-description: Tutorial - Learn how to set up and use metrics and logs with an Azure IoT hub. This will provide data to analyze to help diagnose problems your hub may be having.
+description: Tutorial - Learn how to set up and use metrics and logs with an Azure IoT hub to provide data to analyze and diagnose problems your hub may be having.
Previously updated : 10/19/2021 Last updated : 07/21/2022 #Customer intent: As a developer, I want to know how to set up and check metrics and logs, to help me troubleshoot when there is a problem with an Azure IoT hub.
# Tutorial: Set up and use metrics and logs with an IoT hub
-You can use Azure Monitor to collect metrics and logs for your IoT hub that can help you monitor the operation of your solution and troubleshoot problems when they occur. In this article, you'll see how to create charts based on metrics, how to create alerts that trigger on metrics, how to send IoT Hub operations and errors to Azure Monitor Logs, and how to check the logs for errors.
+Use Azure Monitor to collect metrics and logs from your IoT hub to monitor the operation of your solution and troubleshoot problems when they occur. In this tutorial, you'll learn how to create charts based on metrics, how to create alerts that trigger on metrics, how to send IoT Hub operations and errors to Azure Monitor Logs, and how to check the logs for errors.
-This tutorial uses the Azure sample from the [.NET Send telemetry quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) to send messages to the IoT hub. You can always use a device or another sample to send messages, but you may have to modify a few steps accordingly.
+This tutorial uses the Azure sample from the [.NET send telemetry quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) to send messages to the IoT hub. You can always use a device or another sample to send messages, but you may have to modify a few steps accordingly.
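If you don't want to build the .NET sample, one lighter-weight option is to simulate the device from the Azure CLI. The following sketch assumes the azure-iot extension is installed and that the hub and device identity from the setup section below already exist; the names are placeholders.

```azurecli-interactive
# Assumes the azure-iot extension is installed (az extension add --name azure-iot)
# and that the IoT hub and device identity created later in this tutorial exist.
az iot device simulate \
    --hub-name <your-iot-hub-name> \
    --device-id <your-device-id> \
    --msg-count 200 \
    --msg-interval 1
```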
Some familiarity with Azure Monitor concepts might be helpful before you begin this tutorial. To learn more, see [Monitor IoT Hub](monitor-iot-hub.md). To learn more about the metrics and resource logs emitted by IoT Hub, see [Monitoring data reference](monitor-iot-hub-reference.md).
In this tutorial, you perform the following tasks:
## Prerequisites -- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- You need the .NET Core SDK 2.1 or greater on your development machine. You can download the .NET Core SDK for multiple platforms from [.NET](https://dotnet.microsoft.com/download).
+* .NET Core SDK 2.1 or greater on your development machine. You can download the .NET Core SDK for multiple platforms from [.NET](https://dotnet.microsoft.com/download).
You can verify the current version of C# on your development machine using the following command:
In this tutorial, you perform the following tasks:
dotnet --version ``` -- An email account capable of receiving mail.
+* An email account capable of receiving mail.
-- Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
+* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub).
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)] ## Set up resources
-For this tutorial, you need an IoT hub, a Log Analytics workspace, and a simulated IoT device. These resources can be created using Azure CLI or Azure PowerShell. Use the same resource group and location for all of the resources. Then, when you've finished the tutorial, you can remove everything in one step by deleting the resource group.
+For this tutorial, you need an IoT hub, a Log Analytics workspace, and a simulated IoT device. These resources can be created using the Azure portal, Azure CLI, or PowerShell. Use the same resource group and location for all of the resources. Then, when you've finished the tutorial, you can remove everything in one step by deleting the resource group.
-Here are the required steps.
+For this tutorial, we've provided a CLI script that performs the following steps:
1. Create a [resource group](../azure-resource-manager/management/overview.md).
Here are the required steps.
### Set up resources using Azure CLI
-Copy and paste this script into Cloud Shell. Assuming you are already logged in, it runs the script one line at a time. Some of the commands may take some time to execute. The new resources are created in the resource group *ContosoResources*.
+Copy and paste the following commands into Cloud Shell or a local command line instance that has the Azure CLI installed. Some of the commands may take some time to execute. The new resources are created in the resource group *ContosoResources*.
The name for some resources must be unique across Azure. The script generates a random value with the `$RANDOM` function and stores it in a variable. For these resources, the script appends this random value to a base name for the resource, making the resource name unique.
-Only one free IoT hub is permitted per subscription. If you already have a free IoT hub in your subscription, either delete it before running the script or modify the script to use your free IoT hub or an IoT Hub that uses the standard or basic tier.
-
-The script prints the name of the IoT hub, the name of the Log Analytics workspace, and the connection string for the device it registers. Be sure to note these down as you'll need them later in this article.
+Set the values for the resource names that don't have to be globally unique.
```azurecli-interactive
-# This is the IOT Extension for Azure CLI.
-# You only need to install this the first time.
-# You need it to create the device identity.
-az extension add --name azure-iot
-
-# Set the values for the resource names that don't have to be globally unique.
-# The resources that have to have unique names are named in the script below
-# with a random number concatenated to the name so you can probably just
-# run this script, and it will work with no conflicts.
location=westus
resourceGroup=ContosoResources
iotDeviceName=Contoso-Test-Device
-randomValue=$RANDOM
+```
-# Create the resource group to be used
-# for all the resources for this tutorial.
-az group create --name $resourceGroup \
- --location $location
+Set the values for the resource names that have to be unique. These names have a random number concatenated to the end.
-# The IoT hub name must be globally unique, so add a random number to the end.
+```azurecli-interactive
+randomValue=$RANDOM
iotHubName=ContosoTestHub$randomValue
echo "IoT hub name = " $iotHubName
-# Create the IoT hub in the Free tier. Partition count must be 2.
-az iot hub create --name $iotHubName \
- --resource-group $resourceGroup \
- --partition-count 2 \
- --sku F1 --location $location
-
-# The Log Analytics workspace name must be globally unique, so add a random number to the end.
workspaceName=contoso-la-workspace$randomValue
echo "Log Analytics workspace name = " $workspaceName
+```
+Create the resource group to be used for all the resources for this tutorial.
-# Create the Log Analytics workspace
-az monitor log-analytics workspace create --resource-group $resourceGroup \
- --workspace-name $workspaceName --location $location
+```azurecli-interactive
+az group create --name $resourceGroup --location $location
+```
+
+Create the IoT hub in the free tier. Each subscription can only have one free IoT hub. If you already have a free hub, change the `--sku` value to `B1` (basic) or `S1` (standard).
-# Create the IoT device identity to be used for testing.
-az iot hub device-identity create --device-id $iotDeviceName \
- --hub-name $iotHubName
+```azurecli-interactive
+az iot hub create --name $iotHubName --resource-group $resourceGroup --partition-count 2 --sku F1 --location $location
+```
-# Retrieve the primary connection string for the device identity, then copy it to
-# Notepad. You need this to run the device simulation during the testing phase.
-az iot hub device-identity show-connection-string --device-id $iotDeviceName \
- --hub-name $iotHubName
+Create the Log Analytics workspace.
+```azurecli-interactive
+az monitor log-analytics workspace create --resource-group $resourceGroup --workspace-name $workspaceName --location $location
```
->[!NOTE]
->When creating the device identity, you may get the following error: *No keys found for policy iothubowner of IoT Hub ContosoTestHub*. To fix this error, update the Azure CLI IoT Extension and then run the last two commands in the script again.
->
->Here is the command to update the extension. Run this command in your Cloud Shell instance.
->
->```cli
->az extension update --name azure-iot
->```
+Create the IoT device identity to be used for testing.
+
+```azurecli-interactive
+az iot hub device-identity create --device-id $iotDeviceName --hub-name $iotHubName
+```
+
+Retrieve the primary connection string for the device identity, then copy it locally. You need this connection string to run the device simulation during the testing phase.
+
+```azurecli-interactive
+az iot hub device-identity show-connection-string --device-id $iotDeviceName --hub-name $iotHubName
+```
## Collect logs for connections and device telemetry
-IoT Hub emits resource logs for several categories of operation; however, for you to view these logs you must create a diagnostic setting to send them to a destination. One such destination is Azure Monitor Logs, which are collected in a Log Analytics workspace. IoT Hub resource logs are grouped into different categories. You can select which categories you want sent to Azure Monitor Logs in the diagnostic setting. In this article, we'll collect logs for operations and errors that occur having to do with connections and device telemetry. For a full list of the categories supported for IoT Hub, see [IoT Hub resource logs](monitor-iot-hub-reference.md#resource-logs).
+IoT Hub emits resource logs for several categories of operation. To view these logs, you must create a diagnostic setting to send them to a destination. One such destination is Azure Monitor Logs, which are collected in a Log Analytics workspace. IoT Hub resource logs are grouped into different categories. You can select which categories you want sent to Azure Monitor Logs in the diagnostic setting. In this article, we'll collect logs for operations and errors having to do with connections and device telemetry. For a full list of the categories supported for IoT Hub, see [IoT Hub resource logs](monitor-iot-hub-reference.md#resource-logs).
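If you prefer the CLI to the portal steps that follow, a sketch like this should produce an equivalent diagnostic setting. The category names `Connections` and `DeviceTelemetry` match the categories used in this tutorial, but treat the exact argument values as assumptions to verify.

```azurecli-interactive
# Send the Connections and DeviceTelemetry resource logs to the Log Analytics workspace.
# Uses the hub and workspace variables from the setup script; adjust if yours differ.
hubId=$(az iot hub show --name $iotHubName --query id -o tsv)
workspaceId=$(az monitor log-analytics workspace show --resource-group $resourceGroup --workspace-name $workspaceName --query id -o tsv)

az monitor diagnostic-settings create \
    --name "Send connections and telemetry to logs" \
    --resource $hubId \
    --workspace $workspaceId \
    --logs '[{"category": "Connections", "enabled": true}, {"category": "DeviceTelemetry", "enabled": true}]'
```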
To create a diagnostic setting to send IoT Hub resource logs to Azure Monitor Logs, follow these steps:
-1. First, if you're not already on your hub in the portal, select **Resource groups** and select the resource group ContosoResources. Select your IoT hub from the list of resources displayed.
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub. If you used the CLI commands to create your resources, then your IoT hub is in the resource group **ContosoResources**.
-1. Look for the **Monitoring** section in the IoT Hub blade. Select **Diagnostic settings**. Then select **Add diagnostic setting**.
+1. Select **Diagnostic settings** from the **Monitoring** section of the navigation menu. Then select **Add diagnostic setting**.
:::image type="content" source="media/tutorial-use-metrics-and-diags/open-diagnostic-settings.png" alt-text="Screenshot that highlights Diagnostic settings in the Monitoring section.":::
-1. On the **Diagnostics setting** pane, give your setting a descriptive name, such as "Send connections and telemetry to logs".
+1. On the **Diagnostics setting** page, provide the following details:
-1. Under **Category details**, select **Connections** and **Device Telemetry**.
-
-1. Under **Destination details**, select **Send to Log Analytics**, then use the Log Analytics workspace picker to select the workspace you noted previously. When you're finished, the diagnostic setting should look similar to the following screenshot:
+ | Parameter | Value |
+ | -- | -- |
+ | **Diagnostic setting name** | Give your setting a descriptive name, such as "Send connections and telemetry to logs". |
+ | **Logs** | Select **Connections** and **Device Telemetry** from the **Categories** list. |
+ | **Destination details** | Select **Send to Log Analytics workspace**, then use the Log Analytics workspace picker to select the workspace you noted previously. |
:::image type="content" source="media/tutorial-use-metrics-and-diags/add-diagnostic-setting.png" alt-text="Screenshot showing the final diagnostic log settings.":::
To create a diagnostic setting to send IoT Hub resource logs to Azure Monitor Lo
Now we'll use metrics explorer to create a chart that displays metrics you want to track. You'll pin this chart to your default dashboard in the Azure portal.
-1. On the left pane of your IoT hub, select **Metrics** in the **Monitoring** section.
+1. In your IoT hub menu, select **Metrics** from the **Monitoring** section.
1. At the top of the screen, select **Last 24 hours (Automatic)**. In the dropdown that appears, select **Last 4 hours** for **Time range**, set **Time granularity** to **1 minute**, and select **Local** for **Show time as**. Select **Apply** to save these settings. The setting should now say **Local Time: Last 4 hours (1 minute)**. :::image type="content" source="media/tutorial-use-metrics-and-diags/metrics-select-time-range.png" alt-text="Screenshot showing the metrics time settings.":::
-1. On the chart, there is a partial metric setting displayed scoped to your IoT hub. Leave the **Scope** and **Metric Namespace** values at their defaults. Select the **Metric** setting and type "Telemetry", then select **Telemetry messages sent** from the dropdown. **Aggregation** will be automatically set to **Sum**. Notice that the title of your chart also changes.
+1. On the chart, there's a partial metric setting displayed scoped to your IoT hub. Leave the **Scope** and **Metric Namespace** values at their defaults. Select the **Metric** setting and type "Telemetry", then select **Telemetry messages sent** from the dropdown. **Aggregation** will be automatically set to **Sum**. Notice that the title of your chart also changes.
:::image type="content" source="media/tutorial-use-metrics-and-diags/metrics-telemetry-messages-sent.png" alt-text="Screenshot that shows adding Telemetry messages sent metric to chart.":::
-1. Now select **Add metric** to add another metric to the chart. Under **Metric**, select **Total number of messages used**. **Aggregation** will be automatically set to **Avg**. Notice again that the title of the chart has changed to include this metric.
+1. Now select **Add metric** to add another metric to the chart. Under **Metric**, select **Total number of messages used**. For **Aggregation**, select **Avg**. Notice again that the title of the chart has changed to include this metric.
Now your screen shows the minimized metric for *Telemetry messages sent*, plus the new metric for *Total number of messages used*. :::image type="content" source="media/tutorial-use-metrics-and-diags/metrics-total-number-of-messages-used.png" alt-text="Screenshot that shows adding Total number of messages used metric to chart.":::
-1. In the upper right of the chart, select **Pin to dashboard**.
+1. In the upper right of the chart, select **Save to dashboard** and choose **Pin to dashboard** from the dropdown list.
- :::image type="content" source="media/tutorial-use-metrics-and-diags/metrics-total-number-of-messages-used-pin.png" alt-text="Screenshot that highlights the Pin to dashboard button.":::
+ :::image type="content" source="media/tutorial-use-metrics-and-diags/metrics-total-number-of-messages-used-pin.png" alt-text="Screenshot that highlights the Save to dashboard button.":::
-1. On the **Pin to dashboard** pane, select the **Existing** tab. Select **Private** and then select **Dashboard** from the Dashboard dropdown. Finally, select **Pin** to pin the chart to your default dashboard in Azure portal. If you don't pin your chart to a dashboard, your settings are not retained when you exit metric explorer.
+1. On the **Pin to dashboard** pane, select the **Existing** tab. Select **Private** and then select **Dashboard** from the Dashboard dropdown. Finally, select **Pin** to pin the chart to your default dashboard in Azure portal. If you don't pin your chart to a dashboard, your settings aren't retained when you exit metric explorer.
:::image type="content" source="media/tutorial-use-metrics-and-diags/pin-to-dashboard.png" alt-text="Screenshot that shows settings for Pin to dashboard."::: ## Set up metric alerts
-Now we'll set up alerts to trigger on two metrics *Telemetry messages sent* and *Total number of messages used*.
+Now we'll set up alerts to trigger on two metrics: *Telemetry messages sent* and *Total number of messages used*.
-*Telemetry messages sent* is a good metric to monitor to track message throughput and avoid being throttled. For an IoT Hub in the free tier, the throttling limit is 100 messages/sec. With a single device, we won't be able to achieve that kind of throughput, so instead, we'll set up the alert to trigger if the number of messages exceeds 1000 in a 5-minute period. In production, you can set the signal to a more significant value based on the tier, edition, and number of units of your IoT hub.
+**Telemetry messages sent** is a good metric to track message throughput and avoid being throttled. For an IoT Hub in the free tier, the throttling limit is 100 messages/sec. With a single device, we won't be able to achieve that kind of throughput, so instead, we'll set up the alert to trigger if the number of messages exceeds 1000 in a 5-minute period. In production, you can set the signal to a more significant value based on the tier, edition, and number of units of your IoT hub.
-*Total number of messages used* tracks the daily number of messages used. This metric resets every day at 00:00 UTC. If you exceed your daily quota past a certain threshold, your IoT Hub will no longer accept messages. For an IoT Hub in the free tier, the daily message quota is 8000. We'll set up the alert to trigger if the total number of messages exceeds 4000, 50% of the quota. In practice, you'd probably set this percentage to a higher value. The daily quota value is dependent on the tier, edition, and number of units of your IoT hub.
+**Total number of messages used** tracks the daily number of messages used. This metric resets every day at 00:00 UTC. If you exceed your daily quota past a certain threshold, your IoT Hub will no longer accept messages. For an IoT Hub in the free tier, the daily message quota is 8000. We'll set up the alert to trigger if the total number of messages exceeds 4000, 50% of the quota. In practice, you'd probably set this percentage to a higher value. The daily quota value is dependent on the tier, edition, and number of units of your IoT hub.
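As an aside, the same alert rules can also be created from the Azure CLI; the sketch below shows the first one. The metric ID `d2c.telemetry.ingress.success` is our assumption for the *Telemetry messages sent* metric, and the action group is assumed to exist already, so verify both before using this.

```azurecli-interactive
# Hypothetical CLI equivalent of the first alert rule (portal steps follow below).
# Assumes an action group named "ContosoAlertGroup" already exists in the resource group.
hubId=$(az iot hub show --name $iotHubName --query id -o tsv)

az monitor metrics alert create \
    --name "Alert if more than 1000 messages over 5 minutes" \
    --resource-group $resourceGroup \
    --scopes $hubId \
    --condition "total d2c.telemetry.ingress.success > 1000" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --action ContosoAlertGroup
```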
For more information about quota and throttling limits with IoT Hub, see [Quotas and throttling](iot-hub-devguide-quotas-throttling.md). To set up metric alerts:
-1. Go to your IoT hub in Azure portal.
-
-1. Under **Monitoring**, select **Alerts**. Then select **New alert rule**. The **Create alert rule** pane opens.
+1. In your IoT hub menu, select **Alerts** from the **Monitoring** section.
- :::image type="content" source="media/tutorial-use-metrics-and-diags/create-alert-rule-pane.png" alt-text="Screenshot showing the Create alert rule pane.":::
+1. Select **Create alert rule**.
- On the **Create alert rule** pane, there are four sections:
+ On the **Create alert rule** pane, there are four sections:
- * **Scope** is already set to your IoT hub, so we'll leave this section alone.
- * **Condition** sets the signal and conditions that will trigger the alert.
- * **Actions** configures what happens when the alert triggers.
- * **Alert rule details** lets you set a name and a description for the alert.
+ * **Scope** is already set to your IoT hub, so we'll leave this section alone.
+ * **Condition** sets the signal and conditions that will trigger the alert.
+ * **Actions** configures what happens when the alert triggers.
+ * **Details** lets you set a name and a description for the alert.
1. First configure the condition that the alert will trigger on.
- 1. Under **Condition**, select **Add condition**. On the **Configure signal logic** pane, type "telemetry" in the search box and select **Telemetry messages sent**.
-
- :::image type="content" source="media/tutorial-use-metrics-and-diags/configure-signal-logic-telemetry-messages-sent.png" alt-text="Screenshot showing selecting the metric.":::
-
- 1. On the **Configure signal logic** pane, set or confirm the following fields under **Alert logic** (you can ignore the chart):
-
- **Threshold**: *Static*.
+ 1. The **Condition** tab opens with the **Select a signal** pane open. Type "telemetry" in the signal name search box and select **Telemetry messages sent**.
- **Operator**: *Greater than*.
+ :::image type="content" source="media/tutorial-use-metrics-and-diags/configure-signal-logic-telemetry-messages-sent.png" alt-text="Screenshot showing selecting the metric.":::
- **Aggregation type**: *Total*.
+ 1. On the **Configure signal logic** pane, set or confirm the following fields under **Alert logic** (you can ignore the chart):
- **Threshold value**: 1000.
+ | Parameter | Value |
+ | -- | -- |
+ | **Threshold** | *Static* |
+ | **Operator** | *Greater than* |
+ | **Aggregation type** | *Total* |
+ | **Threshold value** | *1000* |
+ | **Unit** | *Count* |
+ | **Aggregation granularity (Period)** | *5 minutes* |
+ | **Frequency of evaluation** | *Every 1 Minute* |
- **Aggregation granularity (Period)**: *5 minutes*.
+ :::image type="content" source="media/tutorial-use-metrics-and-diags/configure-signal-logic-set-conditions.png" alt-text="Screenshot showing alert conditions settings.":::
- **Frequency of evaluation**: *Every 1 Minute*
+ These settings set the signal to total the number of messages over a period of 5 minutes. This total will be evaluated every minute, and, if the total for the preceding 5 minutes exceeds 1000 messages, the alert will trigger.
- :::image type="content" source="media/tutorial-use-metrics-and-diags/configure-signal-logic-set-conditions.png" alt-text="Screenshot showing alert conditions settings.":::
+ Select **Done** to save the signal logic.
- These settings set the signal to total the number of messages over a period of 5 minutes. This total will be evaluated every minute, and, if the total for the preceding 5 minutes exceeds 1000 messages, the alert will trigger.
+1. Select **Next: Actions** to configure the action for the alert.
- Select **Done** to save the signal logic.
+ 1. Select **Create action group**.
-1. Now configure the action for the alert.
+ 1. On the **Basics** tab on the **Create action group** pane, give your action group a name and a display name.
- 1. Back on the **Create alert rule** pane, under **Actions**, select **Add action groups**. On the **Select an action group to attach to this alert rule** pane, select **Create action group**.
-
- 1. Under the **Basics** tab on the **Create action group** pane, give your action group a name and a display name.
-
- :::image type="content" source="media/tutorial-use-metrics-and-diags/create-action-group-basics.png" alt-text="Screenshot showing Basics tab of Create action group pane.":::
+ :::image type="content" source="media/tutorial-use-metrics-and-diags/create-action-group-basics.png" alt-text="Screenshot showing Basics tab of Create action group pane.":::
1. Select the **Notifications** tab. For **Notification type**, select **Email/SMS message/Push/Voice** from the dropdown. The **Email/SMS message/Push/Voice** pane opens.
To set up metric alerts:
:::image type="content" source="media/tutorial-use-metrics-and-diags/create-action-group-notification-complete.png" alt-text="Screenshot showing completed notifications pane.":::
- 1. (Optional) If you select the **Actions** tab, and then select the **Action type** dropdown, you can see the kinds of actions that you can trigger with an alert. For this article, we'll only use notifications, so you can ignore the settings under this tab.
+ 1. (Optional) On the action group **Actions** tab, the **Action type** dropdown lists the kinds of actions that you can trigger with an alert. For this article, we'll only use notifications, so you can ignore the settings under this tab.
:::image type="content" source="media/tutorial-use-metrics-and-diags/action-types.png" alt-text="Screenshot showing action types available on the Actions pane.":::

1. Select the **Review and Create** tab, verify your settings, and select **Create**.
- :::image type="content" source="media/tutorial-use-metrics-and-diags/create-action-group-review-and-create.png" alt-text="Screenshot showing Review and Create pane.":::
-
- 1. Back on the **Create alert rule** pane, notice that your new action group has been added to the actions for the alert.
-
-1. Finally configure the alert rule details and save the alert rule.
-
- 1. On the **Create alert rule** pane, under Alert rule details, enter a name and a description for your alert; for example, "Alert if more than 1000 messages over 5 minutes". Make sure that **Enable alert rule upon creation** is checked. Your completed alert rule will look similar to this screenshot.
-
- :::image type="content" source="media/tutorial-use-metrics-and-diags/create-alert-rule-final.png" alt-text="Screenshot showing completed Create alert rule pane.":::
-
- 1. Select **Create alert rule** to save your new rule.
-
-1. Now set up another alert for the *Total number of messages used*. This metric is useful if you want to send an alert when the number of messages used is approaching the daily quota for the IoT hub, at which point, the IoT hub will start rejecting messages. Follow the steps you did before, with the following differences.
-
- * For the signal on the **Configure signal logic** pane, select **Total number of messages used**.
+ 1. Back on the alert rule **Actions** tab, notice that your new action group has been added to the actions for the alert.
- * On the **Configure signal logic** pane, set or confirm the following fields (you can ignore the chart):
+1. Select **Next: Details** to configure the alert rule details and save the alert rule.
- **Threshold**: *Static*.
+ 1. On the **Details** tab, provide a name and a description for your alert; for example, "Alert if more than 1000 messages over 5 minutes".
- **Operator**: *Greater than*.
+1. Select **Review + create** to review the details of your alert rule. If everything looks correct, select **Create** to save your new rule.
- **Aggregation type**: *Maximum*.
+1. Now set up another alert for the *Total number of messages used*. This metric is useful if you want to send an alert when the number of messages used is approaching the daily quota for the IoT hub, at which point the IoT hub will start rejecting messages. Follow the steps you did before, with the following differences.
- **Threshold value**: 4000.
+ * For the signal on the **Configure signal logic** pane, select **Total number of messages used**.
- **Aggregation granularity (Period)**: *1 minute*.
+ * On the **Configure signal logic** pane, set or confirm the following fields (you can ignore the chart):
- **Frequency of evaluation**: *Every 1 Minute*
+ | Parameter | Value |
+ | --- | --- |
+ | **Threshold** | *Static* |
+ | **Operator** | *Greater than* |
+ | **Aggregation type** | *Total* |
+ | **Threshold value** | *4000* |
+ | **Unit** | *Count* |
+ | **Aggregation granularity (Period)** | *1 minute* |
+ | **Frequency of evaluation** | *Every 1 Minute* |
- These settings set the signal to fire when the number of messages reaches 4000. The metric is evaluated every minute.
+ These settings set the signal to fire when the number of messages reaches 4000. The metric is evaluated every minute.
- * When you specify the action for your alert rule, just select the action group you created previously.
+ * When you specify the action for your alert rule, select the same action group that you created for the previous rule.
- * For the alert details, choose a different name and description than you did previously.
+ * For the alert details, choose a different name and description than you did previously.
-1. Select **Alerts**, under **Monitoring** on the left pane of your IoT hub. Now select **Manage alert rules** on the menu at the top of the **Alerts** pane. The **Rules** pane opens. You should now see your two alerts:
+1. Select **Alerts**, under **Monitoring** on the left pane of your IoT hub. Now select **Alert rules** on the menu at the top of the **Alerts** pane. The **Alert rules** pane opens. You should see your two alerts:
:::image type="content" source="media/tutorial-use-metrics-and-diags/rules-management.png" alt-text="Screenshot showing the Rules pane with the new alert rules.":::
-1. Close the **Rules** pane.
+1. Close the **Alert rules** pane.
With these settings, an alert will trigger and you'll get an email notification when more than 1000 messages are sent within a 5-minute time span and also when the total number of messages used exceeds 4000 (50% of the daily quota for an IoT hub in the free tier).
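If you prefer to script this setup, the portal steps above can be approximated with the Azure CLI. The following is a minimal sketch rather than part of the original tutorial: the resource group, hub name, and action group ID are placeholders, and `d2c.telemetry.ingress.success` is assumed to be the metric ID behind **Telemetry messages sent**.

```bash
# Sketch: create the "more than 1000 telemetry messages in 5 minutes" alert from the CLI.
# MyResourceGroup, MyIotHub, and the action group resource ID are placeholders.
az monitor metrics alert create \
  --name "Alert if more than 1000 messages over 5 minutes" \
  --resource-group MyResourceGroup \
  --scopes $(az iot hub show --name MyIotHub --query id --output tsv) \
  --condition "total d2c.telemetry.ingress.success > 1000" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --action "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/microsoft.insights/actionGroups/<action-group-name>"
```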
In the [Set up resources](#set-up-resources) section, you registered a device id
> > Alerts can take up to 10 minutes to be fully configured and enabled by IoT Hub. Wait at least 10 minutes between the time you configure your last alert and running the simulated device app.
-Download the solution for the [IoT Device Simulation](https://github.com/Azure-Samples/azure-iot-samples-csharp/archive/main.zip). This link downloads a repo with several applications in it; the one you are looking for is in iot-hub/Quickstarts/simulated-device/.
+Download or clone the solution for the [Azure IoT C# samples repo](https://github.com/Azure-Samples/azure-iot-samples-csharp) from GitHub. This repo contains several sample applications. For this tutorial, we'll use iot-hub/Quickstarts/simulated-device/.
1. In a local terminal window, navigate to the root folder of the solution. Then navigate to the **iot-hub\Quickstarts\simulated-device** folder.

1. Open the **SimulatedDevice.cs** file in a text editor of your choice.
- 1. Replace the value of the `s_connectionString` variable with the device connection string you noted when you ran the script to set up resources.
+ 1. Replace the value of the `s_connectionString` variable with the device connection string you noted when you ran the script to set up resources.
- 1. In the `SendDeviceToCloudMessagesAsync` method, change the `Task.Delay` from 1000 to 1, which reduces the amount of time between sending messages from 1 second to 0.001 seconds. Shortening this delay increases the number of messages sent. (You will likely not get a message rate of 100 messages per second.)
+ 1. In the `SendDeviceToCloudMessagesAsync` method, change the `Task.Delay` from 1000 to 1, which reduces the amount of time between sending messages from 1 second to 0.001 second. Shortening this delay increases the number of messages sent. (You'll likely not get a message rate of 100 messages per second.)
- ```csharp
- await Task.Delay(1);
- ```
+ ```csharp
+ await Task.Delay(1);
+ ```
- 1. Save your changes to **SimulatedDevice.cs**.
+ 1. Save your changes to **SimulatedDevice.cs**.
1. In the local terminal window, run the following command to install the required packages for the simulated device application:
In this tutorial, you learned how to use IoT Hub metrics and logs by performing
> * View the metrics chart on your dashboard.
> * View IoT Hub errors and operations in Azure Monitor Logs.
-Advance to the next tutorial to learn how to manage the state of an IoT device.
+Advance to the next tutorial to learn how to test disaster recovery capabilities for IoT Hub.
> [!div class="nextstepaction"]
-> [Configure your devices from a back-end service](tutorial-device-twins.md)
+> [Perform manual failover for an IoT hub](tutorial-manual-failover.md)
key-vault How To Configure Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/how-to-configure-key-rotation.md
# Configure cryptographic key auto-rotation in Azure Key Vault

## Overview
-Automated cryptographic key rotation in [Key Vault](../general/overview.md) allows users to configure Key Vault to automatically generate a new key version at a specified frequency. To configure roation you can use key rotation policy, which can be defined on each individual key.
+Automated cryptographic key rotation in [Key Vault](../general/overview.md) allows users to configure Key Vault to automatically generate a new key version at a specified frequency. To configure rotation, you can use a key rotation policy, which can be defined on each individual key.
Our recommendation is to rotate encryption keys at least every two years to meet cryptographic best practices.
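As a rough command-line sketch of the same configuration (not covered in this excerpt), the rotation policy can be applied with the Azure CLI. The vault name, key name, and policy file are placeholders.

```bash
# Sketch: apply a rotation policy defined in a local JSON file to an existing key.
# <vault-name> and <key-name> are placeholders; rotation-policy.json describes the
# rotation trigger (for example, rotate 18 months after creation) and the expiry time.
az keyvault key rotation-policy update \
  --vault-name <vault-name> \
  --name <key-name> \
  --value ./rotation-policy.json
```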
load-balancer Load Balancer Floating Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-floating-ip.md
sudo ufw allow 80/tcp
## Next steps

-- See [Create a public Standard Load Balancer](quickstart-load-balancer-standard-public-portal.md) to get started with using a Load Balancer.
+- Learn about [using multiple frontends](load-balancer-multivip-overview.md) with Azure Load Balancer.
- Learn about [Azure Load Balancer outbound connections](load-balancer-outbound-connections.md).
-- Learn more about [Azure Load Balancer](load-balancer-overview.md).
-- Learn about [Health Probes](load-balancer-custom-probe-overview.md).
-- Learn about [Standard Load Balancer Diagnostics](load-balancer-standard-diagnostics.md).
-- Learn more about [Network Security Groups](../virtual-network/network-security-groups-overview.md)
logic-apps Handle Long Running Stored Procedures Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/handle-long-running-stored-procedures-sql-connector.md
Here are the steps to add:
@step_timeout_seconds = 30, @command= N' IF NOT EXISTS(SELECT [jobid] FROM [dbo].[LongRunningState]
- WHERE jobid = $(job_execution_id))
+ WHERE jobid = $(job_execution_id)
THROW 50400, ''Failed to locate call parameters (Step1)'', 1', @credential_name='JobRun', @target_group_name='DatabaseGroupLongRunning'
logic-apps Logic Apps Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-gateway-install.md
This article shows how to download, install, and set up your on-premises data ga
**Minimum requirements**
- * .NET Framework 4.7.2
+ * .NET Framework 4.8
* 64-bit version of Windows 7 or Windows Server 2008 R2 (or later)

**Recommended requirements**
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2022-07-21
+
+### Announcing end of support for Python 3.6 in AzureML SDK v1 packages
+++ **Feature deprecation**
+ + **Deprecate Python 3.6 as a supported runtime for SDK v1 packages**
+ + On December 05, 2022, AzureML will deprecate Python 3.6 as a supported runtime, formally ending our Python 3.6 support for SDK v1 packages.
+ + From the deprecation date of December 05, 2022, AzureML will no longer apply security patches and other updates to the Python 3.6 runtime used by AzureML SDK v1 packages.
+ + The existing AzureML SDK v1 packages with Python 3.6 will continue to run. However, AzureML strongly recommends that you migrate your scripts and dependencies to a supported Python runtime version so that you continue to receive security patches and remain eligible for technical support.
+ + We recommend using Python 3.8 as the runtime for AzureML SDK v1 packages (see the example environment setup after this list).
+ + In addition, AzureML SDK v1 packages using Python 3.6 will no longer be eligible for technical support.
+ + If you have any questions, contact us through AML Support.
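A minimal sketch of moving to a supported runtime; the environment name and package choice are illustrative, not prescribed by the release notes:

```bash
# Create and activate a Python 3.8 environment, then reinstall the SDK v1 packages.
conda create --name azureml-py38 python=3.8 -y
conda activate azureml-py38
pip install azureml-sdk
```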
+
## 2022-06-27
+ **azureml-automl-dnn-nlp**
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
You can also use Azure Data Factory to create a data ingestion pipeline that pre
Learn more by reading and exploring the following resources:

++ [Learning path: End-to-end MLOps with Azure Machine Learning](/learn/paths/build-first-machine-operations-workflow/)
+ [How and where to deploy models](how-to-deploy-and-where.md) with Machine Learning
+ [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md)
+ [End-to-end MLOps examples repo](https://github.com/microsoft/MLOps)
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
When your resource group and repository are no longer needed, clean up the resou
## Next steps

> [!div class="nextstepaction"]
+> [Learning path: End-to-end MLOps with Azure Machine Learning](/learn/paths/build-first-machine-operations-workflow/)
> [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md)
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
tags = finished_mlflow_run.data.tags
You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
-Navigate to the **Jobs** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the Experiment filter in the top menu bar.
+Navigate to the **Jobs** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the Experiment filter in the top menu bar. Click on the job of interest to enter the details view, and then select the **Metrics** tab.
-For the individual Experiment view, select the **All experiments** tab. On the experiment run dashboard, you can see tracked metrics and logs for each run.
+Select the logged metrics to render charts on the right side.
-You can also edit the job list table to select multiple jobs and display either the last, minimum, or maximum logged value for your jobs. Customize your charts to compare the logged metrics values and aggregates across multiple jobs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
+
+For a customizable view of your job metrics (preview), use the preview panel to enable the feature. Once enabled, you can add/remove charts and customize them by applying smoothing, changing the color, or plotting multiple metrics on a single graph. You can also resize and rearrange the layout as you wish. Once you have created your desired view, you can save it for future use and share it with your teammates using a direct link.
+ ### View and download log files for a job
mysql Concepts Troubleshooting Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-troubleshooting-best-practices.md
+
+ Title: Troubleshooting best practices - Azure Database for MySQL
+description: This article describes some recommendations for troubleshooting your Azure Database for MySQL server.
+++++ Last updated : 07/22/2022++
+# Best practices for troubleshooting your Azure Database for MySQL server
++
+Use the sections below to keep your MySQL databases running smoothly. They provide guiding principles for ensuring that your schemas are designed optimally and deliver the best performance for your applications.
+
+## Check the number of indexes
+
+In a busy database environment, you may observe high I/O usage, which can be an indicator of poor data access patterns. Unused indexes can negatively affect performance: they consume disk space and cache, slow down write operations (INSERT/UPDATE/DELETE), and increase the backup size.
+
+Before you remove any index, be sure to gather enough information to verify that it's no longer in use. This can help you avoid inadvertently removing an index that is perhaps critical for a query that runs only quarterly or annually. Also, be sure to consider whether an index is used to enforce uniqueness or ordering.
+
+> [!NOTE]
+> Remember to review indexes periodically and perform any necessary updates based on any modifications to the table data.
+
+```sql
+SELECT object_schema,
+       object_name,
+       index_name
+FROM performance_schema.table_io_waits_summary_by_index_usage
+WHERE index_name IS NOT NULL
+AND count_star = 0
+ORDER BY object_schema, object_name;
+```
+
+(or)
+
+```sql
+use information_schema;
+select
+tables.table_name,
+statistics.index_name,
+statistics.cardinality,
+tables.table_rows
+from tables
+join statistics
+on (statistics.table_name = tables.table_name
+and statistics.table_schema = '<YOUR DATABASE NAME HERE>'
+and ((tables.table_rows / statistics.cardinality) > 1000));
+```
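Once you've confirmed that an index is unused and isn't needed for uniqueness or ordering, you can drop it. A hedged sketch follows; the server, admin user, database, table, and index names are placeholders:

```bash
# Placeholders throughout; run against your own server and schema only after verification.
mysql -h mysql-example.mysql.database.azure.com -u adminuser@mysql-example -p \
  -e "ALTER TABLE mydb.orders DROP INDEX idx_orders_status;"
```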
+
+## Review the primary key design
+
+Azure Database for MySQL uses the InnoDB storage engine for all non-temporary tables. With InnoDB, data is stored within a clustered index using a B-Tree structure. The table is physically organized based on primary key values, which means that rows are stored in the primary key order.
+Each secondary key entry in an InnoDB table contains a pointer to the primary key value in which the data is stored. In other words, a secondary index entry contains a copy of the primary key value to which the entry is pointing. Therefore, primary key choices have a direct effect on the amount of storage overhead in your tables.
+
+If a key is derived from actual data (e.g., username, email, SSN, etc.), it's called a *natural key*. If a key is artificial and not derived from data (e.g., an auto-incremented integer), it's referred to as a *synthetic key* or *surrogate key*.
+
+It's generally recommended to avoid using natural primary keys. These keys are often very wide and contain long values from one or multiple columns. This in turn can introduce severe storage overhead, with the primary key value being copied into each secondary key entry. Moreover, natural keys don't usually follow a pre-determined order, which dramatically reduces performance and provokes page fragmentation when rows are inserted or updated. To avoid these issues, use monotonically increasing surrogate keys instead of natural keys. An auto-increment (big)integer column is a good example of a monotonically increasing surrogate key. If you require a certain combination of columns to be unique, declare those columns as a unique secondary key.
+
+During the initial stages of building an application, you may not think ahead to a time when your table begins to approach two billion rows. As a result, you might opt to use a signed 4-byte integer for the data type of an ID (primary key) column. Be sure to check all table primary keys and switch to 8-byte integer (BIGINT) columns to accommodate the potential for high volume or growth.
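The following sketch illustrates the guidance above with a hypothetical table: a BIGINT auto-increment surrogate primary key plus a unique secondary key enforcing the natural constraint. All names and the server address are placeholders.

```bash
mysql -h mysql-example.mysql.database.azure.com -u adminuser@mysql-example -p <<'SQL'
CREATE TABLE mydb.customer (
  customer_id BIGINT NOT NULL AUTO_INCREMENT,   -- monotonically increasing surrogate key
  email       VARCHAR(255) NOT NULL,
  created_at  DATETIME NOT NULL,
  PRIMARY KEY (customer_id),
  UNIQUE KEY uq_customer_email (email)          -- enforce uniqueness of the natural column
) ENGINE=InnoDB;
SQL
```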
+
+> [!NOTE]
+> For more information about data types and their maximum values, in the MySQL Reference Manual, see [Data Types](https://dev.mysql.com/doc/refman/5.7/en/data-types.html).
+
+## Use covering indexes
+
+The previous section explains how indexes in MySQL are organized as B-Trees, and that in a clustered index the leaf nodes contain the data pages of the underlying table. Secondary indexes have the same B-tree structure as clustered indexes, and you can define them on a table or view with a clustered index or a heap. Each index row in the secondary index contains the non-clustered key value and a row locator. This locator points to the data row in the clustered index or heap having the key value. As a result, any lookup involving a secondary index must navigate starting from the root node through the branch nodes to the correct leaf node to retrieve the primary key value. The system then executes a random IO read on the primary key index (once again navigating from the root node through the branch nodes to the correct leaf node) to get the data row.
+
+To avoid this extra random IO read on the primary key index to get the data row, use a covering index, which includes all fields required by the query. Generally, using this approach is beneficial for I/O bound workloads and cached workloads. So as a best practice, use covering indexes because they fit in memory and are smaller and more efficient to read than scanning all the rows.
+
+Consider, for example, a table that you're using to try to find all employees who joined the company after January 1, 2000.
+
+```
+mysql> show create table employee\G
+*************************** 1. row ***************************
+ Table: employee
+Create Table: CREATE TABLE `employee` (
+ `empid` int(11) NOT NULL AUTO_INCREMENT,
+ `fname` varchar(10) DEFAULT NULL,
+ `lname` varchar(10) DEFAULT NULL,
+ `joindate` datetime DEFAULT NULL,
+ `department` varchar(10) DEFAULT NULL,
+ PRIMARY KEY (`empid`)
+ ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1
+1 row in set (0.00 sec)
+
+mysql> select empid, fname, lname from employee where joindate > '2000-01-01';
+```
+
+If you run an EXPLAIN plan on this query, you'd observe that currently no indexes are being used, and a where clause alone is being used to filter the employee records.
+
+```
+mysql> EXPLAIN select empid, fname, lname from employee where joindate > '2000-01-01'\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: employee
+ partitions: NULL
+ type: ALL
+possible_keys: NULL
+ key: NULL
+ key_len: NULL
+ ref: NULL
+ rows: 3
+ filtered: 33.33
+ Extra: Using where
+1 row in set, 1 warning (0.01 sec)
+```
+
+However, if you add an index that covers the column in the where clause, along with the projected columns, you'll see that the index is used to locate the rows much more quickly and efficiently.
+
+`mysql> CREATE INDEX cvg_idx_ex ON employee (joindate, empid, fname, lname);`
+
+Now, if you run the EXPLAIN plan on the same query, the "Using index" value appears in the "Extra" field, which means that InnoDB executes the query using the index created earlier, confirming that it functions as a covering index.
+
+```
+mysql> EXPLAIN select empid, fname, lname from employee where joindate > '2000-01-01'\G
+*************************** 1. row ***************************
+ id: 1
+ select_type: SIMPLE
+ table: employee
+ partitions: NULL
+ type: range
+possible_keys: cvg_idx_ex
+ key: cvg_idx_ex
+ key_len: 6
+ ref: NULL
+ rows: 1
+ filtered: 100.00
+ Extra: Using where; Using index
+1 row in set, 1 warning (0.01 sec)
+```
+
+> [!NOTE]
+> It's important to choose the correct order of the columns in the covering index to serve the query correctly. The general rule is to choose the columns used for filtering first (WHERE clause), then those used for sorting/grouping (ORDER BY and GROUP BY), and finally the data projection (SELECT).
+
+From the prior example, we've seen that having a covering index for a query provides more efficient record retrieval paths and optimizes performance in a highly concurrent database environment.
+
+## Next steps
+
+To find peer answers to your most important questions, or to post or answer questions, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
mysql How To Troubleshoot Connectivity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-connectivity-issues.md
+
+ Title: Troubleshoot connectivity issues in Azure Database for MySQL
+description: Learn how to troubleshoot connectivity issues in Azure Database for MySQL.
+++++ Last updated : 07/22/2022++
+# Troubleshoot connectivity issues in Azure Database for MySQL
++
+The MySQL Community Edition manages connections using one thread per connection. As a result, each user connection gets a dedicated operating system thread in the mysqld process.
+
+There are potential issues associated with this type of connection handling. For example, memory use is relatively high if there's a large number of user connections, even if they're idle connections. In addition, there's a higher level of internal server contention and context switching overhead when working with thousands of user connections.
+
+## Diagnosing common connectivity errors
+
+Whenever your instance of Azure Database for MySQL is experiencing connectivity issues, remember that problems can exist in any of the three layers involved: the client device, the network, or your Azure Database for MySQL server.
+
+As a result, whenever you're diagnosing connectivity errors, be sure to consider full details of the:
+
+* Client, including the:
+ * Configuration (on-premises, Azure VM, etc. or a DBA machine).
+ * Operating system.
+ * Software and versions.
+* Connection string and any included parameters.
+* Network topology (Is the client in the same region and availability zone? Are there firewall rules or routing hops in between?).
+* Connection pool (parameters and configuration), if one is in use.
+
+It's also important to determine whether the database connectivity issue is affecting a single client device or several client devices. If the errors are affecting only one of several clients, then it's likely that the problem is with that client. However, if all clients are experiencing the same error, it's more likely that the problem is on the database server side or with the networking in between.
+
+Be sure to consider the potential for workload overload as well, especially if an application opens a surge of connections in a very short amount of time. You can use metrics such as "Total Connections", "Active Connections", and "Aborted Connections" to investigate this.
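On the server side, you can complement those metrics with a quick check of current and configured connection limits. A sketch with placeholder server and admin user names:

```bash
# Show how many client threads are connected and the configured connection ceiling.
mysql -h mysql-example.mysql.database.azure.com -u adminuser@mysql-example -p \
  -e "SHOW STATUS LIKE 'Threads_connected'; SHOW VARIABLES LIKE 'max_connections';"
```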
+
+When you establish connectivity from a client device or application, the first important call in mysql is to `getaddrinfo`, which performs the DNS translation from the endpoint provided to an IP address. If getting the address fails, MySQL shows an error message such as "ERROR 2005 (HY000): Unknown MySQL server host 'mysql-example.mysql.database.azure.com' (11)", where the number at the end (11, 110, and so on) indicates the underlying cause.
+
+### Client-side error 2005 codes
+
+Quick reference notes for some client-side error 2005 codes appear in the following table.
+
+| **ERROR 2005 code** | **Notes** |
+|-|-|
+| **(11) "EAI_SYSTEM - system error"** | There's an error on the DNS resolution on the client side. Not an Azure MySQL issue. Use dig/nslookup on the client to troubleshoot. |
+| **(110) "ETIMEDOUT - Connection timed out"** | There was a timeout connecting to the client's DNS server. Not an Azure MySQL issue. Use dig/nslookup on the client to troubleshoot. |
+| **(0) "name unknown"** | The name specified wasn't resolvable by DNS. Check the input on the client. This is very likely not an issue with Azure Database for MySQL. |
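As the table suggests, DNS resolution is easy to verify from the client itself. A sketch using the placeholder server name from the error message above:

```bash
# Either tool shows whether the client can resolve the server name to an IP address.
nslookup mysql-example.mysql.database.azure.com
dig +short mysql-example.mysql.database.azure.com
```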
+
+The second call in mysql establishes the socket connection. If it fails, you'll see an error message like "ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql-example.mysql.database.azure.com' (111)", where the number at the end (99, 110, 111, 113, and so on) indicates the underlying cause.
+
+### Client-side error 2003 codes
+
+Quick reference notes for some client-side error 2003 codes appear in the following table.
+
+| **ERROR 2003 code** | **Notes** |
+|-|-|
+| **(99) "EADDRNOTAVAIL - Cannot assign requested address"** | This error isn't caused by Azure Database for MySQL; rather, it's on the client side. |
+| **(110) "ETIMEDOUT - Connection timed out"** | There was a timeout connecting to the IP address provided. Likely a security (firewall rules) or networking (routing) issue. Usually, this isn't an issue with Azure Database for MySQL. Use `nc/telnet/TCPtraceroute` on the client device to troubleshoot. |
+| **(111) "ECONNREFUSED - Connection refused"** | While the packets reached the target server, the server rejected the connection. This might be an attempt to connect to the wrong server or the wrong port. This also might relate to the target service (Azure Database for MySQL) being down, recovering from failover, or going through crash recovery, and not yet accepting connections. This issue could be on either the client side or the server side. Use `nc/telnet/TCPtraceroute` on the client device to troubleshoot. |
+| **(113) "EHOSTUNREACH - Host unreachable"** | The client device's routing table doesn't include a path to the network on which the database server is located. Check the client device's networking configuration. |
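For the timeout, refused, and unreachable cases, a quick TCP-level test from the client helps isolate whether the network path to port 3306 is open. A sketch with the placeholder server name:

```bash
# -v: verbose, -z: scan without sending data; reports success or failure reaching port 3306.
nc -vz mysql-example.mysql.database.azure.com 3306
```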
+
+### Other error codes
+
+Quick reference notes for some other error codes related to issues that occur after the network connection with the database server is successfully established appear in the following table.
+
+| **ERROR code** | **Notes** |
+|-|-|
+| **ERROR 2013 "Lost connection to MySQL server"** | The connection was established but was lost afterwards. This can happen if a connection is attempted against something that isn't MySQL (for example, using a MySQL client to connect to SSH on port 22), if the super user kills the session, or if the database times out the session. It can also point to an issue in the database server after the connection is established, at any time during the lifetime of the client connection, and can indicate that the database had a serious issue. |
+| **ERROR 1040 "Too many connections"** | The number of connected database clients is already at the configured maximum number. You need to evaluate why so many connections are established against the database. |
+| **ERROR 1045 "Access denied for user"** | The client provided an incorrect username or password, so the database has denied access. |
+| **ERROR 2006 "MySQL server has gone away"** | Similar to the **ERROR 2013 "Lost connection to MySQL server"** entry in the previous table. |
+| **ERROR 1317 "Query execution was interrupted"** | Error that the client receives when the primary user stops the query, not the connection. |
+| **ERROR 1129 "Host '1.2.3.4' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'"** | All clients in a single machine will be blocked if one client of that machine attempts several times to use the wrong protocol to connect with MySQL (telnetting to the MySQL port is one example). As the error message says, the database's admin user has to run `FLUSH HOSTS;` (or `mysqladmin flush-hosts`) to clear the issue. |
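For ERROR 1129, the blocked-host cache can be cleared from any client that can reach the server. A sketch; the admin user and server name are placeholders:

```bash
# Equivalent to running FLUSH HOSTS; as the admin user.
mysqladmin -h mysql-example.mysql.database.azure.com -u adminuser@mysql-example -p flush-hosts
```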
+
+> [!NOTE]
+> For more information about connectivity errors, see the blog post [Investigating connection issues with Azure Database for MySQL](https://techcommunity.microsoft.com/t5/azure-database-for-mysql-blog/investigating-connection-issues-with-azure-database-for-mysql/ba-p/2121204).
+
+## Next steps
+
+To find peer answers to your most important questions or to post or answer a question, visit [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-database-mysql).
postgresql Concepts Colocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-colocation.md
Colocation means storing related information together on the same nodes. Queries
In Azure Database for PostgreSQL ΓÇô Hyperscale (Citus), a row is stored in a shard if the hash of the value in the distribution column falls within the shard's hash range. Shards with the same hash range are always placed on the same node. Rows with equal distribution column values are always on the same node across tables. ## A practical example of colocation
application.
Running the queries must consult data in shards scattered across nodes. In this case, the data distribution creates substantial drawbacks:
query can be answered by using the set of colocated shards that contain the data
for that particular tenant. A single PostgreSQL node can answer the query in a single step. In some cases, queries and table schemas must be changed to include the tenant ID in unique constraints and join conditions. This change is usually straightforward.
purview Catalog Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-asset-details.md
Title: How to view, edit, and delete assets
-description: This how to guide describes how you can view and edit asset details.
+ Title: Asset details page in the Microsoft Purview Data Catalog
+description: View relevant information and take action on assets in the data catalog
Previously updated : 02/24/2022 Last updated : 07/25/2022
-# View, edit and delete assets in Microsoft Purview catalog
-
-This article discusses how you can view your assets and their relevant details. It also describes how you can edit and delete assets from your catalog.
+# Asset details page in the Microsoft Purview Data Catalog
+This article discusses how assets are displayed in the Microsoft Purview Data Catalog. It describes how you can view relevant information or take action on assets in your catalog.
## Prerequisites

- Set up your data sources and scan the assets into your catalog.
- *Or* Use the Microsoft Purview Atlas APIs to ingest assets into the catalog.
-## Viewing asset details
+## Open an asset details page
-You can discover your assets in the Microsoft Purview data catalog by either:
+You can discover your assets in the Microsoft Purview Data Catalog by either:
- [Browsing the data catalog](how-to-browse-catalog.md)
- [Searching the data catalog](how-to-search-catalog.md)
-Once you find the asset you are looking for, you can view all of its details, edit, or delete them as described in following sections.
+Once you find the asset you're looking for, you can view all of the asset information or take action on it as described in the following sections.
## Asset details tabs explained
Once you find the asset you are looking for, you can view all of its details, ed
- **Overview** - An asset's basic details like description, classification, hierarchy, and glossary terms.
- **Properties** - The technical metadata and relationships discovered in the data source.
- **Schema** - The schema of the asset including column names, data types, column level classifications, terms, and descriptions are represented in the schema tab.
-- **Lineage** - This tab contains lineage graph details for assets where it is available.
+- **Lineage** - This tab contains lineage graph details for assets where it's available.
- **Contacts** - Every asset can have an assigned owner and expert that can be viewed and managed from the contacts tab.
-- **Related** - This tab lets you navigate through the technical hierarchy of assets that are related to the current asset you are viewing.
+- **Related** - This tab lets you navigate through the technical hierarchy of assets that are related to the current asset you're viewing.
## Asset overview
-The overview section of the asset details gives you a summarized view of an asset. The sections that follow explains the different parts of the overview page.
+The overview section of the asset details gives a summarized view of an asset. The sections that follow explain the different parts of the overview page.
-### Asset hierarchy
-You can view the full asset hierarchy within the overview tab. As an example: if you navigate to a SQL table, then you can see the schema, database, and the server the table belongs to.
+### Asset description
+An asset description gives a synopsis of what the asset represents. You can add or update an asset description by [editing the asset](#editing-assets).
-### Asset classifications
+#### Adding rich text to a description
-Asset classifications identify the kind of data being represented, and are applied manually or during a scan. For example: a National ID or passport number are supported classifications. (For a full list of classifications, see the [supported classifications page](supported-classifications.md).) The overview tab reflects both asset level classifications and column level classifications that have been applied, which you can also view as part of the schema.
+Microsoft Purview enables users to add rich formatting to asset descriptions such as adding bolding, underlining, or italicizing text. Users can also create tables, bulleted lists, or hyperlinks to external resources.
-### Asset description
+Below are the rich text formatting options:
-You can view the description on an asset in the overview section. You can add an asset description by [editing the asset](#editing-assets)
+| Name | Description | Shortcut key |
+| - | -- | |
+| Bold | Make your text bold. Adding the '*' character around text will also bold it. | Ctrl+B |
+| Italic | Italicize your text. Adding the '_' character around text will also italicize it. | Ctrl+I |
+| Underline | Underline your text. | Ctrl+U |
+| Bullets | Create a bulleted list. Adding the '-' character before text will also create a bulleted list. | |
| Numbering | Create a numbered list. Adding the '1' character before text will also create a numbered list. | |
+| Heading | Add a formatted heading | |
+| Font size | Change the size of your text. The default size is 12. | |
+| Decrease indent | Move your paragraph closer to the margin. | |
+| Increase indent | Move your paragraph farther away from the margin. | |
+| Add hyperlink | Create a link in your document for quick access to web pages and files. | |
+| Remove hyperlink | Change a link to plain text. | |
+| Quote | Add quote text | |
+| Add table | Add a table to your content. | |
+| Edit table | Insert or delete a column or row from a table | |
+| Clear formatting | Remove all formatting from a selection of text, leaving only the normal, unformatted text. | |
+| Undo | Undo changes you made to the content. | Ctrl+Z |
+| Redo | Redo changes you made to the content. | Ctrl+Y |
+> [!NOTE]
+> Updating a description with the rich text editor updates the `userDescription` field of an entity. If you have already added an asset description before the release of this feature, that description is stored in the `description` field. When overwriting a plain text description with rich text, the entity model will persist both `userDescription` and `description`. The asset details overview page will only show `userDescription`. The `description` field can't be edited in the Microsoft Purview studio user experience.
-### Asset glossary terms
+### Classifications
-Asset glossary terms are a managed vocabulary for business terms that can be used to categorize and relate assets across your environment. For example, terms like 'customer', 'buyer', 'cost center', or any terms that give your data context for your users. For more information, see the [business glossary page](concept-business-glossary.md). You can view the glossary terms for an asset in the overview section, and you can add a glossary term on an asset by [editing the asset](#editing-assets).
+Classifications identify the kind of data being represented by an asset or column, such as "ABA routing number", "Email Address", or "U.S. Passport number". These attributes can be assigned during scans or added manually. For a full list of classifications, see the [supported classifications in Microsoft Purview](supported-classifications.md). You can see classifications assigned both to the asset and to columns in the schema from the overview page.
+### Glossary terms
-## Editing assets
+Glossary terms are a managed vocabulary for business terms that can be used to categorize and relate assets across your organization. For more information, see the [business glossary page](concept-business-glossary.md). You can view the assigned glossary terms for an asset in the overview section. If you're a data curator on the asset, you can add or remove a glossary term on an asset by [editing the asset](#editing-assets).
-You can edit an asset by selecting the edit icon on the top-left corner of the asset.
+### Collection hierarchy
+In Microsoft Purview, collections organize assets and data sources. They also manage access across the Microsoft Purview governance portal. You can view an asset's containing collection under the **Collection path** section.
+
+### Asset hierarchy
+
+You can view the full asset hierarchy within the overview tab. As an example: if you navigate to a SQL table, then you can see the schema, database, and the server the table belongs to.
+
+## Asset actions
+
+Below is a list of actions you can take from an asset details page. The actions available to you vary depending on your permissions and the type of asset you're looking at. Available actions generally appear on the global actions bar.
++
+### Editing assets
+
+If you're a data curator on the collection containing an asset, you can edit an asset by selecting the edit icon on the top-left corner of the asset.
At the asset level you can edit or add a description, classification, or glossary term by staying on the overview tab of the edit screen.
You can navigate to the schema tab on the edit screen to update column name, dat
You can navigate to the contact tab of the edit screen to update owners and experts on the asset. You can search by full name, email, or alias of the person within your Azure Active Directory.
-### Scans on edited assets
+#### Scan behavior after editing assets
-If you edit an asset by adding a description, asset level classification, glossary term, or a contact, later scans will still update the asset schema (new columns and classifications detected by the scanner in subsequent scan runs).
+ Microsoft Purview works to reflect the truth of the source system whenever possible. For example, if you edit a column and it's later deleted from the source table, a scan will remove the column metadata from the asset in Microsoft Purview.
-If you make some column level updates, like adding a description, column level classification, or glossary term, then subsequent scans will also update the asset schema (new columns and classifications will be detected by the scanner in subsequent scan runs).
+Both column-level and asset-level updates, such as adding a description, glossary term, or classification, don't affect scan updates. Scans will pick up new columns and classifications regardless of whether these changes are made.
-Even on edited assets, after a scan Microsoft Purview will reflect the truth of the source system. For example: if you edit a column and it's deleted from the source, it will be deleted from your asset in Microsoft Purview.
+If you update the **name** or **data type** of a column, subsequent scans **won't** update the asset schema. New columns and classifications **won't** be detected.
+### Request access to data
->[!NOTE]
-> If you update the **name or data type of a column** in a Microsoft Purview asset, later scans **will not** update the asset schema. New columns and classifications **will not** be detected.
+If a [self-service data access workflow](how-to-workflow-self-service-data-access-hybrid.md) has been created, you can request access to a desired asset directly from the asset details page! To learn more about Microsoft Purview's data policy applications, see [how to enable data use management](how-to-enable-data-use-management.md).
-## Deleting assets
+### Open in Power BI
-You can delete an asset by selecting the delete icon under the name of the asset.
+Microsoft Purview makes it easy to work with useful data you find in the data catalog. You can open certain assets in Power BI Desktop from the asset details page. Power BI Desktop integration is supported for the following sources.
-### Delete behavior explained
+- Azure Blob Storage
+- Azure Cosmos DB
+- Azure Data Lake Storage Gen2
+- Azure Dedicated SQL pool (formerly SQL DW)
+- Azure SQL Database
+- Azure SQL Managed Instance
+- Azure Synapse Analytics
+- Azure Database for MySQL
+- Azure Database for PostgreSQL
+- Oracle DB
+- SQL Server
+- Teradata
-Any asset you delete using the delete button is permanently deleted in Microsoft Purview. However, if you run a **full scan** on the source from which the asset was ingested into the catalog, then the asset is reingested and you can discover it using the Microsoft Purview catalog.
+### Deleting assets
-If you have a scheduled scan (weekly or monthly) on the source, the **deleted asset will not get re-ingested** into the catalog unless the asset is modified by an end user since the previous run of the scan. For example, if a SQL table was deleted from Microsoft Purview, but after the table was deleted a user added a new column to the table in SQL, at the next scan the asset will be rescanned and ingested into the catalog.
+If you're a data curator on the collection containing an asset, you can delete an asset by selecting the delete icon under the name of the asset.
+
+Any asset you delete using the delete button is permanently deleted in Microsoft Purview. However, if you run a **full scan** on the source from which the asset was ingested into the catalog, then the asset is reingested and you can discover it using the Microsoft Purview catalog.
-If you delete an asset, only that asset is deleted. Microsoft Purview does not currently support cascaded deletes. For example, if you delete a storage account asset in your catalog - the containers, folders and files within them are not deleted.
+If you have a scheduled scan (weekly or monthly) on the source, the **deleted asset won't get re-ingested** into the catalog unless the asset has been modified by an end user since the previous run of the scan. For example, say you manually delete a SQL table from the Microsoft Purview Data Map. Later, a data engineer adds a new column to the source table. When Microsoft Purview scans the database, the table will be reingested into the data map and be discoverable in the data catalog.
+If you delete an asset, only that asset is deleted. Microsoft Purview doesn't currently support cascaded deletes. For example, if you delete a storage account asset in your catalog - the containers, folders and files within them will still exist in the data map and be discoverable in the data catalog.
## Next steps

-- [Browse the Microsoft Purview Data catalog](how-to-browse-catalog.md)
+- [Browse the Microsoft Purview Data Catalog](how-to-browse-catalog.md)
- [Search the Microsoft Purview Data Catalog](how-to-search-catalog.md)
purview Concept Best Practices Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-scanning.md
After you register your source in the relevant [collection](./how-to-create-and-
- If a field or column, table, or a file is removed from the source system after the scan was executed, it will only be reflected (removed) in Microsoft Purview after the next scheduled full or incremental scan.
- An asset can be deleted from a Microsoft Purview catalog by using the **Delete** icon under the name of the asset. This action won't remove the object in the source. If you run a full scan on the same source, it would get reingested in the catalog. If you've scheduled a weekly or monthly scan instead (incremental), the deleted asset won't be picked unless the object is modified at the source. An example is if a column is added or removed from the table.
-- To understand the behavior of subsequent scans after *manually* editing a data asset or an underlying schema through the Microsoft Purview governance portal, see [Catalog asset details](./catalog-asset-details.md#scans-on-edited-assets).
+- To understand the behavior of subsequent scans after *manually* editing a data asset or an underlying schema through the Microsoft Purview governance portal, see [Catalog asset details](./catalog-asset-details.md#editing-assets).
- For more information, see the tutorial on [how to view, edit, and delete assets](./catalog-asset-details.md).

## Next steps
purview How To Managed Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-managed-attributes.md
+
+ Title: Managed attributes in the Microsoft Purview Data Catalog
+description: Apply business context to assets using managed attributes
+++++ Last updated : 07/25/2022++
+# Managed attributes in the Microsoft Purview Data Catalog (preview)
++
+Managed attributes are user-defined attributes that provide a business or organization level context to an asset. When applied, managed attributes enable data consumers using the data catalog to gain context on the role an asset plays in a business.
+
+## Terminology
+
+**Managed attribute:** A set of user-defined attributes that provide a business or organization level context to an asset. A managed attribute has a name and a value. For example, "Department" is an attribute name and "Finance" is its value.
+**Attribute group:** A grouping of managed attributes that allow for easier organization and consumption.
+
+## Create managed attributes in Microsoft Purview Studio
+
+In Microsoft Purview Studio, an organization's managed attributes are managed in the **Annotation management** section of the data map application. Follow the instructions below to create a managed attribute.
+
+1. Open the data map application and navigate to **Managed attributes (preview)** in the **Annotation management** section.
+1. Select **New**. Choose whether you wish to start by creating an attribute group or a managed attribute.
+ :::image type="content" source="media/how-to-managed-attributes/create-new-managed-attribute.png" alt-text="Screenshot that shows how to create a new managed attribute or attribute group.":::
+1. To create an attribute group, enter a name and a description.
+ :::image type="content" source="media/how-to-managed-attributes/create-attribute-group.png" alt-text="Screenshot that shows how to create an attribute group.":::
+1. Managed attributes have a name, attribute group, data type, and associated asset types. Attribute groups can be created in-line during the managed attribute creation process. Associated asset types are the asset types you can apply the attribute to. For example, if you select "Azure SQL Table" for an attribute, you can apply it to Azure SQL Table assets, but not Azure Synapse Dedicated Table assets.
+ :::image type="content" source="media/how-to-managed-attributes/create-managed-attribute.png" alt-text="Screenshot that shows how to create a managed attribute.":::
+1. Select **Create** to save your attribute.
+
+### Expiring managed attributes
+
+In the managed attribute management experience, managed attributes can't be deleted, only expired. Expired attributes can't be applied to any assets and are, by default, hidden in the user experience. By default, expired managed attributes aren't removed from an asset. If an asset has an expired managed attribute applied, it can only be removed, not edited.
+
+Both attribute groups and individual managed attributes can be expired. To mark an attribute group or managed attribute as expired, select the **Edit** icon.
++
+Select **Mark as expired** and confirm your change. Once expired, attribute groups and managed attributes can't be reactivated.
++
+## Apply managed attributes to assets in Microsoft Purview Studio
+
+Managed attributes can be applied in the [asset details page](catalog-asset-details.md) in the data catalog. Follow the instructions below to apply a managed attribute.
+
+1. Navigate to an asset by either searching or browsing the data catalog. Open the asset details page.
+1. Select **Edit** on the asset's action bar.
+ :::image type="content" source="media/how-to-managed-attributes/edit-asset.png" alt-text="Screenshot that shows how to edit an asset.":::
+1. In the managed attributes section of the editing experience, select **Add attribute**.
+1. Choose the attribute you wish to apply. Attributes are grouped by their attribute group.
+1. Choose the value or values of the applied attribute.
+1. Continue adding more attributes or select **Save** to apply your changes.
+
+## Create managed attributes using APIs
+
+Managed attributes can be programmatically created and applied using the business metadata APIs in Apache Atlas 2.2. For more information, see the [Use Atlas 2.2 APIs](tutorial-atlas-2-2-apis.md) tutorial.
+
+## Known limitations
+
+Below are the known limitations of the managed attribute feature as it currently exists in Microsoft Purview.
+
+- Managed attributes can only be expired, not deleted.
+- Managed attributes get matched to search keywords, but there's no user-facing filter in the search results. Managed attributes can be filtered using the Search APIs.
+- Managed attributes can't be applied via the bulk edit experience.
+- After creating an attribute group, you can't edit the name of the attribute group.
+- After creating a managed attribute, you can't update the attribute name, attribute group or the field type.
+
+## Next steps
+
+- After creating managed attributes, apply them to assets in the [asset details page](catalog-asset-details.md).
remote-rendering View Remote Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/view-remote-models/view-remote-models.md
public class RemoteRenderingCoordinator : MonoBehaviour
//Implement me }
- public void StopRemoteSession()
+ public async void StopRemoteSession()
{ //Implement me }
public class RemoteRenderingCoordinator : MonoBehaviour
/// <summary> /// Connects the local runtime to the current active session, if there's a session available /// </summary>
- public void ConnectRuntimeToRemoteSession()
+ public async void ConnectRuntimeToRemoteSession()
{ //Implement me }
public async void JoinRemoteSession()
} }
-public void StopRemoteSession()
+public async void StopRemoteSession()
{ if (ARRSessionService.CurrentActiveSession != null) {
- ARRSessionService.CurrentActiveSession.StopAsync();
+ await ARRSessionService.CurrentActiveSession.StopAsync();
} } ```
The application also needs to listen for events about the connection between the
/// <summary> /// Connects the local runtime to the current active session, if there's a session available /// </summary>
-public void ConnectRuntimeToRemoteSession()
+public async void ConnectRuntimeToRemoteSession()
{ if (ARRSessionService == null || ARRSessionService.CurrentActiveSession == null) {
public void ConnectRuntimeToRemoteSession()
return; }
- //Connect the local runtime to the currently connected session
- //This session is set when connecting to a new or existing session
+ // Connect the local runtime to the currently connected session
+ // This session is set when connecting to a new or existing session
ARRSessionService.CurrentActiveSession.ConnectionStatusChanged += OnLocalRuntimeStatusChanged;
- ARRSessionService.CurrentActiveSession.ConnectAsync(new RendererInitOptions());
- CurrentCoordinatorState = RemoteRenderingState.ConnectingToRuntime;
+ await ARRSessionService.CurrentActiveSession.ConnectAsync(new RendererInitOptions());
} public void DisconnectRuntimeFromRemoteSession()
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
Previously updated : 11/19/2021 Last updated : 7/20/2022
Search applications that are built on Azure Cognitive Search can now use Azure Active Directory (Azure AD) and Azure role-based access (Azure RBAC) for authenticated and authorized access. A key advantage of using Azure AD is that your credentials and API keys no longer need to be stored in your code. Azure AD authenticates the security principal (a user, group, or service principal) running the application. If authentication succeeds, Azure AD returns the access token to the application, and the application can then use the access token to authorize requests to Azure Cognitive Search. To learn more about the advantages of using Azure AD in your applications, see [Integrating with Azure Active Directory](../active-directory/develop/active-directory-how-to-integrate.md#benefits-of-integration).
-This article will show you how to configure your application for authentication with the [Microsoft identity platform](../active-directory/develop/v2-overview.md). To learn more about the OAuth 2.0 code grant flow used by Azure AD, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md).
+This article will show you how to configure your application for authentication with the [Microsoft identity platform](../active-directory/develop/v2-overview.md) using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md). To learn more about the OAuth 2.0 code grant flow used by Azure AD, see [Authorize access to Azure Active Directory web applications using the OAuth 2.0 code grant flow](../active-directory/develop/v2-oauth2-auth-code-flow.md).
## Prepare your search service
Once your subscription is added to the preview, you'll still need to enable RBAC
You can also change these settings programmatically as described in the [Azure Cognitive Search RBAC Documentation](./search-security-rbac.md?tabs=config-svc-rest%2croles-powershell%2ctest-rest#step-2-preview-configuration).
-## Register an application with Azure AD
+## Create a managed identity
-The next step to using Azure AD for authentication is to register an application with the [Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). If you have problems registering the application, make sure you have the [required permissions](../active-directory/develop/howto-create-service-principal-portal.md#permissions-required-for-registering-an-app).
+The next step to using Azure AD for authentication is to create a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) if you don't have one already. You can also use a different type of service principal object, but this article will focus on managed identities because they eliminate the need to manage credentials.
-1. Sign into your Azure Account in the [Azure portal](https://portal.azure.com).
+To create a managed identity:
-1. Search for **Azure Active Directory**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **App Registrations**.
+1. Search for **Managed Identities**.
-1. Select **+ New Registration**.
+1. Select **+ Create**.
-1. Give your application a name and select a supported account type, which determines who can use the application. Then, select **Register**.
+1. Give your managed identity a name and select a region. Then, select **Create**.
- :::image type="content" source="media/search-howto-aad/register-app.png" alt-text="Screenshot of the register an application wizard" border="true" :::
+ :::image type="content" source="media/search-howto-aad/create-managed-identity.png" alt-text="Screenshot of the create managed identity wizard." border="true" :::
-At this point, you've created your Azure AD application and service principal. Make a note of tenant (or directory) ID and the client (or application) ID on the overview page of your app registration. You'll need those values in a future step.
+## Assign a role to the managed identity
-## Create a client secret
-
-The application will also need a client secret or certificate to prove its identity when requesting a token.
-
-1. Navigate to the app registration you created.
-
-1. Select **Certificates and secrets**.
-
-1. Under **Client secrets**, select **New client secret**.
-
-1. Provide a description of the secret and select the desired expiration interval.
-
- :::image type="content" source="media/search-howto-aad/create-secret.png" alt-text="Screenshot of create a client secret wizard" border="true" :::
-
-Make sure to save the value of the secret in a secure location as you won't be able to access the value again.
-
-## Assign a role to the application
-
-Next, you need to grant your Azure AD application access to your search service. Azure Cognitive Search has various [built-in roles](search-security-rbac.md#built-in-roles-used-in-search). You can also create a [custom role](search-security-rbac.md#create-a-custom-role).
+Next, you need to grant your managed identity access to your search service. Azure Cognitive Search has various [built-in roles](search-security-rbac.md#built-in-roles-used-in-search). You can also create a [custom role](search-security-rbac.md#create-a-custom-role).
In general, it's best to give your application only the access required. For example, if your application only needs to be able to query the search index, you could grant it the [Search Index Data Reader (preview)](../role-based-access-control/built-in-roles.md#search-index-data-reader) role. Alternatively, if it needs to be able to read and write to a search index, you could use the [Search Index Data Contributor (preview)](../role-based-access-control/built-in-roles.md#search-index-data-contributor) role.
In general, it's best to give your application only the access required. For exa
1. Navigate to your search service.
-1. Select **Access Control (IAM)** in the left navigation pane.
+1. Select **Access control (IAM)** in the left navigation pane.
1. Select **+ Add** > **Add role assignment**.
In general, it's best to give your application only the access required. For exa
   + Contributor
   + Reader
   + Search Service Contributor
- + Search Index Data Contributor (preview)
+ + Search Index Data Contributor (preview)
+ Search Index Data Reader (preview)
-1. On the **Members** tab, select the Azure AD user or group identity under which your application runs.
+ For more information on the available roles, see [Built-in roles used in Search](search-security-rbac.md#built-in-roles-used-in-search).
+
+ > [!NOTE]
+ > The Owner, Contributor, Reader, and Search Service Contributor roles don't give you access to the data within a search index, so you can't query a search index or index data. To get access to the data within a search index, you need either the Search Index Data Contributor or Search Index Data Reader role.
+
+1. On the **Members** tab, select the managed identity that you want to give access to your search service.
1. On the **Review + assign** tab, select **Review + assign** to assign the role.
+You may want to give your managed identity multiple roles, such as Search Service Contributor and Search Index Data Contributor, if your application needs to both create and query indexes.
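+
+If you prefer to script the assignment, the following is a minimal PowerShell sketch. It assumes the Az PowerShell modules are installed and uses placeholder resource group, identity, and search service names that you'd replace with your own.
+
+```powershell
+# Sign in and select the subscription that contains the search service.
+Connect-AzAccount
+Select-AzSubscription -SubscriptionId "<subscription-id>"
+
+# Look up the user-assigned managed identity to get its principal (object) ID.
+$identity = Get-AzUserAssignedIdentity -ResourceGroupName "<resource-group>" -Name "<managed-identity-name>"
+
+# Resource ID of the search service, used as the role assignment scope.
+$searchServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service-name>"
+
+# Grant the identity read access to index data; swap in a different built-in role as needed.
+New-AzRoleAssignment -ObjectId $identity.PrincipalId `
+    -RoleDefinitionName "Search Index Data Reader" `
+    -Scope $searchServiceId
+```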
You can also [assign roles using PowerShell](./search-security-rbac.md?tabs=config-svc-rest%2croles-powershell%2ctest-rest#step-3-assign-roles).

## Set up Azure AD authentication in your client
-Once you have an Azure AD application created and you've granted it permissions to access your search service, you're ready you can add code to your application to authenticate a security principal and acquire an OAuth 2.0 token.
+Once you have a managed identity created and you've granted it permissions to access your search service, you're ready to add code to your application to authenticate the security principal and acquire an OAuth 2.0 token.
Azure AD authentication is also supported in the preview SDKs for [Java](https://search.maven.org/artifact/com.azure/azure-search-documents/11.5.0-beta.3/jar), [Python](https://pypi.org/project/azure-search-documents/11.3.0b3/), and [JavaScript](https://www.npmjs.com/package/@azure/search-documents/v/11.3.0-beta.3).
The following instructions reference an existing C# sample to demonstrate the co
1. As a starting point, clone the [source code](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v11) for the [C# quickstart](search-get-started-dotnet.md).
- The sample currently uses key-based authentication to create the `SearchClient` and `SearchIndexClient` but you can make a small change to switch over to role-based authentication. Instead of using `AzureKeyCredential` in the beginning of `Main()` in [Program.cs](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/quickstart/v11/AzureSearchQuickstart-v11/Program.cs), use this:
+ The sample currently uses key-based authentication and the `AzureKeyCredential` to create the `SearchClient` and `SearchIndexClient` but you can make a small change to switch over to role-based authentication.
+
+1. Next, import the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity/) library to get access to other authentication techniques.
- ```csharp
- AzureKeyCredential credential = new AzureKeyCredential(apiKey);
-
+1. Instead of using `AzureKeyCredential` in the beginning of `Main()` in [Program.cs](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/quickstart/v11/AzureSearchQuickstart-v11/Program.cs), use `DefaultAzureCredential` like in the code snippet below:
+
+ ```csharp
// Create a SearchIndexClient to send create/delete index commands
- SearchIndexClient adminClient = new SearchIndexClient(serviceEndpoint, credential);
+ SearchIndexClient adminClient = new SearchIndexClient(serviceEndpoint, new DefaultAzureCredential());
// Create a SearchClient to load and query documents
- SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, credential);
+ SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, new DefaultAzureCredential());
```
-1. Use `ClientSecretCredential` to authenticate with the search service. First, import the [Azure.Identity](https://www.nuget.org/packages/Azure.Identity/) library to use `ClientSecretCredential`.
-
-1. You'll need to provide the following strings:
-
- + The tenant (or directory) ID. This can be retrieved from the overview page of your app registration.
- + The client (or application) ID. This can be retrieved from the overview page of your app registration.
- + The value of the client secret that you copied in a preview step.
-
- ```csharp
- var tokenCredential = new ClientSecretCredential(aadTenantId, aadClientId, aadSecret);
- SearchIndexClient adminClient = new SearchIndexClient(serviceEndpoint, tokenCredential);
- ```
+> [!NOTE]
+> User-assigned managed identities work only in Azure environments. If you run this code locally, `DefaultAzureCredential` will fall back to authenticating with your credentials. Make sure you've also given yourself the required access to the search service if you plan to run the code locally.
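+
+For local runs, a hedged sketch of granting your own account data-plane access with PowerShell (placeholder values shown):
+
+```powershell
+# Assign a data-plane role to your signed-in account so that local runs using
+# DefaultAzureCredential can read and write index data.
+New-AzRoleAssignment -SignInName "you@contoso.com" `
+    -RoleDefinitionName "Search Index Data Contributor" `
+    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service-name>"
+```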
-The Azure.Identity documentation also has additional details on using [Azure AD authentication with the Azure SDK for .NET](/dotnet/api/overview/azure/identity-readme).
+The Azure.Identity documentation has more details on using [Azure AD authentication with the Azure SDK for .NET](/dotnet/api/overview/azure/identity-readme), including how `DefaultAzureCredential` works and the other authentication techniques that are available. `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control, or whose scenario isn't served by the default settings, should use other credential types.
### [**REST API**](#tab/aad-rest)
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Previously updated : 07/01/2022 Last updated : 07/22/2022 # What is Azure Cognitive Search? Azure Cognitive Search ([formerly known as "Azure Search"](whats-new.md#new-service-name)) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
-Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, online retail, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:
+Search is foundational to any app that surfaces text to users, where common scenarios include catalog or document search, online retail apps, or data exploration over proprietary content. When you create a search service, you'll work with the following capabilities:
-+ A search engine for full text search over a search index containing your user-owned content
-+ Rich indexing, with [text analysis](search-analyzers.md) and [optional AI enrichment](cognitive-search-concept-intro.md) for content extraction and transformation
-+ Rich query syntax that supplements free text search with filters, autocomplete, regex, geo-search and more
-+ Programmability through REST APIs and client libraries in Azure SDKs for .NET, Python, Java, and JavaScript
++ A search engine for full text search over a search index containing user-owned content
++ Rich indexing, with [lexical analysis](search-analyzers.md) and [optional AI enrichment](cognitive-search-concept-intro.md) for content extraction and transformation
++ Rich query syntax for text search, fuzzy search, autocomplete, geo-search and more
++ Programmability through REST APIs and client libraries in Azure SDKs
+ Azure integration at the data layer, machine learning layer, and AI (Cognitive Services)
+> [!div class="nextstepaction"]
+> [Create a search service](search-create-service-portal.md)
Architecturally, a search service sits between the external data stores that contain your un-indexed data, and your client app that sends query requests to a search index and handles the response.

![Azure Cognitive Search architecture](media/search-what-is-azure-search/azure-search-diagram.svg "Azure Cognitive Search architecture")
+In your client app, the search experience is defined using APIs from Azure Cognitive Search, and can include relevance tuning, semantic ranking, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
Across the Azure platform, Cognitive Search can integrate with other Azure services in the form of *indexers* that automate data ingestion/retrieval from Azure data sources, and *skillsets* that incorporate consumable AI from Cognitive Services, such as image and natural language processing, or custom AI that you create in Azure Machine Learning or wrap inside Azure Functions.

## Inside a search service

On the search service itself, the two primary workloads are *indexing* and *querying*.
-+ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into to your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes for fast scans. You can upload any text that is in the form of JSON documents.
-
- Additionally, if your content includes mixed files, you have the option of adding *AI enrichment* through [cognitive skills](cognitive-search-working-with-skillsets.md). AI enrichment can extract text embedded in application files, and also infer text and structure from non-text files by analyzing the content.
++ [**Indexing**](search-what-is-an-index.md) is an intake process that loads content into your search service and makes it searchable. Internally, inbound text is processed into tokens and stored in inverted indexes for fast scans. You can upload JSON documents, or use an indexer to serialize your data into JSON.
- The skills providing the analysis are predefined ones from Microsoft, or custom skills that you create. The subsequent analysis and transformations can result in new information and structures that didn't previously exist, providing high utility for many search and knowledge mining scenarios.
+ Additionally, if your content includes mixed file types, you have the option of adding *AI enrichment* through [cognitive skills](cognitive-search-working-with-skillsets.md). AI enrichment can extract text embedded in application files, and also infer text and structure from non-text files by analyzing the content.
-+ [**Querying**](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you create, own, and store in your service. In your client app, the search experience is defined using APIs from Azure Cognitive Search, and can include relevance tuning, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
-
-Functionality is exposed through simple [REST APIs](/rest/api/searchservice/), or Azure SDKs like the [Azure SDK for .NET](search-howto-dotnet-sdk.md), that mask the inherent complexity of information retrieval. You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets. Because the service runs in the cloud, infrastructure and availability are managed by Microsoft.
++ [**Querying**](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you control.

## Why use Cognitive Search?
Azure Cognitive Search is well suited for the following application scenarios:
+ Easily implement search-related features: relevance tuning, faceted navigation, filters (including geo-spatial search), synonym mapping, and autocomplete.
-+ Transform large undifferentiated text or image files, or application files stored in Azure Blob Storage or Cosmos DB, into searchable JSON documents. This is achieved during indexing through [cognitive skills](cognitive-search-concept-intro.md) that add external processing.
++ Transform large undifferentiated text or image files, or application files stored in Azure Blob Storage or Cosmos DB, into searchable chunks. This is achieved during indexing through [cognitive skills](cognitive-search-concept-intro.md) that add external processing.

+ Add linguistic or custom text analysis. If you have non-English content, Azure Cognitive Search supports both Lucene analyzers and Microsoft's natural language processors. You can also configure analyzers to achieve specialized processing of raw content, such as filtering out diacritics, or recognizing and preserving patterns in strings.
For more information about specific functionality, see [Features of Azure Cognit
## How to get started
+Functionality is exposed through simple [REST APIs](/rest/api/searchservice/), or Azure SDKs like the [Azure SDK for .NET](search-howto-dotnet-sdk.md).
+
+You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets.
An end-to-end exploration of core search features can be accomplished in four steps:

1. [**Decide on a tier**](search-sku-tier.md) and region. One free search service is allowed per subscription. All quickstarts can be completed on the free tier. For more capacity and capabilities, you'll need a [billable tier](https://azure.microsoft.com/pricing/details/search/).
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
# Azure Certificate Authority details
-This article provides the details of the root and subordinate Certificate Authorities (CAs) utilized by Azure. The minimum requirements for public key encryption and signature algorithms as well as links to certificate downloads and revocation lists are provided below the CA details tables.
+This article provides the details of the root and subordinate Certificate Authorities (CAs) utilized by Azure. The scope includes government and national clouds. The minimum requirements for public key encryption and signature algorithms as well as links to certificate downloads and revocation lists are provided below the CA details tables.
Looking for CA details specific to Azure Active Directory? See the [Certificate authorities used by Azure Active Directory](../../active-directory/fundamentals/certificate-authorities.md) article.
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
The next section on writing rules explains how to use KQL in the specific contex
#### Below is the recommended journey for learning Sentinel KQL:

* [Pluralsight KQL course](https://www.pluralsight.com/courses/kusto-query-language-kql-from-scratch) - the basics
+* [Must Learn KQL](https://aka.ms/MustLearnKQL) - A 20-part KQL series that walks you from the basics through creating your first analytics rule. Includes an assessment and certificate.
* The Microsoft Sentinel KQL Lab: An interactive lab teaching KQL focusing on what you need for Microsoft Sentinel:
  * [Learning module (SC-200 part 4)](/learn/paths/sc-200-utilize-kql-for-azure-sentinel/)
  * [Presentation](https://onedrive.live.com/?authkey=%21AJRxX475AhXGQBE&cid=66C31D2DBF8E0F71&id=66C31D2DBF8E0F71%21740&parId=66C31D2DBF8E0F71%21446&o=OneUp), [Lab URL](https://aka.ms/lademo)
service-fabric Cluster Resource Manager Subclustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/cluster-resource-manager-subclustering.md
For example, if we have a node property called NodeColor and we have three nodes
* Node 1: NodeColor=Red
* Node 2: NodeColor=Blue
-* Node 2: NodeColor=Green
+* Node 3: NodeColor=Green
And we have two
service-fabric How To Managed Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-configuration.md
In addition to selecting the [Service Fabric managed cluster SKU](overview-manag
* Configure [placement properties](how-to-managed-cluster-modify-node-type.md#configure-placement-properties-for-a-node-type) for a node type
* Selecting the cluster [managed disk type](how-to-managed-cluster-managed-disk.md) SKU
* Configuring cluster [upgrade options](how-to-managed-cluster-upgrades.md) for the runtime updates
+* Configure [Dedicated Hosts](how-to-managed-cluster-dedicated-hosts.md) with a managed cluster
+* Use [Ephemeral OS disks](how-to-managed-cluster-ephemeral-os-disks.md) for node types in a managed cluster
## Next steps
service-fabric How To Managed Cluster Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-dedicated-hosts.md
+
+ Title: Add Azure Dedicated Host to a Service Fabric managed cluster (SFMC)
+description: Learn how to add Azure Dedicated Host to a Service Fabric managed cluster (SFMC)
+ Last updated : 7/14/2022++
+# Introduction to Dedicated Hosts on Service Fabric managed clusters (Preview)
+[Azure Dedicated Host](../virtual-machines/dedicated-hosts.md) is a service that provides physical servers - able to host one or more virtual machines - dedicated to one Azure subscription. The server is dedicated to your organization and workloads and capacity isn't shared with anyone else. Dedicated hosts are the same physical servers used in our data centers, provided as a resource. You can provision dedicated hosts within a region, availability zone, and fault domain. Then, you can place VMs directly into your provisioned hosts, in whatever configuration best meets your needs.
+
+Using Azure Dedicated Hosts for nodes with your Service Fabric managed cluster (SFMC) has the following benefits:
+
+* Host-level hardware isolation at the physical server level. No other VMs will be placed on your hosts. Dedicated hosts are deployed in the same data centers and share the same network and underlying storage infrastructure as other, non-isolated hosts.
+* Control over maintenance events initiated by the Azure platform. While most maintenance events have little to no impact on virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt into a maintenance window to reduce the impact on service.
+
+You can choose the SKU for Dedicated Hosts Virtual Machines based on your workload requirements. For more information, see [Dedicated Host Virtual Machines](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/).
+
+The following guide takes you step by step through adding an Azure Dedicated Host to a Service Fabric managed cluster by using an Azure Resource Manager template.
++
+## Prerequisites
+This guide builds upon the managed cluster quick start guide: [Deploy a Service Fabric managed cluster using Azure Resource Manager](quickstart-managed-cluster-template.md)
+
+Before you begin:
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
+* Retrieve a managed cluster ARM template. Sample Resource Manager templates are available in the [Azure samples on GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). These templates can be used as a starting point for your cluster template. This guide shows how to deploy a Standard SKU cluster with two node types and 12 nodes.
+* The user needs Microsoft.Authorization/roleAssignments/write permissions on the host group, such as [User Access Administrator](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#owner), to create role assignments in the host group. For more information, see [Assign Azure roles using the Azure portal - Azure RBAC](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal?tabs=current#prerequisites).
++
+## Review the template
+The template used in this guide is from [Azure Samples - Service Fabric cluster templates](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH).
+
+## Create a client certificate
+Service Fabric managed clusters use a client certificate as a key for access control. If you already have a client certificate that you would like to use for access control to your cluster, you can skip this step.
+
+If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal). Note the certificate thumbprint as it will be required to deploy the template in the next step.
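+
+If you just need a quick certificate for testing (not production), one possible sketch using the Windows PKI module is:
+
+```powershell
+# Create a self-signed client certificate in the current user's store (test use only).
+$cert = New-SelfSignedCertificate -Subject "CN=SfmcClientCert" -CertStoreLocation "Cert:\CurrentUser\My"
+
+# Note the thumbprint; it's required when you deploy the cluster template.
+$cert.Thumbprint
+```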
+
+## Deploy Dedicated Host resources and configure access to Service Fabric Resource Provider
+
+Create a dedicated host group and add a role assignment to the host group with the Service Fabric Resource Provider application using the steps below. This role assignment allows Service Fabric Resource Provider to deploy VMs on the Dedicated Hosts inside the host group to the managed cluster's virtual machine scale set. This assignment is a one-time action.
+
+1. Get the provider ID and service principal for the Service Fabric Resource Provider (SFRP) application.
+
+ ```powershell
+ Login-AzAccount
+ Select-AzSubscription -SubscriptionId <SubId>
+ Get-AzADServicePrincipal -DisplayName "Azure Service Fabric Resource Provider"
+ ```
+
+ >[!NOTE]
+    > Make sure you are in the correct subscription; the principal ID will change if the subscription is in a different tenant.
++
+2. Create a dedicated host group pinned to one availability zone and five fault domains using the provided [sample ARM deployment template for Dedicated Host Group](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH). The sample will ensure there is at least one dedicated host per fault domain.
+ ```powershell
+ New-AzResourceGroup -Name $ResourceGroupName -Location $location
+ New-AzResourceGroupDeployment -Name "hostgroup-deployment" -ResourceGroupName $ResourceGroupName -TemplateFile ".\HostGroup-And-RoleAssignment.json" -TemplateParameterFile ".\HostGroup-And-RoleAssignment.parameters.json" -Debug -Verbose
+ ```
+
+ >[!NOTE]
+ > * Ensure you choose the correct SKU family for the Dedicated Host that matches the one you are going to use for the underlying node type VM SKU. For more information, see [Dedicated Host Virtual Machines](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/).
+ > * Each fault domain needs a dedicated host to be placed in it and Service Fabric managed clusters require five fault domains. Therefore, at least five dedicated hosts should be present in each dedicated host group.
++
+3. The [sample ARM deployment template for Dedicated Host Group](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH) used in the previous step also adds a role assignment to the host group with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](https://docs.microsoft.com/azure/role-based-access-control/built-in-roles#all). This role assignment is defined in the resources section of the template, with the principal ID determined in the first step and a role definition ID.
+
+ ```JSON
+ "variables": {
+ "authorizationApiVersion": "2018-01-01-preview",
+ "contributorRoleId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
+ "SFRPAadServicePrincipalId": " <Service Fabric Resource Provider ID> -"
+ },
+ "resources": [
+ {
+ "apiVersion": "[variables('authorizationApiVersion')]",
+ "type": "Microsoft.Compute/Hostgroups/providers/roleAssignments",
+ "name": "[concat(concat(parameters('dhgNamePrefix'), '0'), '/Microsoft.Authorization/', parameters('hostGroupRoleAssignmentId'))]",
+ "dependsOn": [
+ "[resourceId('Microsoft.Compute/hostGroups', concat(parameters('dhgNamePrefix'), '0'))]"
+ ],
+ "properties": {
+ "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/', variables('contributorRoleId'))]",
+ "principalId": "[variables('SFRPAadServicePrincipalId')]"
+ }
+ }
+ ]
+ ```
+
+    Alternatively, you can add the role assignment via PowerShell by using the principal ID determined in the first step and the role definition name "Contributor", where applicable.
+
+ ```powershell
+ New-AzRoleAssignment -PrincipalId "<Service Fabric Resource Provider ID>" -RoleDefinitionName "Contributor" -Scope "<Host Group Id>"
+ ```
++
+## Deploy Service Fabric managed cluster
+
+Create an Azure Service Fabric managed cluster with node type(s) configured to reference the Dedicated Host group ResourceId. The node type needs to be pinned to the same availability zone as the host group.
+1. Pick the template from [Service Fabric cluster sample template for Dedicated Host](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH), which includes specification for Dedicated Host support.
++
+2. Provide your own values for the following template parameters:
+
+ * Subscription: Select the same Azure subscription as the host group subscription.
+ * Resource Group: Select Create new. Enter a unique name for the resource group, such as myResourceGroup, then choose OK.
+ * Location: Select the same location as the host group location.
+ * Cluster Name: Enter a unique name for your cluster, such as mysfcluster.
+ * Admin Username: Enter a name for the admin to be used for RDP on the underlying VMs in the cluster.
+ * Admin Password: Enter a password for the admin to be used for RDP on the underlying VMs in the cluster.
+ * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal) to create a self-signed certificate.
+ * Node Type Name: Enter a unique name for your node type, such as nt1.
+
+3. Deploy an ARM template through one of the methods below:
+
+ * ARM portal custom template experience: [Custom deployment - Microsoft Azure](https://ms.portal.azure.com/#create/Microsoft.Template). Select the following image to sign in to Azure, and provide your own values for the template parameters, then deploy the template.
+
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fservice-fabric-cluster-templates%2Fmaster%2FSF-Managed-Standard-SKU-2-NT-ADH%2Fazuredeploy.json)
+
+ * ARM PowerShell cmdlets: [New-AzResourceGroupDeployment (Az.Resources)](https://docs.microsoft.com/powershell/module/az.resources/new-azresourcegroupdeployment). Store the paths of your ARM template and parameter files in variables, then deploy the template.
+
+ ```powershell
+ $templateFilePath = "<full path to azuredeploy.json>"
+ $parameterFilePath = "<full path to azuredeploy.parameters.json>"
+ $pass = (ConvertTo-SecureString -AsPlainText -Force "<adminPassword>")
+
+ New-AzResourceGroupDeployment `
+ -Name $DeploymentName `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile $templateFilePath `
+ -TemplateParameterFile $parameterFilePath `
+ -adminPassword $pass `
+ -Debug -Verbose
+ ```
+
+ Wait for the deployment to be completed successfully.
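+
+    A quick way to confirm the result is to query the deployment's provisioning state; this is a sketch that reuses the variable names from the snippet above:
+
+    ```powershell
+    # Check the provisioning state of the deployment (expect "Succeeded").
+    Get-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -Name $DeploymentName |
+        Select-Object DeploymentName, ProvisioningState, Timestamp
+    ```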
+
+
+## Troubleshooting
+
+1. The following error is thrown when SFRP doesn't have access to the host group. Review the role assignment steps above and ensure the assignment is done correctly.
+ ```
+ {
+ "code": "LinkedAuthorizationFailed",
+ "message": "The client '[<clientId>]' with object id '[<objectId>]' has permission to perform action 'Microsoft.Compute/virtualMachineScaleSets/write' on scope '/subscriptions/[<Subs-Id>]/resourcegroups/[<ResGrp-Id>]/providers/Microsoft.Compute/virtualMachineScaleSets/pnt'; however, it does not have permission to perform action 'write' on the linked scope(s) '/subscriptions/[<Subs-Id>]/resourceGroups/[<ResGrp-Id>]/providers/Microsoft.Compute/hostGroups/HostGroupscu0' or the linked scope(s) are invalid."
+ }
+ ```
+2. If the host group is in a different subscription than the cluster, the following error is reported. Ensure that both are in the same subscription.
+ ```
+ {
+ "code": "BadRequest",
+ "message": "Entity subscriptionId in resource reference id /subscriptions/[<Subs-Id>]/resourceGroups/[<ResGrp-Id>]/providers/Microsoft.Compute/hostGroups/[<HostGroup>] is invalid."
+ }
+ ```
+3. If the quota for the host group isn't sufficient, the following error is thrown:
+ ```
+ {
+ "code": "QuotaExceeded",
+ "message": "Operation could not be completed as it results in exceeding approved standardDSv3Family Cores quota.
    Additional Required: 320, (Minimum) New Limit Required: 320. Submit a request for Quota increase [here](https://aka.ms/ProdportalCRP/#blade/Microsoft_Azure_Capacity/UsageAndQuota.ReactView/Parameters/). Please read more about quota limits [here](https://docs.microsoft.com/azure/azure-supportability/per-vm-quota-requests)"
+ }
+ ```
+## Next steps
+> [!div class="nextstepaction"]
+> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
service-fabric How To Managed Cluster Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-ephemeral-os-disks.md
+
+ Title: Create a Service Fabric managed cluster (SFMC) with Ephemeral OS disks for node types
+description: Learn how to create a Service Fabric managed cluster (SFMC) with Ephemeral OS disks for node types
+ Last updated : 7/14/2022++
+# Introduction to Service Fabric managed cluster with Ephemeral OS disks for node types (Preview)
+Azure Service Fabric managed clusters by default use managed OS disks for the nodes in a given node type. To be more cost efficient, managed clusters provide the ability to configure Ephemeral OS disks. Ephemeral OS disks are created on the local virtual machine (VM) storage and not saved to the remote Azure Storage. Ephemeral OS disks are free and replace the need to use managed OS disks.
+
+The key benefits of ephemeral OS disks are:
+
+* Lower read/write latency, similar to a temporary disk, along with faster node scaling and cluster upgrades.
+* Supported by Marketplace, custom images, and by [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md) (formerly known as Shared Image Gallery).
+* Ability to fast reset or reimage VMs and scale set instances to the original boot state.
+* Available in all Azure regions.
+
+Ephemeral OS disks work well where applications are tolerant of individual VM failures but are more affected by VM deployment time or reimaging of individual VM instances. Unlike managed OS disks, they don't provide a data backup and restore guarantee.
+
+This article describes how to create Service Fabric managed cluster node types with Ephemeral OS disks by using an Azure Resource Manager template (ARM template).
+
+## Prerequisites
+This guide builds upon the managed cluster quick start guide: [Deploy a Service Fabric managed cluster using Azure Resource Manager](https://docs.microsoft.com/azure/service-fabric/quickstart-managed-cluster-template)
+
+Before you begin:
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free)
+* Retrieve a managed cluster ARM template. Sample Resource Manager templates are available in the [Azure samples on GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). These templates can be used as a starting point for your cluster template.
+* Ephemeral OS disks are supported for both primary and secondary node types. This guide shows how to deploy a Standard SKU cluster with two node types - a primary and a secondary node type - that use Ephemeral OS disks.
+* Ephemeral OS disks aren't supported on every SKU. VM sizes such as DSv1, DSv2, DSv3, Esv3, Fs, FsV2, GS, M, Mdsv2, Bs, Dav4, and Eav4 support Ephemeral OS disks. Ensure that the SKU you want to deploy with supports Ephemeral OS disks. For more information on an individual SKU, see [supported VM SKUs](https://docs.microsoft.com/azure/virtual-machines/dv3-dsv3-series) and navigate to the desired SKU in the left pane.
+* Ephemeral OS disks in Service Fabric are placed in the VM SKU's temporary disk space. Ensure that the VM SKU you're using has more than 127 GiB of temporary disk space to hold the Ephemeral OS disk; a quick way to check is shown after this list.
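+
+The following is a minimal PowerShell sketch for checking the temporary (resource) disk size of the VM sizes in a region; the region name is a placeholder, and the Az.Compute module is assumed:
+
+```powershell
+# List VM sizes whose temporary (resource) disk is larger than 127 GiB,
+# the minimum needed to hold an Ephemeral OS disk.
+Get-AzVMSize -Location "<region>" |
+    Where-Object { $_.ResourceDiskSizeInMB -gt (127 * 1024) } |
+    Select-Object Name, ResourceDiskSizeInMB
+```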
+
+## Review the template
+The template used in this guide is from [Azure Samples - Service Fabric cluster templates](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-Ephemeral).
++
+## Create a client certificate
+Service Fabric managed clusters use a client certificate as a key for access control. If you already have a client certificate that you would like to use for access control to your cluster, you can skip this step.
+
+If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal). Note the certificate thumbprint as it will be required to deploy the template in the next step.
+
+## Deploy the template
+
+1. Pick the template from [Service Fabric cluster sample template for Ephemeral OS disk](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-Ephemeral), which includes specification for Ephemeral OS disks support.
+
+2. Provide your own values for the following template parameters:
+
+ * Subscription: Select an Azure subscription.
+ * Resource Group: Select Create new. Enter a unique name for the resource group, such as myResourceGroup, then choose OK.
+ * Location: Select a location.
+ * Cluster Name: Enter a unique name for your cluster, such as mysfcluster.
+ * Admin Username: Enter a name for the admin to be used for RDP on the underlying VMs in the cluster.
+ * Admin Password: Enter a password for the admin to be used for RDP on the underlying VMs in the cluster.
+ * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](https://docs.microsoft.com/azure/key-vault/certificates/quick-create-portal) to create a self-signed certificate.
+ * Node Type Name: Enter a unique name for your node type, such as nt1.
++
+3. Deploy an ARM template through one of the methods below:
+
+ * ARM portal custom template experience: [Custom deployment - Microsoft Azure](https://ms.portal.azure.com/#create/Microsoft.Template). Select the following image to sign in to Azure, and provide your own values for the template parameters, then deploy the template.
+
+ [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fservice-fabric-cluster-templates%2Fmaster%2FSF-Managed-Standard-SKU-2-NT-Ephemeral%2Fazuredeploy.json)
++
+ * ARM PowerShell cmdlets: [New-AzResourceGroupDeployment (Az.Resources)](https://docs.microsoft.com/powershell/module/az.resources/new-azresourcegroupdeployment). Store the paths of your ARM template and parameter files in variables, then deploy the template.
+
+ ```powershell
+ $templateFilePath = "<full path to azuredeploy.json>"
+ $parameterFilePath = "<full path to azuredeploy.parameters.json>"
+
+ New-AzResourceGroupDeployment `
+ -Name $DeploymentName `
+ -ResourceGroupName $resourceGroupName `
+ -TemplateFile $templateFilePath `
+ -TemplateParameterFile $parameterFilePath `
+ -Debug -Verbose
+ ```
+ Wait for the deployment to be completed successfully.
+
+4. To configure a node type to use Ephemeral OS disks through your own template:
+ * Use Service Fabric API version 2022-06-01-preview and above
+ * Edit the template, azuredeploy.json, and add the following properties under the node type section:
+ ```JSON
+ "properties": {
+ "useEphemeralOSDisk": true
+ }
+ ```
+ Sample template is available that includes these specifications: [Azure-Sample - Service Fabric cluster template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-Ephemeral).
++
+## Migrate to using Ephemeral OS disks for Service Fabric managed cluster node types
+A node type can only be configured to use Ephemeral OS disk at the time of creation. Existing node types can't be converted to use Ephemeral OS disks. For all migration scenarios, add a new node type with Ephemeral OS disk to the cluster and migrate your services to that node type.
+
+1. Add a new node type that's configured to use Ephemeral OS disk as specified earlier.
+2. Migrate any required workloads to the new node type.
+3. Disable and remove the old node type from the cluster.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
service-fabric Service Fabric Cluster Node Auto Repair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-node-auto-repair.md
+
+ Title: Service Fabric managed cluster automatic node repair
+description: Learn how Azure Service Fabric managed cluster performs automatic node repair if they go down.
+ Last updated : 07/18/2022++
+# Azure Service Fabric managed cluster (SFMC) node auto-repair
+
+Service Fabric managed cluster (SFMC) has added a capability to help keep a cluster healthy automatically via node auto-repair, further reducing operational management required. This new capability will detect when nodes are down in a cluster and attempt to repair them without customer intervention. In this document, you'll learn how automatic node repair works for Service Fabric managed cluster nodes.
+
+## How SFMC checks when nodes are down
+
+Service Fabric managed cluster continuously monitors the health of nodes and records the time when a node goes up and down. If a node is detected to be down for a pre-defined period, SFMC initiates automatic repair actions on the node. This pre-defined period is currently configured as 24 hours at launch and can be optimized in the future.
+
+## How automatic repair works
+
+SFMC performs the following repair actions on the underlying virtual machine (VM) if a Service Fabric node is detected as down for 24 hours:
+
+1) Reboot the underlying VM for the node.
+2) If reboot doesn't bring up the node, redeploy the node.
+3) If redeploying doesn't bring up the node, deallocate the VM and then start it again.
+4) If the deallocation doesn't bring up the node, reimage the node.
+
+SFMC waits for nodes to come back up after each action, and if a node doesn't come up, SFMC proceeds to the next action. Node auto-repair actions typically take approximately 30 minutes once started, but can take upwards of three hours to iterate through and complete the full set of actions described. No further retries are made if the node is still down after SFMC has tried all the repair actions above. If auto-repair doesn't bring the node up, Service Fabric engineers investigate alternative remediations.
+
+If SFMC finds multiple nodes to be down during a health check, each node gets repaired individually before another repair begins. SFMC attempts to repair nodes in the same order that they're detected down.
+
+While node auto-repair covers the scenario described above, customers should continue to monitor the health of their cluster and its resources. The goal of this feature is to take away some of the burden of cluster management and operations.
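+
+For example, a minimal sketch for spot-checking node status with the Service Fabric PowerShell module (the endpoint and thumbprints are placeholders):
+
+```powershell
+# Connect to the cluster endpoint using the client certificate (placeholder values).
+Connect-ServiceFabricCluster -ConnectionEndpoint "<cluster-fqdn>:19000" `
+    -X509Credential -FindType FindByThumbprint -FindValue "<client-cert-thumbprint>" `
+    -StoreLocation CurrentUser -StoreName My `
+    -ServerCertThumbprint "<server-cert-thumbprint>"
+
+# List each node's status and health state to spot nodes that are down.
+Get-ServiceFabricNode | Select-Object NodeName, NodeStatus, HealthState
+```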
+
+## Future Roadmap
+
+This launch is the first iteration of the node auto-repair capability, and SFMC will continue to improve and expand its scope in the future.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
[Update rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) provides the following updates:
+> [!Note]
+> - The 9.49 version has not been released for the VMware to Azure replication preview experience.
**Update** | **Details**
--- | ---
**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
site-recovery Vmware Azure Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-reprotect.md
After [failover](site-recovery-failover.md) of on-premises VMware VMs or physica
4. If you're reprotecting VMs gathered into a replication group for multi-VM consistency, make sure they all have the same operating system (Windows or Linux) and make sure that the master target server you deploy has the same type of operating system. All VMs in a replication group must use the same master target server. 5. Open [the required ports](vmware-azure-prepare-failback.md#ports-for-reprotectionfailback) for failback. 6. Ensure that the vCenter Server is connected before failback. Otherwise, disconnecting disks and attaching them back to the virtual machine fails.
-7. If a vCenter server manages the VMs to which you'll fail back, make sure that you have the required permissions. If you perform a read-only user vCenter discovery and protect virtual machines, protection succeeds, and failover works. However, during reprotection, failover fails because the datastores can't be discovered, and aren't listed during reprotection. To resolve this problem, you can update the vCenter credentials with an [appropriate account/permissions](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery), and then retry the job.
+7. If a vCenter server manages the VMs to which you'll fail back, make sure that you have the required permissions. If you perform a read-only user vCenter discovery and protect virtual machines, protection succeeds, and failover works. However, during reprotection, failover fails because the data stores can't be discovered, and aren't listed during reprotection. To resolve this problem, you can update the vCenter credentials with an [appropriate account/permissions](vmware-azure-tutorial-prepare-on-premises.md#prepare-an-account-for-automatic-discovery), and then retry the job.
8. If you used a template to create your virtual machines, ensure that each VM has its own UUID for the disks. If the on-premises VM UUID clashes with the UUID of the master target server because both were created from the same template, reprotection fails. Deploy from a different template.
-9. If you're failing back to an alternate vCenter Server, make sure that the new vCenter Server and the master target server are discovered. Typically if they're not the datastores aren't accessible, or aren't visible in **Reprotect**.
+9. If you're failing back to an alternate vCenter Server, make sure that the new vCenter Server and the master target server are discovered. Typically if they're not the data stores aren't accessible, or aren't visible in **Reprotect**.
10. Verify the following scenarios in which you can't fail back: - If you're using either the ESXi 5.5 free edition or the vSphere 6 Hypervisor free edition. Upgrade to a different version. - If you have a Windows Server 2008 R2 SP1 physical server.
Enable replication. You can reprotect specific VMs, or a recovery plan:
### Before you start

- After a VM boots in Azure after failover, it takes some time for the agent to register back to the configuration server (up to 15 minutes). During this time, you won't be able to reprotect and an error message indicates that the agent isn't installed. If this happens, wait for a few minutes, and then reprotect.
-- If you want to fail back the Azure VM to an existing on-premises VM, mount the on-premises VM datastores with read/write access on the master target server's ESXi host.
-- If you want to fail back to an alternate location, for example if the on-premises VM doesn't exist, select the retention drive and datastore that are configured for the master target server. When you fail back to the on-premises site, the VMware virtual machines in the failback protection plan use the same datastore as the master target server. A new VM is then created in vCenter.
+- If you want to fail back the Azure VM to an existing on-premises VM, mount the on-premises VM data stores with read/write access on the master target server's ESXi host.
+- If you want to fail back to an alternate location, for example if the on-premises VM doesn't exist, select the retention drive and data store that are configured for the master target server. When you fail back to the on-premises site, the VMware virtual machines in the failback protection plan use the same data store as the master target server. A new VM is then created in vCenter.
Enable reprotection as follows:
-1. Select **Vault** > **Replicated items**. Right-click the virtual machine that failed over, and then select **Re-Protect**. Or, from the command buttons, select the machine, and then select **Re-Protect**.
-2. Verify that the **Azure to On-premises** direction of protection is selected.
-3. In **Master Target Server** and **Process Server**, select the on-premises master target server and the process server.
-4. For **Datastore**, select the datastore to which you want to recover the disks on-premises. This option is used when the on-premises virtual machine is deleted, and you need to create new disks. This option is ignored if the disks already exist. You still need to specify a value.
+1. Select **Vault** > **Replicated items**. Right-click the virtual machine that failed over, and then select **Re-protect**. Or, from the command buttons, select the machine, and then select **Re-protect**.
+2. Verify that the **Azure to on-premises** direction of protection is selected.
+3. In **MASTER TARGET** and **PROCESS SERVER**, select the on-premises master target server and the process server.
+4. For **DATA STORE**, select the data store to which you want to recover the disks on-premises. This option is used when the on-premises virtual machine is deleted, and you need to create new disks. This option is ignored if the disks already exist. You still need to specify a value.
5. Select the retention drive. 6. The failback policy is automatically selected. 7. Select **OK** to begin reprotection.
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
When using a private endpoint the connection string is `myaccount.myuser@myaccou
## Networking considerations
-When using SFTP, you may want to limit public access through configuration of a firewall, virtual network, or private endpoint. These settings are enforced at the application layer, which means they are not specific to SFTP and will impact connectivity to all Azure Storage Endpoints. For more information on firewalls and network configuration, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md).
+SFTP is a platform-level service, so port 22 will be open even if the account option is disabled. If SFTP access is not configured, all requests will receive a disconnect from the service. When using SFTP, you may want to limit public access through configuration of a firewall, virtual network, or private endpoint. These settings are enforced at the application layer, which means they aren't specific to SFTP and will impact connectivity to all Azure Storage endpoints. For more information on firewalls and network configuration, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md).
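+
+One possible PowerShell sketch for tightening public access (placeholder names; adjust the rules to your own network):
+
+```powershell
+# Deny public network access by default, then allow a specific IP range (placeholders).
+Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resource-group>" `
+    -Name "<storage-account>" -DefaultAction Deny
+
+Add-AzStorageAccountNetworkRule -ResourceGroupName "<resource-group>" `
+    -Name "<storage-account>" -IPAddressOrRange "203.0.113.0/24"
+```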
> [!NOTE]
> Audit tools that attempt to determine TLS support at the protocol layer may return TLS versions in addition to the minimum required version when run directly against the storage account endpoint. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](../common/transport-layer-security-configure-minimum-version.md).
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Now, with SFTP support for Azure Blob Storage, you can enable an SFTP endpoint f
This article describes SFTP support for Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md).
+> [!Note]
+> SFTP is a platform-level service, so port 22 will be open even if the account option is disabled. If SFTP access is not configured, all requests will receive a disconnect from the service.
## SFTP and the hierarchical namespace

SFTP support requires hierarchical namespace to be enabled. Hierarchical namespace organizes objects (files) into a hierarchy of directories and subdirectories in the same way that the file system on your computer is organized. The hierarchical namespace scales linearly and doesn't degrade data capacity or performance.
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-enable.md
Previously updated : 05/05/2022 Last updated : 07/21/2022
Blob soft delete protects an individual blob and its versions, snapshots, and me
Blob soft delete is part of a comprehensive data protection strategy for blob data. To learn more about Microsoft's recommendations for data protection, see [Data protection overview](data-protection-overview.md).
-Blob soft delete is enabled by default for a new storage account. You can enable or disable soft delete for a storage account at any time by using the Azure portal, PowerShell, or Azure CLI.
- ## Enable blob soft delete
+You can enable or disable soft delete for a storage account at any time by using the Azure portal, PowerShell, or Azure CLI.
+ ### [Portal](#tab/azure-portal)
-To enable blob soft delete for your storage account by using the Azure portal, follow these steps:
+Blob soft delete is enabled by default when you create a new storage account with the Azure portal. The setting to enable or disable blob soft delete when you create a new storage account is on the **Data protection** tab. For more information about creating a storage account, see [Create a storage account](../common/storage-account-create.md).
+
+To enable blob soft delete for an existing storage account by using the Azure portal, follow these steps:
1. In the [Azure portal](https://portal.azure.com/), navigate to your storage account. 1. Locate the **Data Protection** option under **Data management**.
To enable blob soft delete for your storage account by using the Azure portal, f
### [PowerShell](#tab/azure-powershell)
-To enable blob soft delete with PowerShell, call the [Enable-AzStorageBlobDeleteRetentionPolicy](/powershell/module/az.storage/enable-azstorageblobdeleteretentionpolicy) command, specifying the retention period in days.
+Blob soft delete is not enabled when you create a new storage account with PowerShell. You can enable blob soft delete after the new account has been created.
+
+To enable blob soft delete for an existing storage account with PowerShell, call the [Enable-AzStorageBlobDeleteRetentionPolicy](/powershell/module/az.storage/enable-azstorageblobdeleteretentionpolicy) command, specifying the retention period in days.
The following example enables blob soft delete and sets the retention period to seven days. Remember to replace the placeholder values in brackets with your own values:
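A minimal sketch of that call might look like the following; the resource group and account names are placeholders:

```azurepowershell
# Enable blob soft delete with a 7-day retention period
Enable-AzStorageBlobDeleteRetentionPolicy `
    -ResourceGroupName <resource-group> `
    -StorageAccountName <storage-account> `
    -RetentionDays 7
```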
$properties.DeleteRetentionPolicy.Days
### [Azure CLI](#tab/azure-CLI)
-To enable blob soft delete with Azure CLI, call the [az storage account blob-service-properties update](/cli/azure/storage/account/blob-service-properties#az-storage-account-blob-service-properties-update) command, specifying the retention period in days.
+Blob soft delete is not enabled when you create a new storage account with Azure CLI. You can enable blob soft delete after the new account has been created.
+
+To enable blob soft delete for an existing storage account with Azure CLI, call the [az storage account blob-service-properties update](/cli/azure/storage/account/blob-service-properties#az-storage-account-blob-service-properties-update) command, specifying the retention period in days.
The following example enables blob soft delete and sets the retention period to seven days. Remember to replace the placeholder values in brackets with your own values:
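A minimal sketch of that call might look like this; the account and resource group names are placeholders:

```azurecli
az storage account blob-service-properties update \
    --account-name <storage-account> \
    --resource-group <resource-group> \
    --enable-delete-retention true \
    --delete-retention-days 7
```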
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
This article lists updates to Azure Synapse Analytics that are published in Apri
## General
-* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](/cognitive-services/) models, AI models from partners, and bring-your-own-data models.
+* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](/azure/cognitive-services/) models, AI models from partners, and bring-your-own-data models.
* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available on Microsoft Docs. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md). ## SQL
This article lists updates to Azure Synapse Analytics that are published in Apri
* **Web Explorer sample gallery** - A great way to learn about a product is to see how it is being used by others. The Web Explorer sample gallery provides end-to-end samples of how customers leverage Synapse Data Explorer for popular use cases such as logs data, metrics data, IoT data, and basic big data examples. Each sample includes the dataset, well-documented queries, and a sample dashboard. To learn more about the sample gallery, read [Azure Data Explorer in 60 minutes with the new samples gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552).
-* **Web Explorer dashboards drill through capabilities** - You can now add drill through capabilities to your Synapse Web Explorer dashboards. The new drill through capabilities allow you to easily jump back and forth between dashboard pages. This is made possible by using a contextual filter to connect your dashboards. Defining these contextual drill throughs is done by editing the visual interactions of the selected tile in your dashboard. To learn more about drill through capabilities, read [Use drillthroughs as dashboard parameters](/data-explorer/dashboard-parameters.md#use-drillthroughs-as-dashboard-parameters).
+* **Web Explorer dashboards drill through capabilities** - You can now add drill through capabilities to your Synapse Web Explorer dashboards. The new drill through capabilities allow you to easily jump back and forth between dashboard pages. This is made possible by using a contextual filter to connect your dashboards. Defining these contextual drill throughs is done by editing the visual interactions of the selected tile in your dashboard. To learn more about drill through capabilities, read [Use drillthroughs as dashboard parameters](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters).
-* **Time Zone settings for Web Explorer** - Being able to display data in different time zones is very powerful. You can now decide to view the data in UTC time, your local time zone, or the time zone of the monitored device/machine. The Time Zone settings of the Web Explorer now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards will be automatically refreshed to present the data with the selected time zone. For more information on time zone settings, read [Change datetime to specific time zone](/data-explorer/web-query-data.md#change-datetime-to-specific-time-zone).
+* **Time Zone settings for Web Explorer** - Being able to display data in different time zones is very powerful. You can now decide to view the data in UTC time, your local time zone, or the time zone of the monitored device/machine. The Time Zone settings of the Web Explorer now apply to both the Query results and to the Dashboard. By changing the time zone, the dashboards will be automatically refreshed to present the data with the selected time zone. For more information on time zone settings, read [Change datetime to specific time zone](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone).
## Data integration
time-series-insights Concepts Streaming Ingress Throughput Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-streaming-ingress-throughput-limits.md
In general, ingress rates are viewed as the factor of the number of devices that
* **Number of devices** × **Event emission frequency** × **Size of each event**. By default, Azure Time Series Insights Gen2 can ingest incoming data at a rate of **up to 1 megabyte per second (MBps) or 1000 events stored per second per Azure Time Series Insights Gen2 environment**. There are additional limitations [per hub partition](./concepts-streaming-ingress-throughput-limits.md#hub-partitions-and-per-partition-limits). Depending on how you've modeled your data, arrays of objects can be split into multiple events stored: [How to know if my array of objects will produce multiple events
-](./concepts-json-flattening-escaping-rules#how-to-know-if-my-array-of-objects-will-produce-multiple-events).
+](./concepts-json-flattening-escaping-rules.md#how-to-know-if-my-array-of-objects-will-produce-multiple-events).
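For illustration only (the device count, emission frequency, and event size here are hypothetical), a fleet of 500 devices that each emit one 800-byte event per second would stay within both limits:

```
500 devices × 1 event/second × 800 bytes = 400,000 bytes/second ≈ 0.4 MBps
500 devices × 1 event/second = 500 events stored per second
```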
> [!TIP] >
For Event Hubs partitioning best practices, review [How many partitions do I nee
Whether you're creating a new hub for your Azure Time Series Insights Gen2 environment or using an existing one, you'll need to calculate your per partition ingestion rate to determine if it's within the limits. Azure Time Series Insights Gen2 currently has a general **per partition limit of 0.5 MBps or 500 events stored per second**. Depending on how you've modeled your data, arrays of objects can be split into multiple events stored: [How to know if my array of objects will produce multiple events
-](./concepts-json-flattening-escaping-rules#how-to-know-if-my-array-of-objects-will-produce-multiple-events).
+](./concepts-json-flattening-escaping-rules.md#how-to-know-if-my-array-of-objects-will-produce-multiple-events).
### IoT Hub-specific considerations
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=%2Fa
| Address | Outbound TCP port | Purpose | Client(s) |
|--|--|--|--|
-| `*.wvd.microsoft.us` | 443 | Service traffic | All |
+| `*.wvd.azure.us` | 443 | Service traffic | All |
| `*.servicebus.usgovcloudapi.net` | 443 | Troubleshooting data | All |
| `go.microsoft.com` | 443 | Microsoft FWLinks | All |
| `aka.ms` | 443 | Microsoft URL shortener | All |
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
The Custom Script Extension for Windows will run on these supported operating sy
* Windows Server 2016 Core * Windows Server 2019 * Windows Server 2019 Core
+* Windows Server 2022
+* Windows Server 2022 Core
* Windows 11 ### Script location
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a manag
Previously updated : 07/07/2022 Last updated : 07/21/2022
If you're providing a backup solution for IaaS VMs in Azure, you should use dire
## Secure uploads with Azure AD (preview)
-> [!IMPORTANT]
-> If Azure AD is being used to enforce upload restrictions, you must use the Azure PowerShell module's [Add-AzVHD command](../windows/disks-upload-vhd-to-managed-disk-powershell.md#secure-uploads-with-azure-ad-preview) to upload a disk. Azure CLI doesn't currently support uploading a disk if Azure AD is being used to enforce upload restrictions.
+If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is currently in preview. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD and confirms that the user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level to ensure that an Azure AD identity has the necessary permissions for uploading before allowing a disk or a disk snapshot to be uploaded. If you have any questions about securing uploads with Azure AD, contact azuredisks@microsoft.com.
+
+### Prerequisites
+- [Install the Azure CLI](/cli/azure/install-azure-cli).
+- Use the following command to enable the preview on your subscription:
+ ```azurecli
+ az feature register --name AllowAADAuthForDataAccess --namespace Microsoft.Compute
+ ```
+
+ It may take some time for the feature registration to complete. You can confirm that it has completed with the following command:
+
+ ```azurecli
+ az feature show --name AllowAADAuthForDataAccess --namespace Microsoft.Compute --output table
+ ```
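Depending on the subscription, you may also need to re-register the resource provider once the feature shows as registered; this is a common follow-up step for preview features, so verify it against the current guidance:

```azurecli
az provider register --namespace Microsoft.Compute
```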
+
+### Restrictions
+
+### Assign RBAC role
+
+To access managed disks secured with Azure AD, the requesting user must have either the [Data Operator for Managed Disks](../../role-based-access-control/built-in-roles.md#data-operator-for-managed-disks) role, or a [custom role](../../role-based-access-control/custom-roles-powershell.md) with the following permissions:
+
+- **Microsoft.Compute/disks/download/action**
+- **Microsoft.Compute/disks/upload/action**
+- **Microsoft.Compute/snapshots/download/action**
+- **Microsoft.Compute/snapshots/upload/action**
+
+For detailed steps on assigning a role, see [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md). To create or update a custom role, see [Create or update Azure custom roles using Azure CLI](../../role-based-access-control/custom-roles-cli.md).
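As a sketch of the custom-role option, the role name below is hypothetical and the permissions mirror the data actions of the built-in Data Operator for Managed Disks role; adjust the assignable scope to your subscription:

```azurecli
az role definition create --role-definition '{
    "Name": "Managed Disk Upload Operator (example)",
    "Description": "Example custom role for uploading and downloading managed disks and snapshots.",
    "Actions": [],
    "DataActions": [
        "Microsoft.Compute/disks/download/action",
        "Microsoft.Compute/disks/upload/action",
        "Microsoft.Compute/snapshots/download/action",
        "Microsoft.Compute/snapshots/upload/action"
    ],
    "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```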
-If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is currently in preview. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD, and confirms that user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level, to ensure that all disks and snapshots must use Azure AD for uploading.
## Get started
az disk create -n <yourdiskname> -g <yourresourcegroupname> -l <yourregion> --os
If you would like to upload either a premium SSD or a standard SSD, replace **standard_lrs** with either **premium_LRS** or **standardssd_lrs**. Ultra disks are not supported for now.
+### (Optional) Grant access to the disk
+
+If you're using Azure AD to secure uploads, you'll need to [assign RBAC permissions](../../role-based-access-control/role-assignments-cli.md) to grant access to the disk and generate a writeable SAS.
+
+```azurecli
+az role assignment create --assignee "{assignee}" \
+--role "{Data Operator for Managed Disks}" \
+--scope "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/{providerName}/{resourceType}/{resourceSubType}/{diskName}"
+```
+ ### Generate writeable SAS Now that you've created an empty managed disk that is configured for the upload process, you can upload a VHD to it. To upload a VHD to the disk, you'll need a writeable SAS, so that you can reference it as the destination for your upload.
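A sketch of generating the writeable SAS with the CLI; the one-day duration is illustrative:

```azurecli
az disk grant-access \
    --name <yourdiskname> \
    --resource-group <yourresourcegroupname> \
    --access-level Write \
    --duration-in-seconds 86400
```

The returned SAS URL is what you reference as the destination for your upload (for example, with AzCopy).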
virtual-machines Vm Applications How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications-how-to.md
Title: Create and deploy VM application packages (preview)
+ Title: Create and deploy VM application packages
description: Learn how to create and deploy VM Applications using an Azure Compute Gallery.
-# Create and deploy VM Applications (preview)
+# Create and deploy VM Applications
VM Applications are a resource type in Azure Compute Gallery (formerly known as Shared Image Gallery) that simplifies management, sharing and global distribution of applications for your virtual machines. > [!IMPORTANT]
-> **VM applications in Azure Compute Gallery** are currently in public preview.
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> **VM applications in Azure Compute Gallery** don't currently support Azure policies during deployment.
## Prerequisites
Set a VM application to an existing VM using [az vm application set](/cli/azure/
az vm application set \ --resource-group myResourceGroup \ --name myVM \app-version-ids /subscriptions/{subID}/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/galleries/myGallery/applications/myApp/versions/1.0.0 \
+ --app-version-ids /subscriptions/{subID}/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/galleries/myGallery/applications/myApp/versions/1.0.0 \
+ --treat-deployment-as-failure true
```
+To set multiple applications on a VM:
+
+```azurecli-interactive
+az vm application set \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --app-version-ids <appversionID1> <appversionID2> \
+ --treat-deployment-as-failure true
+```
+To verify application VM deployment status:
+
+```azurecli-interactive
+az vm get-instance-view -g myResourceGroup -n myVM --query "instanceView.extensions[?name == 'VMAppExtension']"
+```
+To verify application deployment status on a Virtual Machine Scale Set (VMSS):
+
+```azurecli-interactive
+$ids = az vmss list-instances -g myResourceGroup -n $vmssName --query "[*].{id: id, instanceId: instanceId}" | ConvertFrom-Json
+$ids | Foreach-Object {
+ $iid = $_.instanceId
+ Write-Output "instanceId: $iid"
+ az vmss get-instance-view --ids $_.id --query "extensions[?name == 'VMAppExtension']"
+}
+```
+> [!NOTE]
+> The VMSS deployment status command above uses PowerShell syntax. For a precedent, refer to the second [vm-extension-delete](/cli/azure/vm/extension#az-vm-extension-delete-examples) example.
+ ### [PowerShell](#tab/powershell)
New-AzGalleryApplicationVersion `
-GalleryApplicationName $applicationName ` -Name $version ` -PackageFileLink "https://<storage account name>.blob.core.windows.net/<container name>/<filename>" `
+ -DefaultConfigFileLink "https://<storage account name>.blob.core.windows.net/<container name>/<filename>" `
-Location "East US" ` -Install "mv myApp .\myApp\myApp" ` -Remove "rm .\myApp\myApp" `
$applicationName = "myApp"
$vmName = "myVM" $vm = Get-AzVM -ResourceGroupName $rgname -Name $vmName $appversion = Get-AzGalleryApplicationVersion `
- -GalleryApplicationName $applicationname `
- -GalleryName $galleryname `
+ -GalleryApplicationName $applicationName `
+ -GalleryName $galleryName `
-Name $version `
- -ResourceGroupName $rgname
+ -ResourceGroupName $rgName
$packageid = $appversion.Id $app = New-AzVmGalleryApplication -PackageReferenceId $packageid
-Add-AzVmGalleryApplication -VM $vm -GalleryApplication $app
-Update-AzVM -ResourceGroupName $rgname -VM $vm
+Add-AzVmGalleryApplication -VM $vm -GalleryApplication $app -TreatFailureAsDeploymentFailure true
+Update-AzVM -ResourceGroupName $rgName -VM $vm
``` Verify the application succeeded: ```powershell-interactive
-Get-AzVM -ResourceGroupName $rgname -VMName $vmname -Status
+$rgName = "myResourceGroup"
+$vmName = "myVM"
+$result = Get-AzVM -ResourceGroupName $rgName -VMName $vmName -Status
+$result.Extensions | Where-Object {$_.Name -eq "VMAppExtension"} | ConvertTo-Json
``` ### [REST](#tab/rest2)
PUT
{ "order": 1, "packageReferenceId": "/subscriptions/{subscriptionId}/resourceGroups/<resource group>/providers/Microsoft.Compute/galleries/{gallery name}/applications/{application name}/versions/{version}",
- "configurationReference": "{path to configuration storage blob}"
+ "configurationReference": "{path to configuration storage blob}",
+ "treatFailureAsDeploymentFailure": false
} ] }
virtualMachineScaleSets/\<**VMSSName**\>?api-version=2019-03-01
{ "order": 1, "packageReferenceId": "/subscriptions/{subscriptionId}/resourceGroups/<resource group>/providers/Microsoft.Compute/galleries/{gallery name}/applications/{application name}/versions/{version}",
- "configurationReference": "{path to configuration storage blob}"
+ "configurationReference": "{path to configuration storage blob}",
+ "treatFailureAsDeploymentFailure": false
} ] }
virtualMachineScaleSets/\<**VMSSName**\>?api-version=2019-03-01
| order | Optional. The order in which the applications should be deployed. See below. | Valid integer |
| packageReferenceId | A reference to the gallery application version | Valid application version reference |
| configurationReference | Optional. The full URL of a storage blob containing the configuration for this deployment. This will override any value provided for defaultConfiguration earlier. | Valid storage blob reference |
+| treatFailureAsDeploymentFailure | Optional. Provisioning status for the VM app. When set to false, the provisioning status will always show 'succeeded' regardless of app deployment failure. | True or False |
The order field may be used to specify dependencies between applications. The rules for order are the following:
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
Title: Overview of VM Applications in the Azure Compute Gallery (preview)
+ Title: Overview of VM Applications in the Azure Compute Gallery
description: Learn more about VM application packages in an Azure Compute Gallery.
-# VM Applications overview (preview)
+# VM Applications overview
VM Applications are a resource type in Azure Compute Gallery (formerly known as Shared Image Gallery) that simplifies management, sharing, and global distribution of applications for your virtual machines. > [!IMPORTANT]
-> **VM applications in Azure Compute Gallery** are currently in public preview.
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> **VM applications in Azure Compute Gallery** don't currently support Azure policies during deployment.
While you can create an image of a VM with apps pre-installed, you would need to update your image each time you have application changes. Separating your application installation from your VM images means there's no need to publish a new image for every line of code change.
The install/update/remove commands should be written assuming the application pa
## File naming
-During the preview, when the application file gets downloaded to the VM, the file name is the same as the name you use when you create the VM application. For example, if I name my VM application `myApp`, the file that will be downloaded to the VM will also be named `myApp`, regardless of what the file name is used in the storage account. If your VM application also has a configuration file, that file is the name of the application with `_config` appended. If `myApp` has a configuration file, it will be named `myApp_config`.
+When the application file gets downloaded to the VM, the file name is the same as the name you use when you create the VM application. For example, if I name my VM application `myApp`, the file that will be downloaded to the VM will also be named `myApp`, regardless of the file name used in the storage account. If your VM application also has a configuration file, that file is named after the application with `_config` appended. If `myApp` has a configuration file, it will be named `myApp_config`.
For example, if I name my VM application `myApp` when I create it in the Gallery, but it's stored as `myApplication.exe` in the storage account, when it gets downloaded to the VM the file name will be `myApp`. My install string should start by renaming the file to be whatever it needs to be to run on the VM (like myApp.exe).
-The install, update, and remove commands must be written with file naming in mind.
+The install, update, and remove commands must be written with file naming in mind. The `configFileName` setting is the name assigned to the config file on the VM, and `packageFileName` is the name assigned to the downloaded package on the VM. For more information regarding these additional VM settings, refer to [UserArtifactSettings](https://docs.microsoft.com/rest/api/compute/gallery-application-versions/create-or-update?tabs=HTTP#userartifactsettings) in our API docs.
## Command interpreter
Example remove command:
``` rmdir /S /Q C:\\myapp ```
+## Treat failure as deployment failure
+By default, the VM application extension returns *success* regardless of whether any VM app failed while being installed, updated, or removed; it only reports the extension status as a failure when there's a problem with the extension or the underlying infrastructure. To have app-level failures reported as deployment failures, set the "treat failure as deployment failure" flag, which is `$false` by default, to `$true`. The flag can be configured in [PowerShell](/powershell/module/az.compute/add-azvmgalleryapplication#-treatfailureasdeploymentfailure) or [CLI](/cli/azure/vm/application#-treat-deployment-as-failure).
-## Troubleshooting during preview
+## Troubleshooting VM Applications
-During the preview, the VM application extension always returns a success regardless of whether any VM app failed while being installed/updated/removed. The VM Application extension will only report the extension status as failure when there's a problem with the extension or the underlying infrastructure. To know whether a particular VM application was successfully added to the VM instance, check the message of the VM Application extension.
+To know whether a particular VM application was successfully added to the VM instance, check the message of the VM Application extension.
To learn more about getting the status of VM extensions, see [Virtual machine extensions and features for Linux](extensions/features-linux.md#view-extension-status) and [Virtual machine extensions and features for Windows](extensions/features-windows.md#view-extension-status).
Get-AzVM -name <VM name> -ResourceGroupName <resource group name> -Status | conv
To get status of scale set extensions, use [Get-AzVMSS](/powershell/module/az.compute/get-azvmss): ```azurepowershell-interactive
-Get-AzVmss -name <VMSS name> -ResourceGroupName <resource group name> -Status | convertto-json -Depth 10
+$result = Get-AzVmssVM -ResourceGroupName $rgName -VMScaleSetName $vmssName -InstanceView
+$resultSummary = New-Object System.Collections.ArrayList
+$result | ForEach-Object {
+ $res = @{ instanceId = $_.InstanceId; vmappStatus = $_.InstanceView.Extensions | Where-Object {$_.Name -eq "VMAppExtension"}}
+ $resultSummary.Add($res) | Out-Null
+}
+$resultSummary | convertto-json -depth 5
``` ## Error messages
virtual-machines Disks Upload Vhd To Managed Disk Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
If you're providing a backup solution for IaaS VMs in Azure, you should use dire
## Secure uploads with Azure AD (preview)
-If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is currently in preview. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD, and confirms that user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level, to ensure that all disks and snapshots must use Azure AD for uploading. If you have any questions on securing uploads with Azure AD, reach out to this email: azuredisks@microsoft .com
+If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is currently in preview. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD and confirms that the user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level to ensure that an Azure AD identity has the necessary permissions for uploading before allowing a disk or a disk snapshot to be uploaded. If you have any questions about securing uploads with Azure AD, contact azuredisks@microsoft.com.
### Prerequisites [!INCLUDE [disks-azure-ad-upload-download-prereqs](../../../includes/disks-azure-ad-upload-download-prereqs.md)]
For guidance on how to copy a managed disk from one region to another, see [Copy
### Upload a VHD
- > [!IMPORTANT]
-> If Azure AD is being used to enforce upload restrictions, you must use Add-AzVHD to upload a disk. The manual upload process isn't currently supported.
- ### (Optional) Grant access to the disk If Azure AD is used to enforce upload restrictions on a subscription or at the account level, [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) only succeeds if attempted by a user that has the [appropriate RBAC role or necessary permissions](#assign-rbac-role). You'll need to [assign RBAC permissions](../../role-based-access-control/role-assignments-powershell.md) to grant access to the disk and generate a writeable SAS.
New-AzRoleAssignment -SignInName <emailOrUserprincipalname> `
The following example uploads a VHD from your local machine to a new Azure managed disk using [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true). Replace `<your-filepath-here>`, `<your-resource-group-name>`,`<desired-region>`, and `<desired-managed-disk-name>` with your parameters: > [!NOTE]
-> If you're using Auzre AD to enforce upload restrictions, add `DataAccessAuthMode 'AzureActiveDirectory'` to the end of your `Add-AzVhd` command.
+> If you're using Azure AD to enforce upload restrictions, add `DataAccessAuthMode 'AzureActiveDirectory'` to the end of your `Add-AzVhd` command.
```azurepowershell # Required parameters
Replace `<yourdiskname>`, `<yourresourcegroupname>`, and `<yourregion>` then run
> [!TIP] > If you're creating an OS disk, add `-HyperVGeneration '<yourGeneration>'` to `New-AzDiskConfig`.
+>
+> If you're using Azure AD to secure your uploads, add `-dataAccessAuthMode 'AzureActiveDirectory'` to `New-AzDiskConfig`.
```powershell $vhdSizeBytes = (Get-Item "<fullFilePathHere>").length
virtual-machines Oracle Oci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-oci-overview.md
Cross-cloud connectivity is limited to the following regions:
* Azure West US 3 & OCI US West (Phoenix) * Azure Korea Central region & OCI South Korea Central (Seoul) * Azure Southeast Asia region & OCI Singapore (Singapore)
+* Azure Brazil South (BrazilSouth) & OCI Vinhedo (Brazil Southeast)
## Networking
virtual-machines Oracle Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-overview.md
Run enterprise applications in Azure on supported Oracle Linux images. The follo
* Configure [Oracle Data Guard](https://docs.oracle.com/cd/B19306_01/server.102/b14239/concepts.htm#g1049956), [Active Data Guard with FSFO](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dgbkr/https://docsupdatetracker.net/index.html), [Sharding](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/admin/sharding-overview.html) or [Golden Gate](https://www.oracle.com/middleware/technologies/goldengate.html) on Azure infrastructure in conjunction with [Availability Zones](../../../availability-zones/az-overview.md) for high availability in-region. You may also setup these configurations across multiple Azure regions for added availability and disaster recovery.
-* Use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md) to orchestrate and manage disaster recovery for your Oracle Linux VMs in Azure and your physical servers.
+* Use [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md) to orchestrate and manage disaster recovery for your Oracle Linux VMs in Azure and your physical servers, in conjunction with Oracle Data Guard or consistent Oracle backup measures that meet your Recovery Point Objective and Recovery Time Objective (RPO/RTO). Note that Azure Site Recovery has a [block change limit](../../../site-recovery/azure-to-azure-support-matrix.md) for the storage used by the Oracle database.
## Backup Oracle Workloads
virtual-machines Expose Sap Process Orchestration On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure.md
Dispatching approaches range from traditional reverse proxies like Apache, to Pl
## Primary Azure services used
-[Azure Application Gateway](../../../application-gateway/how-application-gateway-works.md) handles public [internet-based](../../../application-gateway/configuration-front-end-ip.md) and/or [internal private](../../../application-gateway/configuration-front-end-ip.md) http routing and [encrypted tunneling across Azure subscriptions](../../../application-gateway/private-link.md), [security](../../../application-gateway/features.md), and [auto-scaling](../../../application-gateway/application-gateway-autoscaling-zone-redundant.md) for instance. Workloads in other virtual networks (VNet) or even Azure Subscriptions that shall communicate with SAP through the app gateway can be connected via [private links](../../../application-gateway/private-link-configure.md). Azure Application Gateway is focused on exposing web applications, hence offers a Web Application Firewall.
+[Azure Application Gateway](../../../application-gateway/how-application-gateway-works.md) handles public [internet-based](../../../application-gateway/configuration-front-end-ip.md) and/or [internal private](../../../application-gateway/configuration-front-end-ip.md) HTTP routing, as well as [encrypted tunneling across Azure subscriptions](../../../application-gateway/private-link.md), [security](../../../application-gateway/features.md), and [auto-scaling](../../../application-gateway/application-gateway-autoscaling-zone-redundant.md). Azure Application Gateway is focused on exposing web applications, and hence offers a Web Application Firewall. Workloads in other virtual networks (VNets) that need to communicate with SAP through the Azure Application Gateway can be connected via [private links](../../../application-gateway/private-link-configure.md), even across tenants.
+ [Azure Firewall](../../../firewall/overview.md) handles public internet-based and/or internal private routing for traffic types on Layer 4-7 of the OSI model. It offers filtering and threat intelligence, which feeds directly from Microsoft Cyber Security.
virtual-machines Monitor Sap On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/monitor-sap-on-azure.md
You can monitor different components of an SAP landscape, such as Azure virtual
The following table provides a quick comparison of the Azure Monitor for SAP solutions (classic) and Azure Monitor for SAP solutions.
-classic
| Azure Monitor for SAP solutions | Azure Monitor for SAP solutions (classic) |
| - | -- |
| Azure Functions-based collector architecture | VM-based collector architecture |
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
Before you decommission a custom IP prefix, ensure it has no public IP prefixes
To migrate a custom IP prefix, it must first be deprovisioned from one region. A new custom IP prefix with the same CIDR can then be created in another region.
+### Status messages
+
+When onboarding or removing a custom IP prefix from Azure, the **FailedReason** attribute of the resource will be updated. If the Azure portal is used, the message will be shown as a top-level banner. The following tables list the status messages when onboarding or removing a custom IP prefix.
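If you prefer the command line, something like the following should surface the same attribute; the property name is taken from the custom IP prefix resource, and its exact casing may differ by tooling version:

```azurecli
az network custom-ip prefix show \
    --name <prefix-name> \
    --resource-group <resource-group> \
    --query failedReason \
    --output tsv
```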
+
+#### Validation failures
+
+| Failure message | Explanation |
+| | -- |
+| CustomerSignatureNotVerified | The signed message cannot be verified against the authentication message using the Whois/RDAP record for the prefix. |
+| NotAuthorizedToAdvertiseThisPrefix </br> or </br> ASN8075NotAllowedToAdvertise | ASN8075 is not authorized to advertise this prefix. Make sure your route origin authorization (ROA) is submitted correctly. Verify ROA. |
+| PrefixRegisteredInAfricaAndSouthAmericaNotAllowedInOtherRegion | IP prefix is registered with AFRINIC or LACNIC. This prefix is not allowed to be used outside Africa/South America. |
+| NotFindRoutingRegistryToGetCertificate | Cannot find the public key for the IP prefix using the registration data access protocol (RDAP) of the regional internet registry (RIR). |
+| CIDRInAuthorizationMessageNotMatchCustomerIP | The CIDR in the authorization message does not match the submitted IP address. |
+| ExpiryDateFormatInvalidOrNotInThefuture | The expiration date provided in the authorization message is in the wrong format or expired. Expected format is yyyymmdd. |
+| AuthMessageFormatInvalid | Authorization message format is not valid. Expected format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx1.2.3.0/24yyyymmdd. |
+| CannotParseValidCertificateFromRIRPage | Cannot parse the public key for the IP prefix using the registration data access protocol (RDAP) of the regional internet registry (RIR). |
+| ROANotFound | Unable to find route origin authorization (ROA) for validation. |
+| CertFromRIRPageExpired | The public key provided by the registration data access protocol (RDAP) of the regional internet registry (RIR) is expired. |
+| InvalidPrefixLengthInROA | The prefix length provided does not match the prefix in the route origin authorization (ROA). |
+| RIRNotSupport | Only prefixes registered at ARIN, RIPE, APNIC, AFRINIC, and LACNIC are supported. |
+| InvalidCIDRFormat | The CIDR format is not valid. Expected format is 10.10.10.0/16. |
+| InvalidCIDRFormatInAuthorizationMessage | The format of the CIDR in the authorization message is not valid. Expected format is xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx1.2.3.0/24yyyymmdd. |
+| OperationFailedPleaseRetryLaterOrContactSupport | Unknown error. Contact support. |
+
+> [!NOTE]
+> Not all the messages shown during the commissioning or decommissioning process indicate failure; some simply provide more granular status.
+
+#### Commission status
+
+| Status message | Explanation |
+| | -- |
+| RegionalCommissioningInProgress | The range is being commissioned to advertise regionally within Azure. |
+| InternetCommissioningInProgress | The range is now advertising regionally within Azure and is being commissioned to advertise to the internet. |
+
+#### Decommission status
+
+| Status message | Explanation |
+| -- | -- |
+| InternetDecommissioningInProgress | The range is currently being decommissioned. The range will no longer be advertised to the internet. |
+| RegionalDecommissioningInProgress | The range is no longer advertised to the internet and is currently being decommissioned. The range will no longer be advertised regionally within Azure. |
+
+#### Commission failures
+
+| Failure message | Explanation |
+| | -- |
+| CommissionFailedRangeNotAdvertised | The range was unable to be advertised regionally within Azure or to the internet. |
+| CommissionFailedRangeRegionallyAdvertised | The range was unable to be advertised to the internet but is being advertised within Azure. |
+| CommissionFailedRangeInternetAdvertised | The range was unable to be advertised optimally but is being advertised to the internet and within Azure. |
+
+#### Decommission failures
+
+| Failure message | Explanation |
+| | -- |
+| DecommissionFailedRangeInternetAdvertised | The range was unable to be decommissioned and is still advertised to the internet and within Azure. |
+| DecommissionFailedRangeRegionallyAdvertised | The range was unable to be decommissioned and is still advertised within Azure but is no longer advertised to the internet. |
+ ## Next steps - To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md).
web-application-firewall Manage Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/manage-policies.md
To navigate to WAF policies, select the **Web Application Firewall Policies** ta
:::image type="content" source="../media/manage-policies/policies.png" alt-text="Screenshot showing Web Application Firewall policies in Firewall Manager." lightbox="../media/manage-policies/policies.png":::
-## Associate of dissociate WAF policies
+## Associate or dissociate WAF policies
In Azure Firewall Manager, you can create and view all WAF policies in your subscriptions. These policies can be associated or dissociated with an application delivery platform. Select the service and then select **Manage Security**.